Columns: id (string, length 5-6), input (string, length 3-301), output (list), meta (null)
7t7zob
how do fraternities work? do they serve any real function to the university?
[ { "answer": "Interesting question. Fraternity is a short for Fraternal Organization. Literally a brotherhood. Like minded, funded, intrests etc\n\nIn the case on campus many began over a century ago as literal acedemic and social clubs. Like any many a club they have standards for admission. Not everyone can join the varsity baseball team,. Marines or country club. \n\nThey have dues, silly rituals, and housing. Members pay rent to live there. They act as small businesses running a rental property and paying for social activities. Yes parties. A frat at one point greased the social wheels allowing the members to meet women. \n\nWhy were/are the women there? Boys I guess. \n\nExclusivity at one time increased their desirability. Like now, people want what they can't have. At their best the provide a center for campus life, housing and academic support for their members. At their worst well we all hear about that \n\nWhat purpose now? Well they still provide the good and the bad but campus values have changed. Isms like elitism is considered bad. Exclusion is considered bad now in our postmodern collegiate reality. \n\nIt works for their members and pisses off more for these reasons given. \n\nIs there a net benefit to the campus? Well if you are truly diverse you have those who believe exclusive is good and those that think bad. \n\nBut that isn't what vox collegeum can accept now. \n\nIt's not clear they ever were a plus or minus for colleges. \n\nLess know fact, up until the 70s most frats had a House Mother, an older woman that lived in the house. I sometimes wonder if many of the bad things we hear about frats would be mediated if that Mom returned", "provenance": null }, { "answer": "This is one that will surely receive different answers depending on who you ask. Interestingly (at least to me), I was a British student who studied in the US for four years and joined a Fraternity in my first year at College, so I feel I have a good view on this subject. Primarily, a fraternity is a brotherhood of men who are bound by the guiding principles and standards that are associated with that particular fraternity. They vary from fraternity to fraternity but usually ground themselves in principles such as brotherhood, education, leadership, etc. \nOn the face of it, it looks like a group of guys who like to drink and party together. Whilst this is essentially true (and surely that’s a large part of college anyway), we used to put on several charitable events for a national charity and would volunteer in the local community as well. We were able to raise a lot of money for those in the community and would bring good pr to our chapter and university. I’m not suggesting we were always saints but the boys were a family away from my family and we had a great time together and I made friends I hope to keep forever. It’s not for everyone but made my time as college a great one!", "provenance": null }, { "answer": "Depending on the university, they can be the core of \"involved\" students. A commuter school, or small school, often has its student government, sports pep rallys/student sections, volunteering, homecoming parades, etc being done by the Greek system. These are sometimes the students that are heavily involved in the extra curricular activities. \n\nThe dbag frat boy cliche stereotype does definitely have truth to it tho, not always, but often. 
", "provenance": null }, { "answer": "Benefits to University:\n\n- Provides a social network for students who join.\n\n- Provides social and extracurricular events for students without the need for university resources.\n\n- Provides opportunities for students to gain experience holding leadership positions.\n\n- Often provides housing, which can be limited on some campuses.\n\n- Depending on a university's relationship with fraternities and sororities, it can provide the university with ways to regulate social events that can't be applied as easily to non-Greek events. \n\n- Fraternities and sororities typically require some amount of philanthropy, which benefit the community and improve a school's reputation. \n\n- Provides for networking opportunities that can help students with their careers.\n\n- Increased donations from alumni. \n\nHarm to University:\n\n- \"Pledging\" a fraternity or sorority often involves hazing, which can mentally and physically distress students. There have even been instances of people dying due to hazing.\n\n- Fraternities and sororities generally throw parties with alcohol, which can lead to irresponsible behavior, injuries, crime, etc. This is bad for students and the university's reputation.\n\n- Associating mainly with one's own brothers or sisters may limit interactions with other students and/or decrease the diversity of people a student gets to know. \n\n- Students may feel pressured to spend unnecessary amounts of time and money on matters related to their chapter.\n\n- Dealing with fraternities and sororities requires time on the part of the university's staff, often requiring hiring people specifically for this purpose. \n\n- Promotion of \"fratty\" culture, which can include immature behavior, sexism, sexual misconduct, excessive alcohol consumption, etc.", "provenance": null }, { "answer": "Social Fraternities (i.e. the Greek System) are social organizations that are supposed to bring together like-minded individuals and give a structured recreational environment to balance the generally individual focus of academic studies. \n\nThe original goal was not to create drinking clubs, but instead provide a semi-organized environment where students could relax, develop social skills, and have a support environment to assist in a student's studies. They were designed to offer benefits to the school through community service, philanthropy, and competitive pride as frats tried to out-perform one another in academics and intramural athletics. \n\nSadly, a significant number of people join them today because of the perceived party lifestyle. In that sense, they can be actually disruptive to the academic mission of the school. \n\nNote that there are academic fraternities to which you gain entrance through school performance or professional area of interest/study. While these may use the Greek letters for their names, they are not part of the \"Greek System\" at a college or university. ", "provenance": null }, { "answer": "Plenty of universities function without them. They have benefits and drawbacks to those who get involved but they make ultimately make no necessary contribution. 
Their functions can be found otherwise.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2536719", "title": "Service fraternities and sororities", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 613, "text": "In the context of the North American student fraternity and sorority system, service fraternities and service sororities comprise a type of organization whose \"primary\" purpose is community service. Members of these organizations are not restricted from joining other types of fraternities. This may be contrasted with professional fraternities, whose primary purpose is to promote the interests of a particular profession, and general or social fraternities, whose primary purposes are generally aimed towards some other aspect, such as the development of character, friendship, leadership, or literary ability.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2536719", "title": "Service fraternities and sororities", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 273, "text": "Service fraternity may refer to any fraternal public service organization, such as the Kiwanis or Rotary International. In Canada and the United States, the term fraternal organization is more common as \"fraternity\" in everyday usage refers to fraternal student societies.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1669998", "title": "Alpha Phi Alpha", "section": "Section::::National programs.\n", "start_paragraph_id": 68, "start_character": 0, "end_paragraph_id": 68, "end_character": 544, "text": "The fraternity provides for charitable endeavors through its Education and Building Foundations, providing academic scholarships and shelter to underprivileged families these projects are managed by fraternity brothers; Broderick McKinney, Kenneth Burnside and Gregory Anderson. The fraternity combines its efforts in conjunction with other philanthropic organizations such as Head Start, Boy Scouts of America, Big Brothers Big Sisters of America, Project Alpha with the March of Dimes, NAACP, Habitat for Humanity, and Fortune 500 companies.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47400013", "title": "Fraternities and sororities", "section": "Section::::Structure and organization.:Common elements.:Governance.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 708, "text": "Individual chapters of fraternities and sororities are largely self-governed by their active (student) members; however, alumni members may retain legal ownership of the fraternity or sorority's property through an alumni chapter or alumni corporation. All of a single fraternity or sorority's chapters are generally grouped together in a national or international organization that sets standards, regulates insignia and ritual, publishes a journal or magazine for all of the chapters of the organization, and has the power to grant and revoke charters to chapters. 
These federal structures are largely governed by alumni members of the fraternity, though with some input from the active (student) members.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "46641525", "title": "University of Virginia Greek life", "section": "Section::::Governance of Greek organizations.:Greek councils.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 365, "text": "BULLET::::- The Inter-Fraternity Council, or IFC, is the oldest of the Greek councils. Founded in 1934, the IFC oversees 32 social fraternities and is led by a governing board that is elected by the brothers of the member fraternities. The IFC works with the Presidents' Council, which consists of fraternity chapter presidents, to govern the fraternity community.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2205788", "title": "K.A.V. Lovania Leuven", "section": "Section::::Structures.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 379, "text": "The fraternity has a legislative (the power to make laws), executive (the power to implement laws) and judiciary (the power to judge and apply punishment when laws are broken) body. All full members make up the legislative body, which elects the executive body. The legislative body also functions as a judiciary body. In this case it assumes the function of an honorary senate.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27838158", "title": "Gamma Alpha Chi", "section": "Section::::History.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 579, "text": "In their vision, the founders designated the purpose of the fraternity to be the following: \"To gather young men into an organization in which membership is not based on any particular race, creed, religion, or social background, but on the values of brotherhood.\" But more specifically, this \"fraternity is to provide service to the brothers of Gamma Alpha Chi, and to our universities, communities, and nation, to our fullest capacity, and we will practice the highest forms of brotherhood amongst ourselves, our fellow fraternities and sororities, and to the general public.\"\n", "bleu_score": null, "meta": null } ] } ]
null
327gim
Why would decreasing the extracellular concentration of Na+ cause the membrane potential to increase (get more negative)?
[ { "answer": "To start with, your terminology is out. Getting more negative is a decrease in membrane potential. But it is quite confusing, that's why we always try to say depolarize or hyperpolarize.\n\nIn response to your question, it is a little hard to know why in YOUR simulation changing [Na+]o (extracellular sodium concentration) changed the resting membrane potential. But in general it boils down to the reversal potential of a given ion.\n\nThe reversal potential of an ion is the potential at which the ion bulk direction of travel, either into or out of the cell, reverses. Or more simply, the reversal potential is the potential the ion tries to pull the cell to, when it flows. That is to say, that if the reversal potential for potassium was -90 mV, when potassium channels opened, they would try to pull the cell to -90 mV.\n\nThe reversal potential of any ion channel that is permiant for a single ion (X) is given by the Nernst equation, which is:\n\n RT/zF * ln ( [X]o / [X]i )\n\nYou can look up what those various constants mean, but ultimately it boils down to the fact that when you change the intracellular or extracellular concentrations of ions, you change its reversal potential. So for Sodium, when you have 150 mM outside the cell (and probably about 10 mM inside), the reversal potential for Na+ is +70 mV. When you changed the extracellular concentration to 30 mM, the reversal potential dropped to about +30 mV.\n\nWe can calculate the current that flows into a cell with the following simple equation:\n\n i = G*(Vm-Ve) Where G = conductance, Vm = Membrane potential and Ve = the reversal potential\n\nThus you can see, that by changing the reversal potential, and in this case, by bringing it closer to Vm, we reduce the magnitude of the sodium current. I don't know what the total sodium conductance in your cell is, but it must have been something. Whether that conductance was due to a persistent sodium channel, or perhaps the Ih channel, I don't know. But ultimately, with a smaller sodium current, you remove s slight depolarizing influence.\n\nTLDR. Changing the extacellular concentration of any ion, changes its reversal potential, and hence the force driving the ion into/out of the cell. This means greater/lesser current, and hence a change in membrane potential.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "563161", "title": "Membrane potential", "section": "Section::::Graded potentials.\n", "start_paragraph_id": 69, "start_character": 0, "end_paragraph_id": 69, "end_character": 792, "text": "As can be derived from the Goldman equation shown above, the effect of increasing the permeability of a membrane to a particular type of ion shifts the membrane potential toward the reversal potential for that ion. Thus, opening Na channels shifts the membrane potential toward the Na reversal potential, which is usually around +100 mV. Likewise, opening K channels shifts the membrane potential toward about –90 mV, and opening Cl channels shifts it toward about –70 mV (resting potential of most membranes). 
Thus, Na channels shift the membrane potential in a positive direction, K channels shift it in a negative direction (except when the membrane is hyperpolarized to a value more negative than the K reversal potential), and Cl channels tend to shift it towards the resting potential.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "563161", "title": "Membrane potential", "section": "Section::::Graded potentials.\n", "start_paragraph_id": 68, "start_character": 0, "end_paragraph_id": 68, "end_character": 946, "text": "As explained above, the potential at any point in a cell's membrane is determined by the ion concentration differences between the intracellular and extracellular areas, and by the permeability of the membrane to each type of ion. The ion concentrations do not normally change very quickly (with the exception of Ca, where the baseline intracellular concentration is so low that even a small influx may increase it by orders of magnitude), but the permeabilities of the ions can change in a fraction of a millisecond, as a result of activation of ligand-gated ion channels. The change in membrane potential can be either large or small, depending on how many ion channels are activated and what type they are, and can be either long or short, depending on the lengths of time that the channels remain open. Changes of this type are referred to as graded potentials, in contrast to action potentials, which have a fixed amplitude and time course.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "741847", "title": "Hyperkalemia", "section": "Section::::Mechanism.:Elevated potassium.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 805, "text": "Increased extracellular potassium levels result in depolarisation of the membrane potentials of cells due to the increase in the equilibrium potential of potassium. This depolarisation opens some voltage-gated sodium channels, but also increases the inactivation at the same time. Since depolarisation due to concentration change is slow, it never generates an action potential by itself; instead, it results in accommodation. Above a certain level of potassium the depolarisation inactivates sodium channels, opens potassium channels, thus the cells become refractory. This leads to the impairment of neuromuscular, cardiac, and gastrointestinal organ systems. Of most concern is the impairment of cardiac conduction, which can cause ventricular fibrillation, abnormally slow heart rhythms, or asystole.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "50662306", "title": "Ion channel hypothesis of Alzheimer's disease", "section": "Section::::Mechanism of action.:Ionic leakage.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 487, "text": "The large, poorly selective, and long-lived nature of Aβ channels allows rapid degradation of membrane potential in neurons. A single Aβ channel 4 nS in size can cause Na concentration to change as much as 10 μM/s. Degradation of membrane potential in this manner also generates additional Ca influx through voltage-sensitive Ca channels in the plasma membrane. 
Ionic leakage alone has been demonstrated to be sufficient to rapidly disrupt cellular homeostasis and induce cell necrosis.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "857170", "title": "Cardiac action potential", "section": "Section::::Channels.:Potassium channels.\n", "start_paragraph_id": 48, "start_character": 0, "end_paragraph_id": 48, "end_character": 770, "text": "Inwardly rectifying potassium channels (Kir) favour the flow of K+ into the cell. This influx of potassium, however, is larger when the membrane potential is more negative than the equilibrium potential for K+ (~-90mV). As the membrane potential becomes more positive (i.e. during cell stimulation from a neighbouring cell), the flow of potassium into the cell via the Kir decreases. Therefore, Kir is responsible for maintaining the resting membrane potential and initiating the depolarization phase. However, as the membrane potential continues to become more positive, the channel begins to allow the passage of K+ \"out\" of the cell. This outward flow of potassium ions at the more positive membrane potentials means that the Kir can also aid the final stages of repolarisation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25260082", "title": "Depolarizing prepulse", "section": "Section::::Depolarizing prepulse properties.:DPP amplitude.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 274, "text": "As the DPP amplitude is increased from zero to near threshold, the resulting increase in threshold current will grow as well. This is because the higher amplitude activates more sodium channels, thus allowing more channels to become inactivated by their III and IV domains.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "578038", "title": "Reversal potential", "section": "Section::::Use in research.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 574, "text": "This line of reasoning led to the development of experiments (by Akira Takeuchi and Noriko Takeuchi in 1960) that demonstrated that acetylcholine-activated ion channels are approximately equally permeable to Na+ and K+ ions. The experiment was performed by lowering the external Na+ concentration, which lowers (makes more negative) the Na+ equilibrium potential and produces a negative shift in reversal potential. Conversely, increasing the external K+ concentration raises (makes more positive) the K+ equilibrium potential and produces a positive shift in reversal potential.\n", "bleu_score": null, "meta": null } ] } ]
null
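The top answer above reduces the effect to two formulas: the Nernst reversal potential, (RT/zF) * ln([X]o / [X]i), and the driving-force current i = G*(Vm - Ve). A minimal Python sketch of that arithmetic follows; the temperature, intracellular concentration, conductance, and resting potential values are illustrative assumptions, not taken from the questioner's simulation:

```python
import math

# Constants for the Nernst equation E = (RT/zF) * ln([X]o / [X]i)
R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol
T = 310.0    # assumed temperature (~37 C), in K
z = 1        # valence of Na+

def nernst_mV(out_mM, in_mM):
    """Reversal potential in mV for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(out_mM / in_mM)

Vm = -70.0  # assumed resting membrane potential, mV
G = 1.0     # assumed total Na+ conductance, nS (nS * mV -> pA)

for na_out in (150.0, 30.0):          # the two [Na+]o values discussed
    Ve = nernst_mV(na_out, 10.0)      # [Na+]i assumed to be 10 mM
    i = G * (Vm - Ve)                 # driving-force current i = G*(Vm - Ve)
    print(f"[Na+]o = {na_out:5.1f} mM -> E_Na = {Ve:+6.1f} mV, i_Na = {i:+7.1f} pA")
```

Running it reproduces the numbers in the answer: E_Na falls from about +72 mV to about +29 mV when [Na+]o drops from 150 mM to 30 mM, so the inward (negative, by this sign convention) sodium current shrinks and a slight depolarizing influence is removed.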
248vhd
Why was Unit 731 commissioned and did Japan ever intend on using their "research"?
[ { "answer": "You can get the US gov't documents here.\n\n_URL_0_\n\nPages 32-34 and 46-49 gives summaries of the activities during the investigation.\n\nPages 53-55 gives a Q & A of Unit 731 in 1995.\n\nIt's a US gov't report so it's very concerned about what happened to US PoWs.\n\nYou should read the documents for yourself but here's my summary:\nUnit 731 began experiments in 1932 on Biological Warfare in order to defend against a possible BW attack. This phase included human experiments such as figuring out the minimal dosages necessary for infection and for lethality. (The BW experiments were not conducted on US PoWs, but rather on Chinese criminal sentenced to death, at least 3000 were subjected to these horrific experimentations.)\n\nUnit 731's General Ishii Shiro then began to experiment on possible uses of BW as an offensive tool. This phase (1940-1941) starts the field tests, such as artillery shells and bombs with BW agents and crop destruction in China. So Chinese civilians (!) and soldiers are subjected to these tests a total of 12 times. Mostly unsuccessfully, thankfully: a total of 25946 people were infected after 6 tests according to data from the papers of Kaneko Junichi, a Japanese doctor who was part of Unit 731. [I don't know how many of them subsequently died.] \n\n_URL_1_\n\nTo me it seems clear that this is following a very similar pattern to many military research. \"The enemy has a devilish plan! We must defend against it by making our own!\" You make out the enemy to be inhuman, and in the process you yourself become inhuman.\n\nFinal point: Some people have accused the US of using BW during the Korean War and that because of this, there's massive cover up of the activities of Unit 731 even today. (The US government gave Unit 731 immunity from war crime prosecutions in exchange for their data.) There's no way to know for sure if the cover up is still going on or if all the information has been released, so for now I will only go with the documentary evidence that we have available instead of the hearsay that may or may not be accurate.\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "214659", "title": "Unit 731", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 565, "text": ", also referred to as Detachment 731, the 731 Regiment, Manshu Detachment 731, The Kamo Detachment, or the Ishii Company, was a covert biological and chemical warfare research and development unit of the Imperial Japanese Army that undertook lethal human experimentation during the Second Sino-Japanese War (1937–1945) of World War II. It was responsible for some of the most notorious war crimes carried out by Imperial Japan. Unit 731 was based at the Pingfang district of Harbin, the largest city in the Japanese puppet state of Manchukuo (now Northeast China).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21092366", "title": "Organization of the Imperial Japanese Army", "section": "Section::::Divisional.:Other units.:Unit 731.\n", "start_paragraph_id": 81, "start_character": 0, "end_paragraph_id": 81, "end_character": 516, "text": "Unit 731 were covert medical experiment units which conducted biological warfare research and development through human experimentation during the Second Sino-Japanese War (1937–1945) and World War II. Unit 731 responsible for some of the most notorious war crimes. 
Initially set up as a political and ideological section of the Kempeitai military police of pre-Pacific War Japan, they were meant to counter the ideological or political influence of Japan's enemies, and to reinforce the ideology of military units.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29603647", "title": "Allegations of biological warfare in the Korean War", "section": "Section::::Background.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 734, "text": "Until the end of World War II, Japan operated a covert biological and chemical warfare research and development unit called Unit 731 in Harbin. The unit's activities, including human experimentation, were documented by the Khabarovsk War Crime Trials conducted by the Soviet Union in December 1949. However, at that time, the US government described the Khabarovsk trials as \"vicious and unfounded propaganda\". It was later revealed that the accusations made against the Japanese military were correct. The US government had taken over the research at the end of the war and had then covered up the program. Leaders of Unit 731 were exempted from war crimes prosecution by the United States and then placed on the payroll of the US. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29272570", "title": "Number Nine Research Laboratory", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 655, "text": "The , also called the , was a military development laboratory run by the Imperial Japanese Army from 1937 to 1945. The lab, based in Noborito, Tama-ku, Kawasaki, Kanagawa Prefecture, Japan focused on clandestine activities and unconventional warfare, including energy weapons, intelligence and spycraft tools, chemical and biological weapons, poisons, and currency counterfeiting. One of the weapons developed by the lab was the fire balloon, thousands of which were launched against the United States in 1944 and 1945. The unit, which at its peak was staffed by 1,000 scientists and workers, was disbanded upon Japan's defeat at the end of World War II.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44790848", "title": "Operation Cherry Blossoms at Night", "section": "Section::::Background.\n", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 861, "text": "Unit 731 was specifically created by the Japanese military in Harbin, China (then part of Japanese-occupied Manchukuo) for researching biological and chemical warfare, by carrying out human experimentation on people of all ages. During the Second Sino-Japanese War and later World War II, the Japanese had encased bubonic plague, cholera, smallpox, botulism, anthrax, and other diseases into bombs where they were routinely dropped on Chinese combatants and non-combatants. According to the 2002 \"International Symposium on the Crimes of Bacteriological Warfare\", the number of people killed by the Imperial Japanese Army germ warfare and human experiments was around 580,000. 
According to other sources, \"tens of thousands, and perhaps as many as 400,000 Chinese died of bubonic plague, cholera, anthrax and other diseases\" from the use of biological warfare.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29603647", "title": "Allegations of biological warfare in the Korean War", "section": "Section::::Subsequent evaluation.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 358, "text": "Published in Japan in 2001, the book \"Rikugun Noborito Kenkyujo no shinjitsu\" or \"The Truth About the Army Noborito Institute\" revealed that members of Japan's Unit 731 also worked for the \"chemical section\" of a U.S. clandestine unit hidden within Yokosuka Naval Base during the Korean War as well as on projects inside the United States from 1955 to 1959.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "400772", "title": "Vivisection", "section": "Section::::Human vivisection.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 390, "text": "Unit 731, a biological and chemical warfare research and development unit of the Imperial Japanese Army, undertook lethal human experimentation during the period that comprised both the Second Sino-Japanese War, and the Second World War (1937–1945). In Mindanao, Moro Muslim prisoners of war were subjected to various forms of vivisection by the Japanese, in many cases without anesthesia.\n", "bleu_score": null, "meta": null } ] } ]
null
2g74tv
In general, how were utilities (plumbing, electricity, gas) handled in the United States during the late 19th and 20th centuries?
[ { "answer": "The modern induction type electromechanical watt-hour meter was invented for the Westinghouse corporation in 1894. Prior to that there were several different designs for metering electricity running back to Samuel Gardiner who invented a meter that measured how long electricity was applied to the load (it didn't measure how much power was used just when power was used) and to Thomas Edison who in 1881 developed a meter for his DC power system.\n\nRead all about it [here](_URL_0_)", "provenance": null }, { "answer": "In the US metering began very early, coin operated meters were not uncommon and were popular in older urban buildings for sub-metering purposes. The utility meter reader would collect the coins on his rounds. \n\nUtilities were very 'consolidated' the company I currently work for grew by developing trolley lines to new suburbs, running the gas and electric to the area, financing home building then selling appliances, electric, gas and transportation to those new homeowners.\n\nFor what its worth unmetered, flat fee service still exists for niche markets (area lighting and various body politic services). On the other side of the spectrum you have interval billing which fluctuates minute to minute and measures not just total KWH but capacity factors as well net billing which allows selling back into the grid. \n\nWhat has definitely changed is the format of the bill. I've seen many 60+ year old electric bills (apparently at one time it was fashionable for people to save the first electric bill after they bought their first home). They would just display your current and previous reading and the total amount owed. No rate breakdowns, no explanations or long complex fee structures just a simple PAY X by Z. They are about the size of a postcard.\n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "39300445", "title": "Hoboken Fire Department", "section": "Section::::History.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 276, "text": "The 1860s saw the creation of a public water system providing firefighters with a source of water carried via wooden mains that could be accessed by boring a hole in them. Each of the pumpers carried a short pipe that was designed to be pushed into the hole to deliver water.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "551731", "title": "Electrification", "section": "Section::::History of electrification.:Electrical grid.\n", "start_paragraph_id": 60, "start_character": 0, "end_paragraph_id": 60, "end_character": 397, "text": "In the United States it became a national objective after the power crisis during the summer of 1918 in the midst of World War I to consolidate supply. In 1934 the Public Utility Holding Company Act recognized electric utilities as public goods of importance along with gas, water, and telephone companies and thereby were given outlined restrictions and regulatory oversight of their operations.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24075355", "title": "Henry Latham Doherty", "section": "Section::::Business career.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 409, "text": "The Public Utility Holding Company Act of 1935 (PUHCA) required that a company like Cities Service divest itself of either its electric utility holdings or its other energy companies. Cities Service chose to sell off its utilities and remain in the oil and gas business. 
The first steps to liquidate investments in its public utilities were taken in 1943 and affected over 250 different utility corporations.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8611659", "title": "Oregon Public Utility Commission", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 488, "text": "The first regulation of a public utility was effected in 1874 when the Oregon Legislative Assembly passed a law regulating rates and procedures for the gas distribution business of Al Zeiber in Portland. His primary contract was with the city for its gas street lamps. The agency, or its predecessors including the Public Service Commission, have been charged with a wide variety regulatory duties, encompassing industries as diverse as timber rafting to intrastate rail and bus service.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "692722", "title": "Pacific Gas and Electric Company", "section": "Section::::History.:Early history.:San Francisco Gas and Electric.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 679, "text": "In 1896, the Edison Light and Power Company merged with the San Francisco Gas Light Company to form the new San Francisco Gas and Electric Company.Consolidation of gas and electric companies solved problems for both utilities by eliminating competition and producing economic savings through joint operation. Other companies that began operation as active competitors but eventually merged into the San Francisco Gas and Electric Company included the Equitable Gas Light Company, the Independent Electric Light and Power Company, and the Independent Gas and Power Company. In 1903, the company purchased its main competitor for gas lighting, the Pacific Gas Improvement Company.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20344155", "title": "Electrical grid", "section": "Section::::History.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 1046, "text": "In the United States in the 1920s, utilities formed joint-operations to share peak load coverage and backup power. In 1934, with the passage of the Public Utility Holding Company Act (USA), electric utilities were recognized as public goods of importance and were given outlined restrictions and regulatory oversight of their operations. The Energy Policy Act of 1992 required transmission line owners to allow electric generation companies open access to their network and led to a restructuring of how the electric industry operated in an effort to create competition in power generation. No longer were electric utilities built as vertical monopolies, where generation, transmission and distribution were handled by a single company. Now, the three stages could be split among various companies, in an effort to provide fair accessibility to high voltage transmission. The Energy Policy Act of 2005 allowed incentives and loan guarantees for alternative energy production and advance innovative technologies that avoided greenhouse emissions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51252717", "title": "North American power transmission grid", "section": "Section::::History.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 1046, "text": "In the United States in the 1920s, utilities formed joint-operations to share peak load coverage and backup power. 
In 1934, with the passage of the Public Utility Holding Company Act (USA), electric utilities were recognized as public goods of importance and were given outlined restrictions and regulatory oversight of their operations. The Energy Policy Act of 1992 required transmission line owners to allow electric generation companies open access to their network and led to a restructuring of how the electric industry operated in an effort to create competition in power generation. No longer were electric utilities built as vertical monopolies, where generation, transmission and distribution were handled by a single company. Now, the three stages could be split among various companies, in an effort to provide fair accessibility to high voltage transmission. The Energy Policy Act of 2005 allowed incentives and loan guarantees for alternative energy production and advance innovative technologies that avoided greenhouse emissions.\n", "bleu_score": null, "meta": null } ] } ]
null
5m0h55
why does coconut oil and other oils soak into some people's skin, and sit on top of others?
[ { "answer": "Do they? I have never heard of this", "provenance": null }, { "answer": "Now that you mention it...Coconut oil will absorb on my upper body but just sits on my legs. It doesn't even help with the ash. You can see the ash under the oil if you look closely. ", "provenance": null }, { "answer": "I figure it depends on the oil used, and how much of it they use. AFAIK, oils do not get \"absorbed\", they just stick to your skin. I think this isn't the case for oils, but for creams and other such products, they have a big amount of water in it. The water eventually evaporates and all that remains is a thin layer of fat which prevents water from evaporating quickly, keeping your skin moist, thus preventing dryness. That's why your skin gets drier after taking a shower: you've taken all the fat from your skin so you dry quickly.", "provenance": null }, { "answer": "My understanding is that our skin is very porous and will absorb stuff put on it. That's why nicotine strips work. Might not work as well if you sweat a lot.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "239915", "title": "Coconut oil", "section": "Section::::Health concerns.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 405, "text": "Many health organizations advise against the consumption of coconut oil due to its high levels of saturated fat, including the United States Food and Drug Administration, World Health Organization, the United States Department of Health and Human Services, American Dietetic Association, American Heart Association, British National Health Service, British Nutrition Foundation, and Dietitians of Canada.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "239915", "title": "Coconut oil", "section": "Section::::Health concerns.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 1316, "text": "Marketing of coconut oil has created the inaccurate belief that it is a \"healthy food\". Instead, studies have found that coconut oil consumption has health effects similar to those of other unhealthy fats, including butter, beef fat and palm oil. Coconut oil contains a high amount of lauric acid, a saturated fat that raises total blood cholesterol levels by increasing both the amount of high-density lipoprotein (HDL) cholesterol and low-density lipoprotein (LDL) cholesterol. Although lauric acid consumption may create a more favorable total blood cholesterol profile, this does not exclude the possibility that persistent consumption of coconut oil may actually increase the risk of cardiovascular disease through other mechanisms, particularly via the marked increase in total blood cholesterol induced by lauric acid. Because the majority of saturated fat in coconut oil is lauric acid, coconut oil may be preferred over partially hydrogenated vegetable oil when solid fats are used in the diet. However, the weight of evidence to date indicates that consuming polyunsaturated fats instead of coconut oil would reduce the risk of cardiovascular diseases. 
Due to its high content of saturated fat with corresponding high caloric burden, regular use of coconut oil in food preparation may promote weight gain.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "239915", "title": "Coconut oil", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 388, "text": "Due to its high levels of saturated fat, the World Health Organization, the United States Department of Health and Human Services, United States Food and Drug Administration, American Heart Association, American Dietetic Association, British National Health Service, British Nutrition Foundation, and Dietitians of Canada advise that coconut oil consumption should be limited or avoided.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "239915", "title": "Coconut oil", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 344, "text": "Coconut oil, or copra oil, is an edible oil extracted from the kernel or meat of mature coconuts harvested from the coconut palm (\"Cocos nucifera\"). It has various applications. Because of its high saturated fat content, it is slow to oxidize and, thus, resistant to rancidification, lasting up to six months at 24 °C (75 °F) without spoiling.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "239915", "title": "Coconut oil", "section": "Section::::Uses.:In food.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 422, "text": "Despite its high saturated fat content, coconut oil is commonly used in baked goods, pastries, and sautés, having a \"haunting, nutty\", flavor with a touch of sweetness. Used by movie theatre chains to pop popcorn, coconut oil adds considerable saturated fat and calories to the snackfood while enhancing flavor, possibly a factor increasing further consumption of high-calorie snackfoods, energy balance, and weight gain.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "239915", "title": "Coconut oil", "section": "Section::::Uses.:Industry.\n", "start_paragraph_id": 45, "start_character": 0, "end_paragraph_id": 45, "end_character": 267, "text": "Coconut oil has been tested for use as an engine lubricant and as a transformer oil. Coconut oil (and derivatives, such as coconut fatty acid) are used as raw materials in the manufacture of surfactants such as cocamidopropyl betaine, cocamide MEA, and cocamide DEA.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "57561", "title": "Palm oil", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 512, "text": "Along with coconut oil, palm oil is one of the few highly saturated vegetable fats and is semisolid at room temperature. Palm oil is a common cooking ingredient in the tropical belt of Africa, Southeast Asia and parts of Brazil. Its use in the commercial food industry in other parts of the world is widespread because of its lower cost and the high oxidative stability (saturation) of the refined product when used for frying. One source reported that humans consumed an average of palm oil per person in 2015.\n", "bleu_score": null, "meta": null } ] } ]
null
10u99h
If stars emit light and planets don't, how do we discover new planets? Their reflection of their nearest stars?
[ { "answer": "[Wobble and transit.](_URL_0_)\n\nIn the first one, the gravitational effects of a planet-sun coupling cause a \"wobble\" that permits detection from afar.\n\nIn the second one, the planet's orbit is such that it goes between the distant star and the observer; this 'transit' blocks some of the light on a regular basis.", "provenance": null }, { "answer": "There are a few ways. Here's what [Wikipedia](_URL_0_) has to say on the topic.\n\nHere's one common way.\n\nIf the plane of the orbit of the planet is aligned correctly, we can see the planet pass in front of the star. This partially eclipses the star and we can detect the decrease in light.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "7290120", "title": "Methods of detecting exoplanets", "section": "Section::::Established detection methods.:Direct imaging.\n", "start_paragraph_id": 74, "start_character": 0, "end_paragraph_id": 74, "end_character": 1094, "text": "Planets are extremely faint light sources compared to stars, and what little light comes from them tends to be lost in the glare from their parent star. So in general, it is very difficult to detect and resolve them directly from their host star. Planets orbiting far enough from stars to be resolved reflect very little starlight, so planets are detected through their thermal emission instead. It is easier to obtain images when the star system is relatively near to the Sun, and when the planet is especially large (considerably larger than Jupiter), widely separated from its parent star, and hot so that it emits intense infrared radiation; images have then been made in the infrared, where the planet is brighter than it is at visible wavelengths. Coronagraphs are used to block light from the star, while leaving the planet visible. Direct imaging of an Earth-like exoplanet requires extreme optothermal stability. During the accretion phase of planetary formation, the star-planet contrast may be even better in H alpha than it is in infrared – an H alpha survey is currently underway.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7290120", "title": "Methods of detecting exoplanets", "section": "Section::::Established detection methods.:Reflection/Emission Modulations.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 1649, "text": "Short-period planets in close orbits around their stars will undergo reflected light variations because, like the Moon, they will go through phases from full to new and back again. In addition, as these planets receive a lot of starlight, it heats them, making thermal emissions potentially detectable. Since telescopes cannot resolve the planet from the star, they see only the combined light, and the brightness of the host star seems to change over each orbit in a periodic manner. Although the effect is small — the photometric precision required is about the same as to detect an Earth-sized planet in transit across a solar-type star – such Jupiter-sized planets with an orbital period of a few days are detectable by space telescopes such as the Kepler Space Observatory. Like with the transit method, it is easier to detect large planets orbiting close to their parent star than other planets as these planets catch more light from their parent star. 
When a planet has a high albedo and is situated around a relatively luminous star, its light variations are easier to detect in visible light while darker planets or planets around low-temperature stars are more easily detectable with infrared light with this method. In the long run, this method may find the most planets that will be discovered by that mission because the reflected light variation with orbital phase is largely independent of orbital inclination and does not require the planet to pass in front of the disk of the star. It still cannot detect planets with circular face-on orbits from Earth's viewpoint as the amount of reflected light does not change during its orbit.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "849815", "title": "Kepler space telescope", "section": "Section::::Planet finding process.:Confirming planet candidates.:Through other detection methods.\n", "start_paragraph_id": 77, "start_character": 0, "end_paragraph_id": 77, "end_character": 1187, "text": "In addition to transits, planets orbiting around their stars undergo reflected-light variations—like the Moon, they go through phases from full to new and back again. Because Kepler cannot resolve the planet from the star, it sees only the combined light, and the brightness of the host star seems to change over each orbit in a periodic manner. Although the effect is small—the photometric precision required to see a close-in giant planet is about the same as to detect an Earth-sized planet in transit across a solar-type star—Jupiter-sized planets with an orbital period of a few days or less are detectable by sensitive space telescopes such as Kepler. In the long run, this method may help find more planets than the transit method, because the reflected light variation with orbital phase is largely independent of the planet's orbital inclination, and does not require the planet to pass in front of the disk of the star. In addition, the phase function of a giant planet is also a function of its thermal properties and atmosphere, if any. Therefore, the phase curve may constrain other planetary properties, such as the particle size distribution of the atmospheric particles.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7290120", "title": "Methods of detecting exoplanets", "section": "Section::::Established detection methods.:Relativistic beaming.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 582, "text": "One of the biggest disadvantages of this method is that the light variation effect is very small. A Jovian-mass planet orbiting 0.025 AU away from a Sun-like star is barely detectable even when the orbit is edge-on. This is not an ideal method for discovering new planets, as the amount of emitted and reflected starlight from the planet is usually much larger than light variations due to relativistic beaming. This method is still useful, however, as it allows for measurement of the planet's mass without the need for follow-up data collection from radial velocity observations.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11530435", "title": "XO Project", "section": "Section::::Duties.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 555, "text": "Preliminary identification of possible star candidates starts at the Haleakala telescope in Hawaii by a team of professional astronomers. 
Once they identify a star that dims slightly from time to time, the information is forwarded to a team of amateur astronomers who then investigate for additional evidence suggesting this dimming is caused by a transiting planet. Once enough data is collected, it is forwarded to the University of Texas McDonald Observatory to confirm the presence of a transiting planet by a second team of professional astronomers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "63025", "title": "Variable star", "section": "Section::::Extrinsic variable stars.:Planetary transits.\n", "start_paragraph_id": 177, "start_character": 0, "end_paragraph_id": 177, "end_character": 366, "text": "Stars with planets may also show brightness variations if their planets pass between Earth and the star. These variations are much smaller than those seen with stellar companions and are only detectable with extremely accurate observations. Examples include HD 209458 and GSC 02652-01324, and all of the planets and planet candidates detected by the Kepler Mission.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "48201905", "title": "OGLE-2014-BLG-0124Lb", "section": "Section::::Technique.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 475, "text": "The two telescopes, OGLE and Spitzer, discovered the planet through gravitational microlensing. This is done by observing when the star passes between Earth and another star. The distance at which the star is seen allows us to observe gravity bending the light and the change of brightness shows the existence of the star. If there is a planet orbiting the star, then the astronomer will also see the same thing twice, which helped astronomers discover OGLE-2014-BLG-0124Lb.\n", "bleu_score": null, "meta": null } ] } ]
null
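The transit method described in the provenance passages above comes down to simple geometry: a transiting planet blocks a fraction of the stellar disk roughly equal to (R_planet / R_star)^2. A minimal Python sketch under that assumption, using a Sun-like star and textbook radii; a real pipeline would also fit limb darkening, transit duration, and period:

```python
# Fractional transit depth: the planet blocks (R_planet / R_star)^2
# of the stellar disk. Radii below are standard textbook values.
R_SUN = 696_340.0     # km
R_JUPITER = 69_911.0  # km
R_EARTH = 6_371.0     # km

def transit_depth(r_planet_km, r_star_km=R_SUN):
    """Fractional brightness dip for a central transit (no limb darkening)."""
    return (r_planet_km / r_star_km) ** 2

print(f"Jupiter-size planet: {transit_depth(R_JUPITER):.4%} dip")  # ~1%
print(f"Earth-size planet:   {transit_depth(R_EARTH):.4%} dip")    # ~0.008%
```

The ~1% dip for a Jupiter-size planet is why ground surveys like the XO Project quoted above can find hot Jupiters, while the ~84 ppm dip of an Earth analog required the photometric precision of space telescopes such as Kepler.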
rxrrw
Why do objects in space tumble when rotated on a certain axis?
[ { "answer": "Objects will appear to \"tumble\" if they are not being rotated about one of their three \"principle axes.\" If you rotate an object about some arbitrary axis, then the angular momentum vector will not, in general, be in the same direction as the rotation vector. Because the angular momentum vector must be conserved, the rotation axis changes to keep the angular momentum vector pointing in the same direction and the object appears to \"tumble.\" If you happen to rotate the object about one of its principle axes (such as the deck of cards at the beginning of the second video), then the rotation and angular momentum vectors are aligned and the object does not \"tumble.\"", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "55870511", "title": "List of tumblers (small Solar System bodies)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 480, "text": "This is a list of tumblers, minor planets, comets and natural satellites that rotate on a non-principal axis, commonly known as \"tumbling\" or \"wobbling\". As of 2018, there are 3 natural satellites and 198 confirmed or likely tumblers out of a total of nearly 800,000 discovered small Solar System bodies. The data is sourced from the \"Lightcurve Data Base\" (LCDB). The tumbling of a body can be caused by the torque from asymmetrically emitted radiation known as the YORP effect.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34187558", "title": "Rotating unbalance", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 363, "text": "Rotating unbalance is the uneven distribution of mass around an axis of rotation. A rotating mass, or rotor, is said to be out of balance when its center of mass (inertia axis) is out of alignment with the center of rotation (geometric axis). Unbalance causes a moment which gives the rotor a wobbling movement characteristic of vibration of rotating structures.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "227314", "title": "Chandler wobble", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 422, "text": "The Chandler wobble is an example of the kind of motion that can occur for a spinning object that is not a sphere; this is called a free nutation. Somewhat confusingly, the direction of the Earth's spin axis relative to the stars also varies with different periods, and these motions—caused by the tidal forces of the Moon and Sun—are also called nutations, except for the slowest, which are precessions of the equinoxes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1155117", "title": "Darboux vector", "section": "", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 337, "text": "Note that this rotation is kinematic, rather than physical, because usually when a rigid object moves freely in space its rotation is independent of its translation. The exception would be if the object's rotation is physically constrained to align itself with the object's translation, as is the case with the cart of a roller coaster.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "46884197", "title": "Chaotic rotation", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 468, "text": "Chaotic rotation involves the irregular and unpredictable rotation of an astronomical body. 
Unlike Earth's rotation, a chaotic rotation may not have a fixed axis or period. Because of the conservation of angular momentum, chaotic rotation is not seen in objects that are spherically symmetric or well isolated from gravitational interaction, but is the result of the interactions within a system of orbiting bodies, similar to those associated with orbital resonance.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "58136551", "title": "Inclination instability", "section": "Section::::Dynamics of the inclination instability.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 1272, "text": "In a flat disk of objects with eccentric orbits a small initial vertical perturbation is amplified by the inclination instability. The initial perturbation exerts an vertical force. On very long timescales relative to the period of an object's orbit this force produces a net torque on the orbit due to the object spending more time near aphelion. This torque causes the plane of the orbit to roll on its major axis. In a disk this results in the orbits rolling with respect to each other so that the orbits are no longer co-planar. The gravity of the objects now exerts forces on each other that are out of planes of their orbits. Unlike the force due to the initial perturbation these forces are in opposite directions, up and down respectively, on the inbound and outbound portions of their orbits. The resulting torque causes their orbits to rotate about their minor axes, lifting their aphelia, causing the disk to form a cone. The angular momentum of the orbit is also increased due to this torque resulting in reduction of the eccentricity of the orbits. The inclination instability requires an initial eccentricity of 0.6 or larger, and saturates when inclinations reach ~1 radian, after which orbits precess due to the gravity toward the cone's axis of symmetry.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "16633596", "title": "1989 Tatry", "section": "Section::::Lightcurves.:Tumbler.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 259, "text": "The observers also detected a non-principal axis rotation seen in distinct rotational cycles in successive order. This is commonly known as tumbling. \"Tatry\" is one of a group of less than 200 bodies known to be is such a state \"(also see List of tumblers).\"\n", "bleu_score": null, "meta": null } ] } ]
null
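The tumbling answer above turns on one fact: torque-free rotation obeys Euler's equations, which make spin about the intermediate principal axis unstable (the "tennis racket" or Dzhanibekov effect) while spin about the largest or smallest axis stays put. Below is a minimal numerical sketch of that effect, added editorially; the moments of inertia, step size, and initial spins are arbitrary illustrative values, not taken from the source.

```python
# Torque-free rigid-body rotation via Euler's equations (body frame):
#   I1 dw1/dt = (I2 - I3) w2 w3   (and cyclic permutations)
# A spin started near the intermediate principal axis tumbles;
# a spin about the largest principal axis stays aligned.
import numpy as np

I = np.array([1.0, 2.0, 3.0])  # assumed principal moments of inertia, I1 < I2 < I3

def euler_rhs(w):
    """Right-hand side of Euler's equations for the body-frame spin w."""
    return np.array([
        (I[1] - I[2]) * w[1] * w[2] / I[0],
        (I[2] - I[0]) * w[2] * w[0] / I[1],
        (I[0] - I[1]) * w[0] * w[1] / I[2],
    ])

def simulate(w0, dt=1e-3, steps=20000):
    """Classical RK4 integration; returns the spin history."""
    w, history = np.asarray(w0, dtype=float), []
    for _ in range(steps):
        k1 = euler_rhs(w)
        k2 = euler_rhs(w + 0.5 * dt * k1)
        k3 = euler_rhs(w + 0.5 * dt * k2)
        k4 = euler_rhs(w + dt * k3)
        w = w + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        history.append(w.copy())
    return np.array(history)

tumbling = simulate([1e-4, 1.0, 1e-4])  # near the intermediate axis: tumbles
stable = simulate([1e-4, 1e-4, 1.0])    # near the largest axis: stays put

print("intermediate-axis run, w1 excursion:", np.ptp(tumbling[:, 0]))  # large
print("largest-axis run,      w1 excursion:", np.ptp(stable[:, 0]))    # tiny
# |L| = |I*w| is conserved throughout, even while the rotation axis itself
# wanders, which is exactly the conservation argument made in the answer.
print("|L| drift:", np.ptp(np.linalg.norm(I * tumbling, axis=1)))
```

Run as-is, the intermediate-axis spin flips repeatedly while the magnitude of angular momentum stays constant to integration precision: conservation of the angular momentum vector forces the rotation axis to move, and the body appears to tumble.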
3ap16n
why does it seem like coca cola is sold in nearly every country of the world, even underdeveloped ones, but bottled water seems hard to come by?
[ { "answer": "Bottled water is available there also, but you hardly hear about it because bottled water doesn't have the marketing budget of a small country like Coca Cola pumps into marketing for it's Soft Drinks.", "provenance": null }, { "answer": "In an undeveloped nation, Coca Cola is a rare treat that the locals enjoy. In contrast, bottled water is something for tourists because the locals aren't going to spend good money on something they can get for free from a local stream.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "402923", "title": "Bottled water", "section": "Section::::Markets.:United States.\n", "start_paragraph_id": 73, "start_character": 0, "end_paragraph_id": 73, "end_character": 472, "text": "The U.S. is the second largest consumer market for bottled water in the world, followed by Mexico, Indonesia, and Brazil. China surpassed the United States to take the lead in 2013. In 2016, bottled water outsold carbonated soft drinks (by volume) to become the number one packaged beverage in the U.S. In 2018, bottled water consumption increased to 14 billion gallons, up 5.8 percent from 2017, with the average American drinking 41.9 gallons of bottled water annually.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "914869", "title": "The Coca-Cola Company", "section": "Section::::Products and brands.:Brands.:Best selling.\n", "start_paragraph_id": 98, "start_character": 0, "end_paragraph_id": 98, "end_character": 1134, "text": "Coca-Cola is the best-selling soft drink in most countries, and was recognized as the number one global brand in 2010. While the Middle East is one of the only regions in the world where Coca-Cola is not the number one soda drink, Coca-Cola nonetheless holds almost 25% market share (to Pepsi's 75%) and had double-digit growth in 2003. Similarly, in Scotland, where the locally produced Irn-Bru was once more popular, 2005 figures show that both Coca-Cola and Diet Coke now outsell Irn-Bru. In Peru, the native Inca Kola has been more popular than Coca-Cola, which prompted Coca-Cola to enter in negotiations with the soft drink's company and buy 50% of its stakes. In Japan, the best selling soft drink is not cola, as (canned) tea and coffee are more popular. As such, The Coca-Cola Company's best selling brand there is not Coca-Cola, but Georgia. In May 2016, The Coca-Cola Company temporarily halted production of its signature drink in Venezuela due to sugar shortages. Since then, The Coca-Cola Company has been using \"minimum inventories of raw material\" to make their signature drinks at two production plants in Venezuela.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "402923", "title": "Bottled water", "section": "Section::::Concerns.:Perceptions about bottled water.\n", "start_paragraph_id": 110, "start_character": 0, "end_paragraph_id": 110, "end_character": 492, "text": "In 2001, a WWF study, \"Bottled water: understanding a social phenomenon\", warned that in many countries, bottled water may be no safer or healthier than tap water and it sold for up to 1,000 times the price. It said the booming market would put severe pressure on recycling plastics and could lead to landfill sites drowning in mountains of plastic bottles. 
Also, the study discovered that the production of bottled water uses more water than the consumer actually buys in the bottle itself.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "371133", "title": "Cocacolonization", "section": "Section::::Significance.:Widespread.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 311, "text": "As of 2015, Coca-Cola has been distributed to over 200 countries worldwide. A few of the many countries include China, Guatemala, Papua New Guinea, Mexico, Russia, Canada, United Kingdom, Algeria, and Libya. According to the company, \"Coca-Cola is the second-most understood term in the world behind \"okay.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "402923", "title": "Bottled water", "section": "Section::::Concerns.:Perceptions about bottled water.\n", "start_paragraph_id": 104, "start_character": 0, "end_paragraph_id": 104, "end_character": 879, "text": "Bottled water is perceived by many as being a safer alternative to other sources of water such as tap water. Bottled water usage has increased even in countries where clean tap water is present. This may be attributed to consumers disliking the taste of tap water or its organoleptics. Another contributing factor to this shift could be the marketing success of bottled water. The success of bottled water marketing can be seen in Perrier's transformation of a bottle of water into a status symbol. However, while bottled water has grown in both consumption and sales, the industry's advertising expenses are considerably less than those of other beverages. According to the Beverage Marketing Corporation (BMC), in 2013, the bottled water industry spent $60.6 million on advertising. That same year, sports drinks spent $128 million, sodas spent $564 million, and beer spent $1 billion.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8239601", "title": "Industrial market segmentation", "section": "Section::::Approaches.:A generic principle.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 483, "text": "Examples are Coca-Cola and some of the General Electric businesses. The drawback is that the business would risk losing business as soon as a weakness in its supply chain or in its marketing forces it to withdraw from the market. Coca-Cola’s attempt to sell its Dasani bottled water in the UK turned out to be a flop mainly because it tried to position this “purified tap water” alongside mineral water of other brands. The trigger was a contamination scandal reported in the media.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "402923", "title": "Bottled water", "section": "Section::::Concerns.:Perceptions about bottled water.\n", "start_paragraph_id": 105, "start_character": 0, "end_paragraph_id": 105, "end_character": 626, "text": "Consumers tend to choose bottled water due to health-related reasons. In communities that experience problems with their tap water, bottled water consumption is significantly higher. The International Bottled Water Association guidelines state that bottled water companies cannot compare their product to tap water in marketing operations. Consumers are also affected by memories associated with particular brands. For example, Coca-Cola took their Dasani product off the UK market after finding levels of bromate that were higher than legal standards because consumers in the UK associated this flaw with the Dasani product.\n", "bleu_score": null, "meta": null } ] } ]
null
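A quick editorial consistency check on the two US figures quoted in the record above (14 billion gallons consumed in 2018 against 41.9 gallons per American per year): dividing one by the other should recover roughly the US population, and it does.

```python
# Sanity-check the quoted 2018 US bottled-water figures from the record above.
total_gallons = 14e9   # "14 billion gallons" consumed in 2018
per_capita = 41.9      # "41.9 gallons of bottled water annually" per American

implied_population = total_gallons / per_capita
print(f"implied US population: {implied_population / 1e6:.0f} million")
# -> ~334 million, within a few percent of the actual 2018 US population
#    (~327 million), so the two quoted figures are mutually consistent.
```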
2gz1gu
Why does traditional Japanese architecture only rarely use stone structures?
[ { "answer": "hi! additional input is welcome, but meanwhile, you may be interested in responses to these earlier questions\n\n* [Why are Japanese castles built of wood as opposed to stone?](_URL_5_)\n\n* [What military value did Japanese castles have, compared to European castles?](_URL_3_)\n\n* [Why didn't Asians build castles like the Europeans?](_URL_1_)\n\n* [Why didn't Europe adopt Japanese castles?](_URL_2_)\n\n* [Why don't we restore ancient ruins?](_URL_6_)\n\nsiege warfare in Japan\n\n* [What were some defense tactics used in castles?](_URL_4_)\n\n* [How did the Japanese lay siege to their castles?](_URL_0_)", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "15026438", "title": "Architecture of Tokyo", "section": "Section::::History of Japanese architecture.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 1037, "text": "Japanese architects have designed a way to build temples, furniture, and homes without using screws or nails. To keep the piece together joints are constructed to hold everything in place. However, more time consuming, joints tend to hold up to natural disasters better than nails and screws, which is how some temples in Japan are still standing despite recent natural events. There are two main categories with Japanese buildings, either craftsman like or industrial. Industrial tends to be made by machines while the craftsman style is handmade and tends to take up more time then the industrial style. Japanese homes were influenced from China greatly until 57 B.C when Japanese homes started to grow to be more distinct from other cultures. Until 660 AD homes and building constructed in Japan were made from stone and timber. Even though all buildings from this era are long gone there are documents showing traditional structures. Contrary to this however, wood still remains the most important material in Japanese architecture.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1738605", "title": "Culture of Asia", "section": "Section::::Architecture.:Japan.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 580, "text": "Japanese architecture is distinctive in that it reflects a deep ″understanding of the natural world as a source of spiritual insight and an instructive mirror of human emotion″. Attention to aesthetics and the surroundings is given, natural materials are preferred and artifice is generally being avoided. Impressive wooden castles and temples, some of them 2000 years old, stand embedded in the natural contours of the local topography. Notable examples include the Hōryū Temple complex (6th century), Himeji Castle (14th century), Hikone Castle (17th century) and Osaka Castle.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15573", "title": "Japan", "section": "Section::::Culture.:Architecture.\n", "start_paragraph_id": 156, "start_character": 0, "end_paragraph_id": 156, "end_character": 695, "text": "Japanese architecture is a combination between local and other influences. It has traditionally been typified by wooden structures, elevated slightly off the ground, with tiled or thatched roofs. Sliding doors (\"fusuma\") were used in place of walls, allowing the internal configuration of a space to be customized for different occasions. People usually sat on cushions or otherwise on the floor, traditionally; chairs and high tables were not widely used until the 20th century. 
Since the 19th century, however, Japan has incorporated much of Western, modern, and post-modern architecture into construction and design, and is today a leader in cutting-edge architectural design and technology.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1586702", "title": "History of Asian art", "section": "Section::::Japanese art.\n", "start_paragraph_id": 65, "start_character": 0, "end_paragraph_id": 65, "end_character": 411, "text": "Japanese art and architecture comprises works of art produced in Japan from the beginnings of human habitation there, sometime in the 10th millennium BC, to the present. Japanese art covers a wide range of art styles and media, including ancient pottery, sculpture in wood and bronze, ink painting on silk and paper, and a myriad of other types of works of art, from ancient times until the contemporary 21st century.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "344430", "title": "Japanese architecture", "section": "Section::::General features of Japanese traditional architecture.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 381, "text": "In Japanese traditional architecture, there are various styles, features and techniques unique to Japan in each period and use, such as residence, castle, Buddhist temple and Shinto shrine. On the other hand, especially in ancient times, it was strongly influenced by Chinese culture, like other Asian countries, so it has characteristics in common with architecture in other Asian countries.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "940177", "title": "Japantown", "section": "Section::::Characteristics.:Japanese architectural styles.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 735, "text": "Many historical Japantowns will exhibit architectural styles that reflect Japanese culture. Japanese architecture has traditionally been typified by wooden structures, elevated slightly off the ground, with tiled or thatched roofs. Sliding doors (\"fusuma\") were used in place of walls, allowing the internal configuration of a space to be customized for different occasions. People usually sat on cushions or otherwise on the floor, traditionally; chairs and high tables were not widely used until the 20th century. Since the 19th century, however, Japan has incorporated much of Western, modern, and post-modern architecture into construction and design, and is today a leader in cutting-edge architectural design and technology.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "167104", "title": "Culture of Japan", "section": "Section::::Installation arts.:Architecture.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 586, "text": "Japanese architecture has as long a history as any other aspect of Japanese culture. Originally heavily influenced by Chinese architecture, it has developed many differences and aspects which are indigenous to Japan. Examples of traditional architecture are seen at temples, Shinto shrines, and castles in Kyoto and Nara. Some of these buildings are constructed with traditional gardens, which are influenced by Zen ideas. Some modern architects, such as Yoshio Taniguchi and Tadao Ando, are known for their amalgamation of Japanese traditional and Western architectural influences.\n", "bleu_score": null, "meta": null } ] } ]
null
1hgc7s
what is depression and how do i deal with a friend that has it?
[ { "answer": "Given that your friend was prescribed pills, his depression may stem from a chemical imbalance within the brain. Not sure what kind of anti-depressant he was given (Zoloft, Prozac, etc.), but if it helps, it helps. Like you mentioned, you can't just tell him to cheer up and pretend it's not there. I had an SO who was also diagnosed with depression and the best advice I can give you is to just be there for the person, listen to any problem he has, and just do your best. Also, just take others' advice and read articles or vignettes of others' experiences. For the most part, just be well informed of the situation and tailor all possible help to your friend. We're all different so some things may or may not work. ", "provenance": null }, { "answer": "I dealt with depression for a while and to be honest it was worse than any other physical pain I've ever experienced. It is something that is definitely overlooked or seen as not a big deal by a large part of our society because people don't understand it.\n\nWhile you will need to take what I say with a grain of salt, depression and the feelings and side effects vary enormously between cases so there is no, \"do this and that\" to make it better. Depending on the type of depression your friend is dealing with, there are different things they may want or not want, but overall, **just be a good friend.**\n\nHere are a few things related to my specific bout with depression that I felt and wanted:\n\n* First and foremost, do not treat them like there is something wrong with them. It is probably the worst thing you can do by treating them any differently because you know there is something wrong with them.\n\n* I was very cynical and negative about everything because I just couldn't see how anything good could happen to me. I didn't feel like I was making progress in my life while everyone else moved on around me. \n\n* I couldn't make friends. I probably could have had I actually tried but my persistence in the matter wasn't exactly very high which only perpetuated my thoughts that I just wasn't \"friend material.\"\n\n* I had little to no motivation to do anything. Everything was boring and dull to me, like walking around a gray world looking for those sparks of colors that never seemed to appear. I lost interest in all my old hobbies and couldn't seem to pick up any new ones. I had no future even in my sights because I didn't know what I wanted and didn't have any motivation to find out what that might be.\n\n* I wouldn't really cry very much (I'm not really that kind of person) but when I did it was always about things other people had that I felt like I would never experience. Things like having a best friend or finding a girl to love and marry. I didn't have these things because I was subconsciously expecting it to just fall in my lap. I wanted the experiences but didn't want to do any work to fulfill it.\n\n* I didn't have very strong will power. Sure I wished things would be different but I didn't have the will power to actually make things any different. I got stuck in a rut of complacency and didn't even care to get out although I said I wanted to.\n\nWhile there were several things that I definitely could have done, there were a few simple things I wanted from other people:\n\n* If we make plans, don't cancel on me for something else you think is more fun. If someone is in depression, going out and doing something in public with other people is a big deal to them. 
With me personally, if I made plans to go out with someone to do something it was because I **really** wanted to do it. When they would then casually cancel on me or worse, just not show up, I took it personally, and it prevented me from even making plans again for another month because of the fear of that personal rejection. \n\n* Make an effort to listen. I am a rather soft spoken person and when I would say something in a group of people it was sometimes lost on the group. It would get to the point where I would say something, see that they knew I had said something but didn't know what it was, and yet they didn't care enough to even ask me \"what?\" I felt like no one even cared what I had to say, that I was just there to make the group bigger. Also, aside from physically listening, pay attention to the content of what is said. I was completely dismissed or mocked so many times that I simply stopped talking, because it only brought more hurt to me.\n\nIn my opinion the best thing you can do for your friend is to treat them like a friend. Return their phone calls and text messages, ask their opinions on small things (politics and religion would be big things, probably shouldn't talk about these unless you know you are both comfortable with it). Treat them like they matter, like they are a part of your life and you want to keep them there. Invite them out to things you think will interest them; it may be difficult, as they probably won't want to do anything, but keep in mind forcing them can also be bad. On the flip side, make sure they know you are there and would love to hang out or go anywhere with them and that they only need to ask.\n\nFeel free to ask me anything you might want to know!", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "19064282", "title": "Management of depression", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 458, "text": "Depression is a symptom of some physical diseases; a side effect of some drugs and medical treatments; and a symptom of some mood disorders such as major depressive disorder or dysthymia. Physical causes are ruled out with a clinical assessment of depression that measures vitamins, minerals, electrolytes, and hormones. Management of depression may involve a number of different therapies: medications, behavior therapy, psychotherapy, and medical devices.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14754086", "title": "S100A10", "section": "Section::::Clinical significance.:Depression.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 517, "text": "Depression is a widespread, debilitating disease affecting persons of all ages and backgrounds. Depression is characterized by a plethora of emotional and physiological symptoms including feelings of sadness, hopelessness, pessimism, guilt, a general loss of interest in life, and a sense of reduced emotional well-being or low energy. Very little is known about the underlying pathophysiology of clinical depression and other related mood disorders including anxiety, bipolar disorder, ADD, ADHD, and schizophrenia.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22481627", "title": "Depression in childhood and adolescence", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 1235, "text": "Depression is a state of low mood and aversion to activity. 
It may be a normal reaction to occurring life events or circumstances, a symptom of a medical condition, a side effect of drugs or medical treatments, or a symptom of certain psychiatric syndromes, such as the mood disorders major depressive disorder and dysthymia. Depression in childhood and adolescence is similar to adult major depressive disorder, although young sufferers may exhibit increased irritability or aggressive and self-destructive behavior, rather than the all-encompassing sadness associated with adult forms of depression. Children who are under stress, experience loss, or have attention, learning, behavioral, or anxiety disorders are at a higher risk for depression. Childhood depression is often comorbid with mental disorders outside of other mood disorders; most commonly anxiety disorder and conduct disorder. Depression also tends to run in families. Psychologists have developed different treatments to assist children and adolescents suffering from depression, though the legitimacy of the diagnosis of childhood depression as a psychiatric disorder, as well as the efficacy of various methods of assessment and treatment, remains controversial.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "840273", "title": "Depression (mood)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 736, "text": "Depression is a state of low mood and aversion to activity. It can affect a person's thoughts, behavior, motivation, feelings, and sense of well-being. It may feature sadness, difficulty in thinking and concentration and a significant increase/decrease in appetite and time spent sleeping, and people experiencing depression may have feelings of dejection, hopelessness and, sometimes, suicidal thoughts. It can either be short term or long term. Depressed mood is a symptom of some mood disorders such as major depressive disorder or dysthymia; it is a normal temporary reaction to life events, such as the loss of a loved one; and it is also a symptom of some physical diseases and a side effect of some drugs and medical treatments.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26975661", "title": "Symptoms of victimization", "section": "Section::::Categories of outcomes.:Psychological.:Depression.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 1089, "text": "Depression has been found to be associated with many forms of victimization, including sexual victimization, violent crime, property crime, peer victimization, and domestic abuse. Indicators of depression include irritable or sad mood for prolonged periods of time, lack of interest in most activities, significant changes in weight/appetite, activity, and sleep patterns, loss of energy and concentration, excessive feelings of guilt or worthlessness, and suicidality. The loss of energy, interest, and concentration associated with depression may impact individuals who have experienced victimization academically or professionally. Depression can impact many other areas of a person's life as well, including interpersonal relationships and physical health. Depression in response to victimization may be lethal, as it can result in suicidal ideation and suicide attempts. 
Examples of this include a ten-fold increase found in suicide attempts among rape victims compared to the general population, and significant correlations between being victimized in school and suicidal ideation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8389", "title": "Major depressive disorder", "section": "Section::::Society and culture.:Terminology.\n", "start_paragraph_id": 126, "start_character": 0, "end_paragraph_id": 126, "end_character": 903, "text": "The term \"depression\" is used in a number of different ways. It is often used to mean this syndrome but may refer to other mood disorders or simply to a low mood. People's conceptualizations of depression vary widely, both within and among cultures. \"Because of the lack of scientific certainty,\" one commentator has observed, \"the debate over depression turns on questions of language. What we call it—'disease,' 'disorder,' 'state of mind'—affects how we view, diagnose, and treat it.\" There are cultural differences in the extent to which serious depression is considered an illness requiring personal professional treatment, or is an indicator of something else, such as the need to address social or moral problems, the result of biological imbalances, or a reflection of individual differences in the understanding of distress that may reinforce feelings of powerlessness, and emotional struggle.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18550003", "title": "Behavioral theories of depression", "section": "Section::::Behavioral theories.:Wendy Treynor's Theory of Depression.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 2815, "text": "According to social psychologist Wendy Treynor, depression happens when one is trapped in a social setting that rejects the self, on a long-term basis (where one is devalued continually), and this rejection is internalized into self-rejection, winning one rejection from both the self and group—social rejection and self-rejection, respectively. This chronic conflict seems inescapable, and depression sets in. Stated differently, according to Treynor, the cause of depression is as follows: One's state of harmony is disrupted when faced with external conflict (social rejection) for failing to measure up to a group’s standard(s). Over time, this social rejection is internalized into self-rejection, where one experiences rejection from both the group and the self. Therefore, the rejection seems inescapable and depression sets in. In this framework, depression is conceptualized as being the result of long-term conflict (internal and external), where this conflict corresponds to self-rejection and social rejection, respectively, or the dual needs for self-esteem (self-acceptance) and belonging (social acceptance) being unmet, on a long-term basis. The solution to depression offered, therefore, is to end the conflict (get these needs met): Navigate oneself into an unconditionally accepting social environment, so one can internalize this social acceptance into self-acceptance, winning one peace both internally and externally (through self-acceptance and social acceptance—self-esteem and belonging, respectively), ending the conflict, and the depression. (Treynor obtained this result and framework by piecing together social psychological science research findings using mathematical logic.) But what if one cannot find an unconditionally accepting group to navigate oneself into? 
If one cannot find such a group, the solution the framework offers is to make the context in which one generally finds oneself the self (however, the self must be in meditative solitude—alone and at peace, not lonely and ruminating—as stated, a state commonly achieved through the practice of meditation). The framework suggests that a lack of self-acceptance lies at the root of depression and that one can heal their own depression if they (a) keep an alert eye to their own emotional state (i.e., identify feelings of shame or depression) and (b) upon identification, take reparative action: undergo a 'social environment' shift and immerse oneself in a new group that is unconditionally accepting (accepts the self, as it is)—whether that group is one that exists apart from the self or simply is the self [in meditative solitude]. Over time, the unconditional acceptance experienced in this setting will be internalized, allowing one to achieve self-acceptance, eradicating conflict, eliminating one's depression.\n", "bleu_score": null, "meta": null } ] } ]
null
dvbgwf
Is the expansion of Soviet influence and creation of the USSR considered imperialism?
[ { "answer": "It's generally (though not universally) accepted that the Soviet Union behaved in the manner of a traditional empire, however calling the Soviet Union 'an empire' is a bit of a loaded statement-- especially in the context of the 20th century when the British Empire and French Colonial Empire were either alive and well or living in the not-so-distant memory of the peoples of the world. Thus, the short answer to this question is yes-- the Soviet Union is (rightly) considered to have been an imperial power but no-- the term \"Soviet Empire\" is not appropriate to throw around without qualification. I talk about the USSR's transition from its revolutionary origins to a more traditional international actor in [this answer](_URL_3_) which I think provides a good lead-in to the kinds of issues that need to be understood when answering this question, I'd recommend you give it a read-- the following excerpt summarizes the key point though which is this:\n\n > \\[A\\]t its inception the Soviet Union tried to style itself as a sort of 'post-nation-state' nation-state but was forced to behave more and more like a traditional international actor as time progressed.\n\nThere are plenty of examples of the Soviet Union evincing traditional imperialist tendencies, no matter how much they tried to dress it up as Revolutionary Internationalist policy or how loaded the term 'empire' may be. This is a country (or collection of countries) which:\n\n* Expanded its borders by force, against the will of those whom it was absorbing.\n* Established puppet regimes in areas not contiguous to its quote-unquote natural borders which acted at the behest of their Soviet overlords' faraway capital.\n* Used [propaganda](_URL_1_) to define and beatify a 'Soviet way of life,' as a model which could (and more importantly, *should*) spread the across the globe. (Translation of the poster text: *Leninism is our banner-- the future is on our side!*)\n\nThose are the big ones, and just to be clear-- I'm defining 'imperialism' in the most reasonable way I can here using Harrison M. Wright's guidelines for doing so in his 1967 essay *Imperialism: the Word and its Meanings*^(\\[)[^(1)](_URL_2_)^(\\]): the process by which a nation uses military force, coercion, and propaganda to gain territory and influence. If you have a *very specific* expression of imperialism that you want to understand with respect to Soviet policy, I'm all ears and will try my best to answer any follow-up questions.\n\nIf you don't have time to read Wright's whole essay linked above, allow me to summarize: the author talks about the inherent pitfalls of using words like 'imperialism' in the post-imperial modern world when the word has become a pejorative and its meaning has been obscured to the point of near-ambiguity and certainly diminishing returns on any actual substantial definitional power, which is why I'm spelling it out so explicitly-- it doesn't come from a place of condescension or abject pedantry.\n\nAll that disclaimed though, this conversation becomes infinitely more interesting when you start asking *why* the Soviet Union behaved imperially. Here, there are two conflicting schools of thought which I'll personify with Professors Robert Service and Richard Pipes (RIP). Service argues that the impetus for Soviet imperialism lay within communism (and to a lesser extent Marxism) itself and therefore Revolutionary Internationalist policy is an inevitability in any nation which claims to be striving toward those outcomes. 
Pipes argues that there was something authentically Russian about the expansionary policy of the Soviet Union and that the banner of Marxism-Leninism was more like a placeholder than a rallying ideology-- that is, maybe communism was nominally the justification Moscow was using to push its borders further and further west, but in fact, that desire was rooted in Great Russian territorial ambitions to the core.\n\nNeither Service nor Pipes is 100% in either camp-- of course. They are just convenient proponents for each of these respective hypotheses, so I've chosen to use them in that manner.\n\nFrom Pipes' *Survival is not Enough* (1984):\n\n > The decisive factors \\[for Soviet authoritarianism\\] are not the ideas but the soil on which they happen to fall.\n\nCompared to a 1993 political opinion piece Service wrote for *The Independent*:\n\n > \\[T\\]he Orwellian maxim that he who controls the past also controls the present still holds true. \\[...\\] historians who once lauded Lenin now proclaim that he was a mass murderer. The entire Marxist-Leninist experiment is denounced. The blame for all Russia's ills is placed squarely on the Communist Party. \\[...\\] \n > \n > Yeltsin and his supporters are not totalitarian in aspiration; but they recognise that, in Russia's present turmoil, a new identity has somehow to be formed. The main problem is that, until recently, Russians were encouraged to think of themselves as the main constituent segment of 'the Soviet people'. This was Stalin's way of conferring a quasi-imperial role upon them.^(\\[)[^(2)](_URL_4_)^(\\])\n\nBut what about contemporaneously? Both of these historians are looking backward and assessing the events after the fact. Can we know what the Soviets were thinking at the time? *Why did they* think they were expanding?\n\nAt this point, the most valuable contrast to answer this question is the one between Lev Trotsky and Iosef Stalin. Trotsky, the inveterate revolutionary, is going to play the role of Service here (that is, the USSR is expanding to further the cause of worldwide communism) and Stalin, the inveterate pragmatist, is going to be our Pipes (that is, the USSR is expanding to further its superior Russo-centric culture and/or 'protect' Russia proper). I'm going to use the Winter War as the backdrop for this conversation since we can almost all agree that the Soviet invasion of Finland in 1939 was about as pure an act of imperialist expansionism by the Soviet Union as you're going to find (which doesn't seek to belittle the invasions of Poland, Afghanistan, Lithuania, Estonia, Latvia, Georgia, Ukraine, Belarus, Armenia, Azerbaijan, Mongolia, China, Iran, or any others-- I just find this example to be the most fitting for my own purposes here).\n\nFrom Trotsky's *Balance Sheet of the Finnish Events* (Trotsky gets a gold star for euphemism on that title):\n\n > \\[T\\]o approach the question of the fate of small states from the standpoint of 'national independence,' 'neutrality,' etc., is to remain in the sphere of imperialist mythology. The struggle involves world domination. The question of the existence of the USSR will be solved in passing. \\[...\\] So far as the small and second rate states are concerned, they are already today pawns in the hands of the great powers. 
The sole freedom they still retain, and this only to a limited extent, is the freedom of choosing between masters.^(\\[)[^(3)](_URL_0_)^(\\])\n\nHis justification for the invasion of Finland (in which he played no part; remember, Trotsky is writing here from exile in Mexico) is, 'well they've got to be someone's lackey so they may as well be ours because communism has the best interests of the working man in mind.' That opinion in and of itself epitomizes the imperialist grand narrative of the 20th century: the 'inevitability' of small nations' subordination to their more powerful neighbors, offered as the justification for that very subordination, was the generally agreed upon talking point that the great powers used to more or less arbitrarily adjudicate the lines on the map against the will and without the consent of entire nations of people.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "48988512", "title": "Arrigo Cervetto", "section": "Section::::Biography.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 460, "text": "Arrigo Cervetto theorized \"Unitary Imperialism,\" starting with the \"Imperialism\" of Lenin, as opposed to the common vision of a bipolar world divided into two camps, Soviet socialism and American capitalism. Cervetto states that both powers were imperialist and capitalist, and that uneven economic development compelled them to a continuous struggle for the capture of new markets, stating that the two superpowers were not so different in nature.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "217572", "title": "Soviet Empire", "section": "Section::::Characteristics.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 450, "text": "This does not mean that economic expansion did not play a significant role in the Soviet motivation to spread influence in these satellite territories. In fact, these new territories would ensure an increase in the global wealth which the Soviet Union would have a grasp on. If we follow the theoretical communist ideology, this expansion would contribute to a higher share for every Soviet citizen through the process of redistribution of wealth.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7720531", "title": "Criticism of communist party rule", "section": "Section::::Areas of criticism.:International politics and relations.:Imperialism.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 1212, "text": "Western critics accused the Soviet Union and the People's Republic of China of practicing imperialism themselves, and called communist condemnations of Western imperialism hypocritical. The attack on and restoration of Moscow's control of countries that had been under the rule of the tsarist empire, but which briefly formed newly independent states in the aftermath of the Russian Civil War (including Armenia, Georgia and Azerbaijan), have been condemned as examples of Soviet imperialism. Similarly, Stalin's forced reassertion of Moscow's rule of the Baltic states in World War II has been condemned as Soviet imperialism. Western critics accused Stalin of creating satellite states in Eastern Europe after the end of World War II. Western critics also condemned the intervention of Soviet forces during the 1956 Hungarian Revolution, the Prague Spring and the war in Afghanistan as aggression against popular uprisings. 
Maoists argued that the Soviet Union had itself become an imperialist power while maintaining a socialist façade (social imperialism). China's reassertion of central control over territories on the frontiers of the Qing dynasty, particularly Tibet, has also been condemned as imperialistic by some.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "217572", "title": "Soviet Empire", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 824, "text": "The informal term \"Soviet Empire\" has two meanings. In the narrow sense, it expresses a view in Western Sovietology that the Soviet Union as a state was a colonial empire. The onset of this interpretation is traditionally attributed to Richard Pipes's book \"The Formation of the Soviet Union\" (1954). In the wider sense, it refers to the country's perceived imperialist foreign policy during the Cold War. The nations said to be part of the Soviet Empire in the wider sense were officially independent countries with separate governments that set their own policies, but those policies had to remain within certain limits decided by the Soviet Union and enforced by threat of intervention by the Warsaw Pact (Hungary 1956, Czechoslovakia 1968 and Poland 1980). Countries in this situation are often called satellite states.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "48718576", "title": "Effects of economic liberalisation on education in Tajikistan", "section": "Section::::Reform.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 616, "text": "Because of common Soviet history, Central Asian states shared most of their education policies at independence. Consequently, the post-socialist education reform packages employed by the WB, ADB and other UN organisations in these countries was very similar. The features most directly affected by the new liberal economic strategy were the “decentralization of educational finance and governance”, the “privatization of higher education”, the “reorganization (or “rationalization”) of schools\" and the “liberalization of textbook publishing”, although many other facets of education were also indirectly affected. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3774387", "title": "Stalin's Missed Chance", "section": "Section::::On the eve of World War II.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 642, "text": "According to Meltyukhov, Russia had lost its position as a Great Power during the Great War, the Revolution and the subsequent breakup of its Empire. The Soviet leadership had the option either to accept the regional status of the USSR or to become a Great Power once again. Having decided for the latter, the Soviet leadership used Communist ideology (the Comintern, the idea of world revolution etc.) to strengthen its position. The key objective was to exclude a possible alliance of Capitalist countries. 
Although diplomatic relationships had been established with the capitalist countries, the USSR was not accepted as an equal partner.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "217572", "title": "Soviet Empire", "section": "Section::::Characteristics.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 734, "text": "Although the Soviet Union was not ruled by an emperor and declared itself anti-imperialist and a people's democracy, critics argue that it exhibited tendencies common to historic empires. Some scholars hold that the Soviet Union was a hybrid entity containing elements common to both multinational empires and nation states. It has also been argued that the Soviet Union practiced colonialism as did other imperial powers. Maoists argued that the Soviet Union had itself become an imperialist power while maintaining a socialist façade. The other dimension of \"Soviet imperialism\" is cultural imperialism. The policy of Soviet cultural imperialism implied the Sovietization of culture and education at the expense of local traditions.\n", "bleu_score": null, "meta": null } ] } ]
null
8wbfuk
we’re all told that using phones while they’re charging is bad. can anyone of the good people here tell me why?
[ { "answer": "Who told you that? It increases the amount of time it takes to charge, but is otherwise fine.", "provenance": null }, { "answer": "Never heard of that before. I go back to the Motorola clamshell (look it up you little punks - and get off my lawn!) and have always done that. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "3280194", "title": "Mobile phones and driving safety", "section": "Section::::Studies.:Effectiveness of bans/restrictions on mobile phones.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 649, "text": "In a number of cases it has been shown that bans on mobile use while driving have proven to be an effective way to deter people from picking up their phones. Those violating the ban usually face fines and points on their licence. Although an initial decrease/alteration in driving habits is to be expected. As time goes on the number of people breaking these laws/regulations eventually goes back to normal, sometimes higher levels as time goes on and people go back to their old habits. In addition, police officers have difficulties detecting mobile phone use in vehicles, which decreases the effectiveness of bans/restrictions on mobile phones. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3280194", "title": "Mobile phones and driving safety", "section": "Section::::Public Economics.:Legislation and Social Economic Benefits.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 1150, "text": "The negative consumption externalities caused by mobile phone use while driving, as shown, has economic costs. Not only does mobile phone use while driving jeopardize safety for the driver, anyone in the car, or others on the road but it also produces economic costs to all parties involved. As shown, these costs are best managed with government intervention through policy or legislation changes. Ticketing is often the best choice as it affects only those who are caught performing the illegal act. Ticketing is another cost induced from mobile phone use and driving because ticketing laws for this act have only been put into place due to the large number of crashes caused by distracted drivers due to mobile phone use. Further, not only are the tickets costly to individuals who receive them but so is the price that must be paid to enforce the prohibition of mobile phone use while driving. Key to the success of a legislative measure is the ability to maintain and sustain them through enforcement or the perception of enforcement. Police officer and photo radar cameras are other costs that must be paid in order to reduce this externality.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3280194", "title": "Mobile phones and driving safety", "section": "Section::::Studies.:Effectiveness of bans/restrictions on mobile phones.:The United Kingdom.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 1293, "text": "In the UK using a mobile phone while driving has been illegal since 2003, unless it is in a handsfree kit. The penalty originally started with a £30 ($40) fine which later became a fine of £60 ($80) plus 3 penalty points in 2006, then £100 ($134) and 3 points in 2013. There was a tendency for motorists behaving and becoming significantly more compliant initially with the introduction of the updated laws, only to later to resume their ordinary habits. 
The 2013 fine increase was not at all effective at stopping motorists from using their phones while driving. The percentage of drivers admitting to using their phones while on the road actually increased from 8% in 2014 to 31% in 2016, an increase of 23 percentage points in just two years. In the same year, statistics revealed that only 30,000 drivers were given a Fixed penalty notice (FPN) for the offence, compared to 123,000 in 2011. The increased percentage of people using their phones can be attributed in part to the growing affordability of smartphones. Possibly the most important factor was the increasing lack of enforcement of the ban by the police. Both increased smartphone sales and lack of enforcement created a situation in which it was again acceptable to use your phone while driving, despite its having been illegal for over 13 years.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19644137", "title": "Mobile phone", "section": "Section::::Use.:While driving.\n", "start_paragraph_id": 88, "start_character": 0, "end_paragraph_id": 88, "end_character": 749, "text": "Due to the increasing complexity of mobile phones, they are often more like mobile computers in their available uses. This has introduced additional difficulties for law enforcement officials when attempting to distinguish one usage from another in drivers using their devices. This is more apparent in countries which ban both handheld and hands-free usage, rather than those which ban handheld use only, as officials cannot easily tell which function of the mobile phone is being used simply by looking at the driver. This can lead to drivers being stopped for using their device illegally for a phone call when, in fact, they were using the device legally, for example, when using the phone's incorporated controls for car stereo, GPS or satnav.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29196883", "title": "Distracted driving", "section": "Section::::Solutions.:Legislation.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 263, "text": "Current US laws are not strictly enforced. Punishments are so mild that people pay little attention. Drivers are not categorically prohibited from using phones while driving. For example, using earphones to talk and texting with a hands-free device remain legal.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3280194", "title": "Mobile phones and driving safety", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 972, "text": "Mobile phone use while driving is common but it is widely considered dangerous due to its potential for causing distracted driving and crashes. Due to the number of crashes that are related to conducting calls on a phone and texting while driving, some jurisdictions have made calling on a phone while driving illegal. Many jurisdictions have enacted laws to ban handheld mobile phone use. Nevertheless, many jurisdictions allow use of a hands-free device. Driving while using a hands-free device is not safer than using a handheld phone to conduct calls, as concluded by case-crossover, epidemiological, simulation, and meta-analysis studies. In some cases restrictions are directed only at minors, newly qualified license holders (of any age), or drivers in school zones. 
In addition to voice calling, activities such as texting while driving, web browsing, playing video games, or phone use in general can also increase the risk of a crash.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12794798", "title": "LG Shine (U970)", "section": "Section::::Features.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 238, "text": "The phone can be charged with the included proprietary USB cable which plugs into the only port on the phone, making it impossible to use the included headset while charging, although it is still possible to use the phone while charging.\n", "bleu_score": null, "meta": null } ] } ]
null
2yoitc
Searching for books about Current Elites in Korea (~ < 50 yrs) for research. Any recommendations? (x/post /r/korea)
[ { "answer": "I've done research on Korea from an economic perspective (looking at how political changes and actions were central to development), but there's some overlap with you want so here's a few papers that might be a good starting point. If you are looking for specific individuals then these papers won't be much help, but if you want an idea of what sort of groups elites belonged to then I think they will be helpful. The links are mostly about how economic and political elites were both focused on growth and development, with political favorites and corruption being part of the relationship.\n\nI also don't know how much basic info you have about the Park government but I would definitely start by researching the dramatic changes Park introduced into the country and economy because they form the basis of Korea in the second half on the 20th century.\n\n[Corruption and NIC development: A case study of South Korea](_URL_1_) Looks at how corruption between Korean conglomerates (the Chaebol) and the government was intertwined with development.\n\n[Crony Capitalism: Corruption and Development in South Korea and the Philippines](_URL_2_) Compares crony capitalism in the two countries. Useful because the corny capitilists were the economic elties, andf worked with political elites.\n\n[The Treatment of Market Power in Korea](_URL_0_) About how Chaebols are entrenched in Korea, and how they were even more entrenched previously. Shows the entrenchment of the economic elites who head them.\n\nI don't have time to track any more links down now but if you let me know what specifically you are looking for or are interested in I can check again later and hopefully find some more relevant sources for you!\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "37732958", "title": "Daehan Gyenyeonsa", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 655, "text": "The \"Daehan Gyenyeonsa\" (A History of the Final Years of the Empire of Great Han of Korea) is, as the title indicates, a history of the final forty years of Korea's Joseon dynasty (after 1898 known as the Empire of Great Han). It was penned by a minor government official and member of the Korean enlightenment movement, Jeong Gyo (鄭喬 1856-1925), about whom little is known. The books is chronologically ordered and much of the historical content is based upon Jeong's own experiences and eye-witness accounts, yielding up rich historical detail and anecdote not available elsewhere. It is particularly useful in its details of Korea's Independence Club.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33020074", "title": "Doksa Sillon", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 397, "text": "Doksa Sillon or A New Reading of History (1908) is a book that discusses the history of Korea from the time of the mythical Dangun to the fall of the kingdom of Baekje in 926 CE. 
Its author––historian, essayist, and independence activist Shin Chaeho (1880–1936)––first published it as a series of articles in the \"Daehan Maeil Sinbo\" (the \"Korea Daily News\"), of which he was the editor-in-chief.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2252860", "title": "Dongguk Tonggam", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 561, "text": "The Dongguk Tonggam (Comprehensive Mirror of the eastern state) is a chronicle of the early history of Korea compiled by Seo Geo-jeong (1420–1488) and other scholars in the 15th century. Originally commissioned by King Sejo in 1446, it was completed under the reign of Seongjong of Joseon, in 1485. The official Choe Bu was one of the scholars who helped compile and edit the work. The earlier works on which it may have been based have not survived. The \"Dongguk Tonggam\" is the earliest extant record to list the names of the rulers of Gojoseon after Dangun.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47066581", "title": "Nurimedia", "section": "Section::::History.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 440, "text": "In the beginning years, they announced their ambition of creating a definitive Korean studies database. Some of their first published Korean classical literary works, in digital form, included Goryeosa, The History of Balhae, Tripitaka Koreana, Samguk Sagi and Samguk Yusa. Research scholars also noted the company as having introduced, in 1998-1999, a few historical works from North Korea, through China, which they published on CD-ROM.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36570923", "title": "Chong-Sik Lee", "section": "Section::::Career.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 677, "text": "Lee’s academic career includes works about Korea’s history of communism, the division of the Korean Peninsula, and the origins of the Republic of Korea. He also researched major figures in modern Korean history such as Syngman Rhee, the first president of Korea (1948-1960); Woon-Hyung Yuh, a Korean politician and reunification activist in the 1940s; and Chung-Hee Park, the third president of Korea (1963-1979) who seized power through a military coup. In particular, his works on Korea-Japan relations, communist movements in Manchuria, and the international relations of East Asia have been translated into many languages and are considered classics in East Asian studies.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13276", "title": "Historiography", "section": "Section::::Middle Ages to Renaissance.:East Asia.:Korea.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 466, "text": "The tradition of Korean historiography was established with the \"Samguk Sagi\", a history of Korea from its allegedly earliest times. It was compiled by Goryeo court historian Kim Busik after its commission by King Injong of Goryeo (r. 1122 – 1146). It was completed in 1145 and relied not only on earlier Chinese histories for source material, but also on the \"Hwarang Segi\" written by the Silla historian Kim Daemun in the 8th century. 
The latter work is now lost.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6371768", "title": "James Palais", "section": "Section::::Career.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 649, "text": "From 1974-77, Palais edited Occasional Papers on Korean Studies, as known as the Journal of Korean Studies, which was edited out of the University of Washington until 1988. Palais' political interests resulted in the Asia Watch report \"Human Rights in Korea\" (Washington, 1986), but perhaps his greatest work was the 1230-page \"Confucian Statecraft and Korean Institutions: Yu Hyongwon and the late Choson Dynasty\", a comprehensive overview of Choson Dynasty (1392-1910) Korean institutions as discussed by the eminent 17th century Korean statesman. This book was awarded the John Whitney Hall book prize as the best book on Japan or Korea in 1998.\n", "bleu_score": null, "meta": null } ] } ]
null
3b0ap7
even if we could terraform mars, wouldn't its lack of magnetic field mean cosmic radiation would continually bombard whatever is living on the surface?
[ { "answer": "Yes, the lack of a magnetosphere would be a big problem on Mars. That said, if we were able to deal with the other problems relating to colonizing a planet (like Mars' lower gravity, an arguably bigger hurdle) we could solve this one. NASA has even gone far enough to suggest long vertical rock covered shafts already present on the surface of Mars could offer some protection from solar radiation and a great deal of protection from dust storms.", "provenance": null }, { "answer": "Radiation doesn't just blast the surface with cancer rays, it also whisks away the atmosphere. Mar's atmosphere is very thin and complex life that we have on Earth cannot survive (it is called the Armstrong Limit).", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "463835", "title": "Life on Mars", "section": "Section::::Habitability.:Present.:Cosmic radiation.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 522, "text": "In 1965, the Mariner 4 probe discovered that Mars had no global magnetic field that would protect the planet from potentially life-threatening cosmic radiation and solar radiation; observations made in the late 1990s by the Mars Global Surveyor confirmed this discovery. Scientists speculate that the lack of magnetic shielding helped the solar wind blow away much of Mars' atmosphere over the course of several billion years. As a result, the planet has been vulnerable to radiation from space for about 4 billion years.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "463835", "title": "Life on Mars", "section": "Section::::Habitability.:Past.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 580, "text": "The loss of the Martian magnetic field strongly affected surface environments through atmospheric loss and increased radiation; this change significantly degraded surface habitability. When there was a magnetic field, the atmosphere would have been protected from erosion by the solar wind, which would ensure the maintenance of a dense atmosphere, necessary for liquid water to exist on the surface of Mars. The loss of the atmosphere was accompanied by decreasing temperatures. Part of the liquid water inventory sublimed and was transported to the poles, while the rest became\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "463835", "title": "Life on Mars", "section": "Section::::Habitability.:Present.:Cumulative effects.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 586, "text": "Even the hardiest cells known could not possibly survive the cosmic radiation near the surface of Mars since Mars lost its protective magnetosphere and atmosphere. After mapping cosmic radiation levels at various depths on Mars, researchers have concluded that over time, any life within the first several meters of the planet's surface would be killed by lethal doses of cosmic radiation. 
The team calculated that the cumulative damage to DNA and RNA by cosmic radiation would limit retrieving viable dormant cells on Mars to depths greater than 7.5 meters below the planet's surface.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55351756", "title": "Future of space exploration", "section": "Section::::Breakthrough Starshot.:Human limitations.:Physiological issues.\n", "start_paragraph_id": 68, "start_character": 0, "end_paragraph_id": 68, "end_character": 364, "text": "Furthermore, without Earth's surrounding magnetic field as a shield, solar radiation has much harsher effects on biological organisms in space. The exposure can include damage to the central nervous system, (altered cognitive function, reducing motor function and incurring possible behavioral changes), as well as the possibility of degenerative tissue diseases.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24793719", "title": "Energetic neutral atom", "section": "Section::::Magnetospheric ENA imaging.:Earth's magnetosphere.\n", "start_paragraph_id": 54, "start_character": 0, "end_paragraph_id": 54, "end_character": 416, "text": "Earth's magnetic field dominates the terrestrial magnetosphere and prevents the solar wind from hitting us head on. Lacking a large protective magnetosphere, Mars is thought to have lost much of its former oceans and atmosphere to space in part due to the direct impact of the solar wind. Venus with its thick atmosphere is thought to have lost most of its water to space in large part owing to solar wind ablation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4923933", "title": "Terraforming of Mars", "section": "Section::::Challenges and limitations.:Countering the effects of space weather.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 275, "text": "Mars does not have an intrinsic global magnetic field, but the solar wind directly interacts with the atmosphere of Mars, leading to the formation of a magnetosphere from magnetic field tubes. This poses challenges for mitigating solar radiation and retaining an atmosphere.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "42302371", "title": "Mars habitat", "section": "Section::::Overview.:Radiation.\n", "start_paragraph_id": 83, "start_character": 0, "end_paragraph_id": 83, "end_character": 301, "text": "Radiation exposure is a concern for astronauts even on the surface, as Mars lacks a strong magnetic field and atmosphere is thin to stop as much radiation as Earth. However, the planet does reduce the radiation significantly especially on the surface, and it is not detected to be radioactive itself.\n", "bleu_score": null, "meta": null } ] } ]
null
3kqgpd
Did the U.S. experience any diplomatic fallout due to non-Japanese casualties of the atomic bombs?
[ { "answer": "I haven't really looked into the diplomatic fallout, though the issue did surface from time to time in the press. I know of nothing specific on this, but that doesn't mean anything (other than, maybe, the idea that it isn't something that has been written a lot about — but that doesn't mean it didn't exist). \n\nAs for \"third-party nations\" — the main non-Japanese victims of the bombs that come to mind are POWs (British, American, and Dutch), Koreans (laborers), and Germans (the Jesuits at Hiroshima, and maybe others). Of these groups, the ones most represented in American media are the Germans, who were featured quite prominently in John Hersey's _Hiroshima_, among other sources. The Koreans were by far the largest group of victims, of those groups.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "39287076", "title": "American military technology during World War II", "section": "Section::::Atomic bomb.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 408, "text": "After the use of the bombs, American journalists traveled to the devastated areas and documented the horrors they saw. This raised moral concerns and the necessity of the attack. The motives of President Harry Truman, the United States Army Air Force (USAAF), and the United States Navy came under suspicion, and the USAAF and Navy released statements that it was necessary in order to make Japan surrender.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13121116", "title": "Debate over the atomic bombings of Hiroshima and Nagasaki", "section": "Section::::Support.:Part of total war.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 969, "text": "On 30 June 2007, Japan's defense minister Fumio Kyūma said the dropping of atomic bombs on Japan by the United States during World War II was an inevitable way to end the war. Kyūma said: \"I now have come to accept in my mind that in order to end the war, it could not be helped (shikata ga nai) that an atomic bomb was dropped on Nagasaki and that countless numbers of people suffered great tragedy.\" Kyūma, who is from Nagasaki, said the bombing caused great suffering in the city, but he does not resent the U.S. because it prevented the Soviet Union from entering the war with Japan. Kyūma's comments were similar to those made by Emperor Hirohito when, in his first ever press conference given in Tokyo in 1975, he was asked what he thought of the bombing of Hiroshima, and answered: \"It's very regrettable that nuclear bombs were dropped and I feel sorry for the citizens of Hiroshima but it couldn't be helped (shikata ga nai) because that happened in wartime.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9559286", "title": "Air raids on Japan", "section": "Section::::Assessments.:Morality.\n", "start_paragraph_id": 116, "start_character": 0, "end_paragraph_id": 116, "end_character": 1391, "text": "The atomic bomb attacks have been the subject of long-running controversy. Shortly after the attacks an opinion poll found that about 85 percent of Americans supported the use of atomic weapons, and the wartime generation believed that they had saved millions of lives. Criticisms over the decision to use the bombs have increased over time, however. Arguments made against the attacks include that Japan would have eventually surrendered and that the attacks were made to either intimidate the Soviet Union or justify the Manhattan Project. 
In 1994, an opinion poll found that 55 percent of Americans supported the decision to bomb Hiroshima and Nagasaki. When registering the only dissenting opinion of the judges involved in the International Military Tribunal for the Far East in 1947, Justice Radhabinod Pal argued that Japan's leadership had not conspired to commit atrocities and stated that the decision to conduct the atomic bomb attacks was the clearest example of a direct order to conduct \"indiscriminate murder\" during the Pacific War. Since then, Japanese academics, such as Yuki Tanaka and Tsuyoshi Hasegawa, have argued that use of the bombs was immoral and constituted a war crime. In contrast, President Truman and, more recently, historians such as Paul Fussell have argued that the attacks on Hiroshima and Nagasaki were justified as they induced the Japanese surrender.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25890428", "title": "History of Japan", "section": "Section::::Modern Japan.:Shōwa period (1926–1989).:World War II.\n", "start_paragraph_id": 127, "start_character": 0, "end_paragraph_id": 127, "end_character": 607, "text": "However, on August 6, 1945, the US dropped an atomic bomb over Hiroshima, killing over 70,000 people. This was the first nuclear attack in history. On August 9 the Soviet Union declared war on Japan and invaded Manchukuo and other territories, and Nagasaki was struck by a second atomic bomb, killing around 40,000 people. The unconditional surrender of Japan was announced by Emperor Hirohito and communicated to the Allies on August 14, and broadcast on national radio on the following day, marking the end of Imperial Japan's ultranationalist ideology, and was a major turning point in Japanese history.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4653534", "title": "Military history of the United States during World War II", "section": "Section::::Pacific Theater.:Island hopping.:Atomic bombing of Japanese cities.\n", "start_paragraph_id": 164, "start_character": 0, "end_paragraph_id": 164, "end_character": 977, "text": "As victory for the United States slowly approached, casualties mounted. A fear in the American high command was that an invasion of mainland Japan would lead to enormous losses on the part of the Allies, as casualty estimates for the planned Operation Downfall demonstrate. As Japan was able to withstand the devastating incendiary raids and the naval blockade despite hundreds of thousands of civilian deaths, President Harry Truman gave the order to drop the only two available atomic bombs, hoping that such sheer force of destruction on a city would break Japanese resolve and end the war. The first bomb was dropped on an industrial city, Hiroshima, on August 6, 1945, killing appropriately 70,000 people. A second bomb was dropped on another industrial city, Nagasaki, on August 9 after it appeared that the Japanese high command was not planning to surrender, killing approximately 35,000 people. 
Fearing additional atomic attacks, Japan surrendered on August 15, 1945.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "59062", "title": "Hiroshima", "section": "Section::::History.:World War II and the atomic bombing (1939–1945).\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 1148, "text": "As Ian Buruma observed, \"News of the terrible consequences of the atom bomb attacks on Japan was deliberately withheld from the Japanese public by US military censors during the Allied occupation—even as they sought to teach the natives the virtues of a free press. Casualty statistics were suppressed. Film shot by Japanese cameramen in Hiroshima and Nagasaki after the bombings was confiscated. \"Hiroshima\", the account written by John Hersey for \"The New Yorker\", had a huge impact in the US, but was banned in Japan. As [John] Dower says: 'In the localities themselves, suffering was compounded not merely by the unprecedented nature of the catastrophe ... but also by the fact that public struggle with this traumatic experience was not permitted.\" The US occupation authorities maintained a monopoly on scientific and medical information about the effects of the atomic bomb through the work of the Atomic Bomb Casualty Commission, which treated the data gathered in studies of hibakusha as privileged information rather than making the results available for the treatment of victims or providing financial or medical support to aid victims.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "730658", "title": "Strategic bombing during World War II", "section": "Section::::Asia.:Japanese bombing.\n", "start_paragraph_id": 140, "start_character": 0, "end_paragraph_id": 140, "end_character": 341, "text": "The Imperial Japanese Navy also carried out a carrier-based airstrike on the neutral United States at Pearl Harbor and Oahu on 7 December 1941, resulting in almost 2,500 fatalities and plunging America into World War II the next day. There were also air raids on the Philippines and northern Australia (Bombing of Darwin, 19 February 1942).\n", "bleu_score": null, "meta": null } ] } ]
null
1frm5e
How does your body remove excess salt from your body on a physiological level?
[ { "answer": "Excess salt doesn't really go into your cells, because you have a pump that pumps it out in exchange for pumping potassium into the cell. If large quantities of excess salt went into the cell, osmosis would, in fact, pull water into the cell causing it to swell and eventually burst.\n\nInstead, the salt remains in your plasma, where it reaches the kidneys. Your kidneys have various mechanisms for adjusting the salt concentration in your urine. For example, there are cells in the kidney that can detect high levels of salt in the blood, which ultimately prevents your kidneys from reabsorbing salt back into your body. You can think of it as your body maintaining a certain salt concentration - if you have too much salt, your kidneys will \"use\" extra water to remove it. There are other mechanisms as well - for example, taking in a lot of salt increases your thirst drive, trying to dilute the salt that's in your body to maintain the proper concentration.\n\nI realize that this wasn't too specific, but the main point is that it would be very bad if a lot of salt were permitted to enter your cells, so your body has mechanisms to keep that from happening. The kidneys are the primarily regulators that maintain a proper concentration of salt in your system by controlling how much salt you pee out.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "966653", "title": "Salting out", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 687, "text": "Salting out (also known as salt-induced precipitation, salt fractionation, anti-solvent crystallization, precipitation crystallization, or drowning out) is an effect based on the electrolyte–non-electrolyte interaction, in which the non-electrolyte could be less soluble at high salt concentrations. It is used as a method of purification for proteins, as well as preventing protein denaturation due to excessively diluted samples during experiments. The salt concentration needed for the protein to precipitate out of the solution differs from protein to protein. This process is also used to concentrate dilute solutions of proteins. Dialysis can be used to remove the salt if needed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29825613", "title": "Salt and cardiovascular disease", "section": "Section::::Effect of salt on blood pressure.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 387, "text": "The human body has evolved to balance salt intake with need through means such as the renin–angiotensin system. In humans, salt has important biological functions. Relevant to risk of cardiovascular disease, salt is highly involved with the maintenance of body fluid volume, including osmotic balance in the blood, extracellular and intracellular fluids, and resting membrane potential.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "966653", "title": "Salting out", "section": "Section::::Principle.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 314, "text": "Salt compounds dissociate in aqueous solutions. This property is exploited in the process of salting out. 
When the salt concentration is increased, some of the water molecules are attracted by the salt ions, which decreases the number of water molecules available to interact with the charged part of the protein.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1055257", "title": "Ammonium sulfate precipitation", "section": "Section::::Procedure.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 464, "text": "In the next stage of purification, all this added salt needs to be removed from the protein. One way to do so is using dialysis, but dialysis further dilutes the concentrated protein. The better way of removing Ammonium sulfate from the protein is mixing the precipitate protein a buffer containing mixture of SDS, Tris-HCl and phenol and centrifuging the mixture. The precipitate that comes out of this centrifugation will contain salt-less concentrated protein.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8253098", "title": "Protein precipitation", "section": "Section::::Methods.:Salting out.:Energetics involved in salting out.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 455, "text": "Salting out is a spontaneous process when the right concentration of the salt is reached in solution. The hydrophobic patches on the protein surface generate highly ordered water shells. This results in a small decrease in enthalpy, Δ\"H\", and a larger decrease in entropy, Δ\"S,\" of the ordered water molecules relative to the molecules in the bulk solution. The overall free energy change, Δ\"G\", of the process is given by the Gibbs free energy equation:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41565521", "title": "Exercise-associated hyponatremia", "section": "Section::::Prevention.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 285, "text": "However, since this can risk dehydration, an alternative approach is possible of consuming a substantial amount of salt prior to exercise. It is still important not to overconsume water to the extent of requiring urination, because urination would cause the extra salt to be excreted.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "101956", "title": "Orthostatic hypotension", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 421, "text": "Apart from addressing the underlying cause, orthostatic hypotension may be treated with a recommendation to increase salt and water intake (to increase the blood volume), wearing compression stockings, and sometimes medication (fludrocortisone, midodrine or others). Salt loading (dramatic increases in salt intake) must be supervised by a doctor, as this can cause severe neurological problems if done too aggressively.\n", "bleu_score": null, "meta": null } ] } ]
null
7qumum
Does anyone have book recommendations covering the Battle of the Aisne (WW1)?
[ { "answer": "The Aisne is at the center of my research, and I feel your pain. There's not much out there. \n\nIn the grand scheme of things, overviews of the 1914 campaign tend to view the Aisne as the final stage of the Marne. To historians of the French and British armies, it represents the Entente's inability to exploit the gap between German First and Second Armies. Some trenches were dug, and the battle stabilized before both sides started swinging around the northern flank. To German historians, the Aisne often comes off as Moltke's last act and the final death of the Schlieffen Plan. He ordered his forces do dig in, and then he was out. Falkenhayn picked up and turned his attention to the northwest.\n\nI've found one English-language book on the Aisne: Paul Kendall's *The Aisne 1914: The Dawn of Trench Warfare.* It's not very good, and it's not an academic history by any means. There are lots of pictures, lots of talk about operations and the movement of units, and short biographies of some of the British officers, but it breaks no new ground in terms of what it says about the battle. Yes, the Aisne was the start of trench warfare (for the British), but it doesn't drive at what the battle says about the army, its preparedness, or its ability to cope with the demands of the fighting in 1914. \n\nIf you want to know the operational side of the battle, the first volume of the British official history is still your best bet (Edmonds, *Military Operations: France and Belgium 1914*, volume 1). You can download it for free, I believe, on _URL_0_. Though a bit stale and lacking interpretation or critical assessment, the narrative is richly detailed and dense. \n\nIf you are more interested in analysis of the battle, Nikolas Gardner, in his *Trial by Fire,* has a chapter on the Aisne that I'd highly recommend. Gardner addresses the operational hazards of the Aisne and how the British army adapted to the changing nature of the fighting there. It's a proper academic study that actually looks at the Aisne in the context of the army's performance and development in 1914. \n\nAlong those lines, my paper (Dykstra, \"'To Dig and Burrow Like Rabbits': British Field Fortifications at the Battle of the Aisne\") looks at how the British army handled the transition from mobile to trench war at the Aisne from the defensive perspective. There isn't a ton of operational info in there, but it gives a good sense of how the army prepared for defensive trench war and how its trench systems performed at the Aisne. It's due to come out in October 2018, but Chapter 2 of my MA thesis has much of the same info. You can get that [here](_URL_1_). \n\nOther than that, the Aisne, like I said, is usually glossed over as being either an addendum to the Marne or the first stage of the Race to the Sea. Personally, and this is probably because I've spent most of my academic life studying it, I think that the Aisne was an important moment for the British army: the point at which it learned, for the first time, the true power of modern artillery, particularly howitzers. The battle also conditioned the army to large-scale entrenchment and afforded commanders the opportunity to refine their field fortification system, something that helped, to some extent, later at Ypres. \n\nHappy to answer any follow-up questions you have. 
", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "991922", "title": "Royal Aircraft Factory F.E.2", "section": "Section::::Notable appearances in popular fiction.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 208, "text": "Derek Robinson's novel \"War Story\" is about the fictional Hornet Squadron flying the F.E.2b, and later the F.E.2d, giving an account of flying the fighter in the months leading up to the Battle of the Somme.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1358244", "title": "The Old Front Line", "section": "Section::::Book.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 793, "text": "The book is a description of the battlefield front-line from which the British Army attacked on the first day on the Somme, 1 July 1916, and as such is perhaps the first battlefield guide of the First World War. Masefield had originally been asked to write a full account of the Battle of the Somme (in 1916 he had written a successful book on the Battle of Gallipoli) but the project fell through when he was refused access to official army documents. All he was able to produce was his description of the battlefield as seen in 1917 following the German withdrawal to the Hindenburg Line. Nevertheless, \"The Old Front Line\" is still frequently referenced today as an eyewitness description of the Somme terrain and it is written with lyrical prose that is rare in books on military history.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1910797", "title": "Winged Victory (novel)", "section": "Section::::Trivia.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 746, "text": "Yeates was credited with five enemy aircraft shot down in World War I (2 + 3 shared). Some of the characters of the book have real names of pilots who served in 46 Squadron at the time, like ace George Edwin Thomson (21 enemy aircraft shot down, called \"Tommy\" in the book, and who is transferred to Home Establishment before April 1918), Harry Noel Cornforth Robinson (called \"Robinson\" in the book, 10 aircraft destroyed), and Horace Gilbert Wanklyn Debenham (\"Debenham\" in the book, six enemy aircraft destroyed). There are some with names very similar to real names, like Flight Commander \"MacAndrews (Mac)\" who is based upon Canadian ace Donald MacLaren (48 enemy aircraft and six balloons shot down), who served in 46 Squadron at the time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "58943286", "title": "Military Historical Society of Australia", "section": "Section::::Other publications.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 326, "text": "In 2017, the society published \"Fighting on All Fronts\", the first volume of its centenary of World War I series. 
Consisting of 11 articles that had been previously published in various editions of \"Sabretache\", the volume deals with a diverse range of topics covering the period 1916–1917, with a preface from Peter Stanley.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5460299", "title": "Victor Maslin Yeates", "section": "Section::::\"Winged Victory\".:Philosophy about war.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 727, "text": "The novel is occasionally over-written or unduly discursive but contains a realistic portrait of RFC and then RAF life and operations on the Western Front, starting with the launch of Operation Michael, the gigantic German Spring Offensive on 21 March 1918. The narrator and his squadron are ground down by ground attack operations against the German army, as the Allied Armies fight for their lives, while faster scouts (fighters) such as the S.E.5 and the Bristol Fighter dogfight with German fighters. The Camel role is unglamorous and very dangerous, machine gunning trenches and approach routes at a few hundred feet up in un-armoured aircraft, with a constant threat from machine-gun fire from the soldiers beneath them.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17734751", "title": "The Book of Lost Things", "section": "Section::::References and allusions.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 260, "text": "The novel takes place during World War II, and there are many references to the war and its principles. Throughout the story contemporary vehicles appear, such as the Ju88 bomber plane which crashes into the sunken garden and the tank attacked by the monster.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "906060", "title": "Douglas B-23 Dragon", "section": "Section::::References.:Bibliography.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 202, "text": "BULLET::::- Mondey, David. \"The Hamlyn Concise Guide to American Aircraft of World War II\". London: Hamlyn Publishing Group Ltd., 2002, (republished 1996 by the Chancellor Press), First edition 1982. .\n", "bleu_score": null, "meta": null } ] } ]
null
3l7v9m
When Did Black Canadians Gain the Vote in Canada?
[ { "answer": "I've found a bunch of sources saying it was on the 24th of March 1837, at least for Lower Canada, but none of them tie into usable links, actual documents or even elaborate...", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "466982", "title": "Black Canadians", "section": "Section::::History.:Underground Railroad.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 1686, "text": "Following the abolition of slavery in the British empire in 1834, any black man born a British subject or who become a British subject was allowed to vote and run for office, provided that they owned taxable property. The property requirement on voting in Canada was not ended until 1920. Black Canadian women like all other Canadian women were not granted the right to vote until partially in 1917 ( when wives, daughters, sisters and mothers of servicemen were granted the right to vote) and fully in 1918 (when all women were granted the right to vote). In 1850, Canadian black women together with all other women were granted the right to vote for school trustees, which was the limit of female voting rights in Canada West. In 1848, in Colchester county in Canada West, white men prevented black men from voting in the municipal elections, but following complaints in the courts, a judge ruled that black voters could not be prevented from voting. Ward, writing about the Colchester case in \"The Voice of the Fugitive\" newspaper, declared that the right to vote was the \"most sacred\" of all rights, and that even if white men took away everything from the black farmers in Colchester county, that would still be a lesser crime compared with losing the \"right of a British vote\". In 1840, Wilson Ruffin Abbott become the first black elected to any office in what became Canada when he was elected to the city council in Toronto. In 1851, James Douglas became the governor of Vancouver Island, but that was not an elective one. Unlike in the United States, in Canada after the abolition of slavery in 1834, black Canadians were never stripped of their right to vote and hold office.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "798360", "title": "1935 Canadian federal election", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 285, "text": "The Canadian federal election of 1935 was held on October 14, 1935. to elect members of the House of Commons of Canada of the 18th Parliament of Canada. The Liberal Party of William Lyon Mackenzie King won a majority government, defeating Prime Minister R. B. Bennett's Conservatives.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34545", "title": "1960s", "section": "Section::::Politics and wars.:Prominent political events.:North America.:Canada.\n", "start_paragraph_id": 70, "start_character": 0, "end_paragraph_id": 70, "end_character": 281, "text": "BULLET::::- In 1960, the Canadian Bill of Rights becomes law, and suffrage, and the right for any Canadian citizen to vote, was finally adopted by John Diefenbaker's Progressive Conservative government. 
The new election act allowed First Nations people to vote for the first time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8824877", "title": "Timeline of events in Hamilton, Ontario", "section": "Section::::1960–1969.\n", "start_paragraph_id": 284, "start_character": 0, "end_paragraph_id": 284, "end_character": 208, "text": "BULLET::::- 1968– Lincoln Alexander, became Canada's first black Member of Parliament when he was elected to the Canadian House of Commons in 1968 as a member of the Progressive Conservative Party of Canada.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32825077", "title": "Women's suffrage in Canada", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 1028, "text": "Women's suffrage in Canada occurred at different times in different jurisdictions and at different times to different demographics of women. By the close of 1918, all the Canadian provinces except Quebec had granted full suffrage to white and black women. Municipal suffrage was granted in 1884 to property-owning widows and spinsters in the provinces of Quebec and Ontario; in 1886, in the province of New Brunswick, to all property-owning women except those whose husbands were voters; in Nova Scotia, in 1886; and in Prince Edward Island, in 1888, to property-owning widows and spinsters. In 1916, suffrage was given to women in Manitoba, Saskatchewan, Alberta, and British Columbia. Women in Quebec did not receive full suffrage until 1940. Asian women (and men) were not granted suffrage until after World War II in 1948, Inuit women (and men) were not granted suffrage until 1950 and it was not until 1960 that suffrage was extended to First Nations women (and men) without requiring them to give up their treaty status. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "794115", "title": "1962 Canadian federal election", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 382, "text": "This was the first election in which all of Canada's Indigenous Peoples had the right to vote after the passage in March 31, 1960 of a repeal of certain sections of the Canada Elections Act. For the first time ever, the entire land mass of Canada was covered by federal electoral districts (the former Mackenzie River riding was expanded to cover the entire Northwest Territories).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "466982", "title": "Black Canadians", "section": "Section::::History.:Early to mid-20th century.\n", "start_paragraph_id": 62, "start_character": 0, "end_paragraph_id": 62, "end_character": 2301, "text": "Historically, Black Canadians, being descended from either Black Loyalists or American run-away slaves, had supported the Conservative Party as the party most inclined to maintain ties with Britain, which was seen as the nation that had given them freedom. The Liberals were historically the party of continentalism (i.e moving Canada closer to the United States), which was not an appealing position for most Black Canadians. In the first half of the 20th century, Black Canadians usually voted solidly for the Conservatives as the party seen as the most pro-British. Until the 1930s–1940s, the majority of Black Canadians lived in rural areas, mostly in Ontario and Nova Scotia, which provided a certain degree of insulation from the effects of racism. 
The self-contained nature of the rural Black communities in Ontario and Nova Scotia with Black farmers clustered together in certain rural counties meant that racism was not experienced on a daily basis. The centre of social life in the rural black communities were the churches, usually Methodist or Baptist, and ministers were generally the most important community leaders. Through anti-Black racism did exist in Canada, as the Black population in Canada was extremely small, there was nothing comparable to the massive campaign directed against Asian immigration, the so-called \"Yellow Peril\", which was a major political issue in the late 19th and early 20th centuries, especially in British Columbia. In 1908, the Canadian Brotherhood of Railroad Employees and Other Transport Workers (CBRE) was founded under the leadership of Aaron Mosher, an avowed white supremacist who objected to white workers like himself having to work alongside black workers. In 1909 and 1913, Mosher negotiated contracts with the Inter Colonial Railroad Company, where he worked as a freight handler, that imposed segregation in workplaces while giving increased wages and benefits to white workers alone. The contracts that Mosher negotiated in 1909 and 1913 served as the basis for the contracts that other railroad companies negotiated with the CBRE. To fight against the discriminatory treatment, the all-black Order of Sleeping Car Porters union was founded in 1917 to fight to end segregation on the railroad lines and to fight for equal pay and benefits.\n", "bleu_score": null, "meta": null } ] } ]
null
2ybgv7
How did the KKK become anti-semitic? I've read before that older members of the KKK claim that it wasn't anti-semitic initially, but that became part of the organisation's ideology over time. How and when did this happen?
[ { "answer": "Follow up question:\n\nCould this have been influenced by Nazism?", "provenance": null }, { "answer": "There have really been 3 distinct KKKs over time. The first one was created in the wake of the Civil War by Nathan Bedford Forrest and others, and its target was blacks. That KKK attacked, terrorized, and killed former slaves until Jim Crow laws created a more legal venue with which to oppress blacks. Then the KKK's size waned.\n\nThe second KKK was created in Georgia by William Simmons in 1915, shortly after Birth of a Nation was released and a few months after Leo Frank (a Jewish man) was lynched in Georgia by \"the Knights of Mary Phagan\" (made up of core members of what would become the 2nd KKK). In brief: Mary Phagan was a young white girl who was raped and murdered in Atlanta. Leo Frank was fingered for the crime despite lack of evidence, and anti-semitic comments were trotted out at the trial. Also, there was a large wave of immigrants to the US from Eastern and Southern Europe and Ireland that began a bit before the second Klan was founded, and a lot of anti-immigrant sentiment followed. Birth of a Nation is all about the founding of the original Klan after the Civil War, and how glorious and wonderful and necessary it was. It was by far the most popular and profitable film of its time and it inspired a lot of people to talk about/believe in the glory days of the Klan. This attitude tied in with then-popular eugenics ideals and the popular anti-immigrant ideas. Plus the Mary Phagan situation became a lightning rod for antisemitism in the US, and a large percentage of Atlanta's Jews were driven out of the state at this time. And hostility came to be directed against the immigrants, particularly Catholic immigrants from Italy and Ireland. This KKK became somewhat of a social club in some areas, while being a terrorist organization as well. Like, in Indiana, a large percentage of prominent white, non-Catholic men became members. The second klan also began adopting some of the elements of the Klan on display in Birth of a Nation that were artistic license by D.W. Griffith's but that the first Klan had not used. For example, the first Klan did not burn crosses. But crosses were burned in Birth of a Nation and so the second Klan did that, too. Because this all happened before there were Nazis, there was no influence by Nazis on the second Klan's anti-semitism. It was all American-grown. However, the second Klan died out a bit after World War II. Then, the third Klan was founded as a reaction to the civil rights movement. Their anti-semitism had a lot of influence from Nazis and the white supremacist movement. \n\nI should mention, though, that Birth of a Nation has an intertitle (the text on the screen that silent movies have instead of speech) that can be misleading in this regard. It says \"The former enemies of North and South are united again in common defence of their Aryan birthright.\"\nNowadays, \"Aryan\" is closely associated with Nazis and white supremacy. but in 1915, Aryan referred to Europeans in general, not specifically blond-haired, blue-eyed northern Europeans only. 
So it was used in the film in a white supremacist way, but no specific reference to Nazis occurred.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "566888", "title": "Christian terrorism", "section": "Section::::History.:Ku Klux Klan.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 704, "text": "Vehemently anti-Catholic, the 1915 Klan had an explicitly Protestant Christian terrorist ideology, basing its beliefs in part on a \"religious foundation\" in Protestant Christianity and targeting Jews, Catholics, and other social or ethnic minorities, as well as people who engaged in \"immoral\" practices such as adulterers, bad debters, gamblers, and alcohol abusers. From an early time onward, the goals of the KKK included an intent to \"reestablish Protestant Christian values in America by any means possible\", and it believed that \"Jesus was the first Klansman\". Although members of the KKK swear to uphold Christian morality, virtually every Christian denomination has officially denounced the KKK.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3283791", "title": "Domestic terrorism in the United States", "section": "Section::::Terrorist organizations.:Ku Klux Klan.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 550, "text": "During reconstruction at the end of the civil war the original KKK used domestic terrorism against the Federal Government and against freed slaves. During the 20th century, leading up to the Civil Rights Movement, unrelated Ku Klux Klan (KKK) groups used threats, violence, arson, and murder to further their anti-Black, anti-Catholic, anti-Communist, anti-immigrant, anti-semitic, homophobic and white-supremacist agenda. Other groups with agendas similar to the Ku Klux Klan include neo-Nazis, white power skinheads, and other far-right movements.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1688046", "title": "Civil rights movement (1896–1954)", "section": "Section::::Criminal law and lynching.\n", "start_paragraph_id": 35, "start_character": 0, "end_paragraph_id": 35, "end_character": 634, "text": "Initially the KKK presented itself as another fraternal organization devoted to betterment of its members. The KKK's revival was inspired in part by the movie \"Birth of a Nation\", which glorified the earlier Klan and dramatized the racist stereotypes concerning blacks of that era. The Klan focused on political mobilization, which allowed it to gain power in states such as Indiana, on a platform that combined racism with anti-immigrant, anti-Semitic, anti-Catholic and anti-union rhetoric, but also supported lynching. It reached its peak of membership and influence about 1925, declining rapidly afterward as opponents mobilized.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2162835", "title": "Racism in the United States", "section": "Section::::Jewish Americans.\n", "start_paragraph_id": 109, "start_character": 0, "end_paragraph_id": 109, "end_character": 408, "text": "Beginning in the 1910s, Southern Jewish communities were attacked by the Ku Klux Klan, which objected to Jewish immigration, and often used \"The Jewish Banker\" caricature in its propaganda. In 1915, Leo Frank was lynched in Georgia after being convicted of rape and sentenced to death (his punishment was commuted to life imprisonment). 
This event was a catalyst in the re-formation of the new Ku Klux Klan.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31983145", "title": "Meridian race riot of 1871", "section": "Section::::Background.:Ku Klux Klan.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 404, "text": "The Ku Klux Klan (KKK) arose as independent chapters, part of the postwar insurgency related to the struggle for power in the South. In 1866, Mississippi Governor William L. Sharkey reported that disorder, lack of control and lawlessness were widespread. The Klan used public violence against blacks as intimidation. They burned houses, and attacked and killed blacks, leaving their bodies on the roads.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14171216", "title": "History of antisemitism in the United States", "section": "Section::::Early 20th century.:Lynching of Leo Frank.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 428, "text": "In response to the lynching of Leo Frank, Sigmund Livingston founded the Anti-Defamation League (ADL) under the sponsorship of B'nai B'rith. The ADL became the leading Jewish group fighting antisemitism in the United States. The lynching of Leo Frank coincided with and helped spark the revival of the Ku Klux Klan. The Klan disseminated the view that anarchists, communists and Jews were subverting American values and ideals.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39498321", "title": "Anti-Middle Eastern sentiment", "section": "Section::::United States.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 454, "text": "From the 1910s, Southern Jewish communities were attacked by the Ku Klux Klan, who objected to Jewish immigration, and often used \"The Jewish Banker\" in their propaganda. In 1915, Leo Frank was lynched in Georgia after being convicted of rape and sentenced to death (his punishment was commuted to life imprisonment). The second Ku Klux Klan, which grew enormously in the early 1920s by promoting \"100% Americanism\", focused much of its hatred on Jews. \n", "bleu_score": null, "meta": null } ] } ]
null
damdqg
why have some languages like spanish kept the pronunciation of the written language so that it can still be read phonetically, while spoken english deviated so much from the original spelling?
[ { "answer": "Spanish has an academy whose mission is to standardize and grow the Spanish language, so that helps Spanish to keep its strict pronunciation. English is, and has always been, a total shitshow, linguistically speaking.\n\n [_URL_0_](_URL_1_)", "provenance": null }, { "answer": "English did not originally have fixed spelling. People would spell words however they thought it sounded. This means that spelling varied from person to person and region to region. Also, due to being made of bits of several languages all smushed together often retaining parts of the original language's rules, there's no consistency as to how words are pronounced or where you even get the spelling from. A man named Samuel Johnson eventually wrote a dictionary in which he spelled the words however he wanted to and because of how popular it became, that became the fixed spelling. Johson liked stuffy fancy spellings rather than simple phonetic ones and he set the idea of telling people the \"correct\" way to write instead of telling them how words were normally used. Webster eventually did something similar for American English, although he preferred simplified spellings, hence some of the differences between American and British spelling.", "provenance": null }, { "answer": "A mix of historical change and language attitudes. English spelling was mostly standardised just before a [major series of sound changes](_URL_0_)happened, and the spelling mostly reflects the pronunciation from before those changes. Spanish hasn't had really much of anything quite so disruptive happen - it's been more a long series of much smaller changes. On the attitude side of things, English speakers have made a huge deal out of the concept of 'spelling things right', to the point that major change is largely unthinkable at this point - too many people have too strong of feelings about the current spelling system. (This might also be due in part to English's more major sound changes! It would take a massive reform to update English spelling, and it would have even if the reform had happened in 1600, thanks to the above-mentioned Great Vowel Shift - updating to account for even just that change would require a major change. Spanish on the other hand has largely been able to get by on a rolling series of small tweaks.)\n\nPlus, now English has different standard dialects in different places, and it would be impossible to achieve a Spanish-like level of one(ish)-to-one(ish) letter-to-sound correspondences in all dialects simultaneously without having different spellings per dialect.\n\nFor some other examples, compare Tibetan - which has a worse spelling-to-pronunciation correspondence than English does - and Swedish and Norwegian, where Swedish has much less predictable spelling than Norwegian despite them being basically dialects of the same language (from a purely linguistic perspective). Norwegian has gone through a series of language reforms (not confined only to spelling) since Norway's independence from Denmark in 1814, in part as a way of asserting a separate linguistic identity from Danish; Swedish just hasn't ever had the same impetus to change. 
Tibetan went through a drastic change somewhat like English did, where several kinds of previous consonant distinctions got turned into tone distinctions all in one go; I suspect that's also part of why Tibetan hasn't been updated.", "provenance": null }, { "answer": "There's also the fact that as English has absorbed words from other languages, it sometimes stuck to the original pronunciation, but has more often anglicized it into something that sounds more like a word that's English. \n\nTake the French word “foyer,” which is pronounced in American English with a hard “r” at the end. Or the Japanese word “karaoke,” which I have heard butchered as “ka-row-kee” and “karry-okie” (not those spellings, just those pronunciations), when the Japanese pronunciation is “kah-rah-oh-kay.” But having those middle vowels is not an English thing, so the word gets pronounced like I showed.\n\nThose are recently absorbed words, but the same thing has been happening for centuries.", "provenance": null }, { "answer": "English actually did enunciate phonemes that are no longer enunciated. For instance, in night the gh was pronounced, and the e at the end of “silent e” words was said as an “ee” or “e” sound. Many of these much more Germanic enunciations were spoken all the way through to at least Early Modern English, and sometimes even into late modern English. It began as a much more phonetic language, but the incorporation of Latin language aspects into its everyday language, along with dialectal phonemic changes over time, made it deviate from its original pronunciation.", "provenance": null }, { "answer": "No language has “original spelling”. Languages are oral and evolve based on usage.\n\nWriting systems weren't introduced until very late in the history of language.\n\nEnglish spelling was standardised in the 1600s in the middle of something called “The Great Vowel Shift” where certain vowels and diphthongs shifted up (yes up, physically) in the mouth. \n\nFor instance, “House” used to be pronounced exactly as it is spelled: “Hoos-uh”. During the Great Vowel Shift the pronunciation changed, but the spelling never did. \n\nEnglish has no central authority, whereas Spanish does, and it has so many dialects now that even if it did have an “English language academy”, which one becomes the “standard” dialect? I'm sure the 67 million people in Britain would *never* accept an “American standard English” spelling reform based on American pronunciation.", "provenance": null }, { "answer": "Part of the issue here is that you're comparing apples and oranges. There isn't really any single set of phonetic rules for English to deviate from. Spanish is a purely Romance language, so it's based on a single previous language. English is based on many, so any given word might have very different phonetics than another. English even borrows heavily from other languages like Spanish and French, so even though those are both based on Latin, they gave their own unique spin to the sounds, and English copies both.", "provenance": null }, { "answer": "Because English is a Germanic language with a relatively large influence from languages of non-Germanic origin, such as French and a plethora of other Romance languages. In terms of pronunciation and spelling, Germanic languages are more straightforward than Romance languages.
This particularly large French influence, however, can be traced back to the Norman conquest of England.\n\n To illustrate how modern English has deviated from its early, Germanic roots: Icelandic is, among living languages, the most closely related to Old English. Seriously, I strongly encourage you to look into the similarities between Icelandic and Old English (Anglo-Saxon). It is fascinating.", "provenance": null }, { "answer": "English is a MESS.\n\nThe first languages spoken in the British Isles were various versions of Pictish and Celtic. Britain was then invaded repeatedly. The first invasion with a written record was by the Romans (and the Greeks tagged along). Various place names show signs of it, including any town name ending in -caster, a suffix derived from the Latin word 'castrum', which is a fort or castle.\n\nEventually, the Romans left (the Roman Empire was in decline), and then various tribes of Germanic peoples migrated, taking their languages with them. These included the Angles (where the word 'English' eventually formed), the Saxons, the Jutes, and probably a few other tribes. Eventually the Angles and the Saxons intermarried and mostly won out for the moment, hence the term 'Anglo-Saxon'.\n\nThe Danes and other Norse peoples were a constant pain in the English backside, leaving behind all sorts of words (including most that start 'kn-' with the k being silent). In fact, the Norse invasion of 1066 drained the English King Harold's army's reserves badly, so when the Norman Duke William decided he wanted to force Harold to give up the throne (there was a lot of brute force politics involved), Harold's exhausted army couldn't withstand William's fresh one and Harold was slain. The Normans spoke French, and a lot of the 'fancy' English words are originally French.\n\nThis whole mess has led to English being a mess, phonetically. It's also led to a pair of fun sayings.\n\n1. 'English is the product of Norman knights wanting a little fun with Saxon barmaids, and is no more or less legitimate than any of the other results.'\n\n2. 'English doesn't just borrow words from other languages. It follows them down dark alleys, knocks them out with a club and goes through their pockets looking for loose vocabulary.'", "provenance": null }, { "answer": "Ditto to everyone about the random spellings and eventual \"uniformity\" inspired by dictionaries and an increased number of literate speakers.\n\nI'm an L2 (second language learner) of Spanish and a native English speaker. \n\nSpanish only has 22-24 phonemes while English has 38-45. (World languages like these two have A LOT of speakers spanning a big portion of the globe).\n\n*Phonemes are distinct sounds of speech. We think of these as letters, but English doesn't have the same number of letters to match the phonemes.\n\nEnglish also has a lot more phonemes than Spanish, so exponentially there are more combinations in English than in Spanish.\n\nExamples- English sound /zh/ or /ʒ/; this sound has no singular letter to represent it. Example words are azure, measure, Jacques (loan words/names from French), casual. \n\nSo /ʒ/ can be represented as z, s, or j. This variation is confusing, so many people believe that /zh/ could be an allophone of /s/, /sh/, /z/, or /j/: the S sound, Sh sound, Z sound, or J sound (/dʒ/ for the j sound), respectively.\n\nAn allophone is a variation of a phoneme, because phonemes change based on mouth position and the way you produce the sound (through teeth, throat, nose, etc.) \n\nAllophone example- Stop versus top.
Say stop and put your hand in front of your mouth to feel if air hits your hand when you say the t (it shouldn't), but when you say top it should. These are two different sounds of /t/, but we only use one letter for these sounds. The two variations are the same phoneme or base sound. \n\nThis happens a lot in any language. Allophones are everywhere, but we don't notice them because our brains streamline them away when we're in diapers.\n\nI could go on. Comment if you want more explanation.", "provenance": null }, { "answer": "It is important to note that spoken languages always evolve in the way that they're spoken. Spanish is no exception to this; 1600s Spanish is very different to the Spanish of today, and even among different regions and countries, Spanish is spoken differently.\n\nThere are a couple of key differences between Spanish and English that make Spanish seem more 'phonetic': \n\n* Note that both languages use the *Latin alphabet*. The language it was most suited for is, by and large, Latin, which had five vowel sounds and some number of consonants. English has always had more than five; hence why we have to distinguish between the *long* vowel sounds and the *short* vowel sounds, and why two vowel letters like 'ew' make one sound. Spanish is also not quite a perfect match to Latin's sounds: letters like 'h' are pretty much obsolete as Spanish doesn't have this sound, and letters like 'b' and 'v' actually make the same sound in Spanish. So Spanish isn't as phonetic as it might seem at first glance.\n\n* Spanish has updated its spelling to reflect changing pronunciations. This is largely thanks to a central body governing - written - Spanish: the Real Academia, which happens to be highly respected by education and the media, and so any decisions they make eventually make their way through to all parts of society. English lacks such a central body, and so it's much harder to convince people to spell differently. For all the ragging that English gets, no one actually seems enthusiastic about a more phonetic variant. Quite a few Commonwealth speakers I know seem to scoff at the idea of adopting even American English spelling, even though it was born out of Noah Webster's (failed) attempt to make English a more phonetic language.\n\n* The pronunciation of Spanish has changed in a way that doesn't seem contradictory to the way it's written. For example, 'g' and 'd' have evolved to a much softer sound than we would say them in English. When a Spanish speaker says 'de nada', it's closer to \"de natha\", but since Spanish originally had no 'th' sound to begin with, d just becomes associated with that 'th' sound; same with 'g', whose pronunciation is closer to the soft Dutch 'g'. Contrast this with English; the 'ea' in 'meat' and 'ee' in 'meet' were once pronounced differently, but these two sounds merged a few centuries ago to give the modern pronunciation. \n\nThis, on top of no one being able to convince speakers to spell them the same when they started to be pronounced the same, creates a very much 'fossilised' version of English; a spelling of English that largely reflects its old pronunciation, while Spanish has, for the most part, managed to keep the way it writes in step with the speaking populace.\n\nSide-note:\nThere exists this big misconception that language use is dictated by the way it is written; this is very much false. In all regards, the way a language is written is subservient to the way that the people speak it. 
Written English (or written Spanish) is not the 'ideal' or 'correct' way to use or speak the language; this is just a by-product of the way writing evolves: the elite and educated use writing, therefore how they do it must be somehow 'correct'. This is, of course, not at all reliable. When the French Revolution occurred, the way the aristocracy used French immediately became stigmatised, and the language of the revolutionaries became the 'correct' way. The point being, what happens to be considered the 'correct' way of writing or using a language has no objective reason; it's just that that version happened to be in vogue.", "provenance": null }, { "answer": "There are broadly two reasons why English spelling is terrible. The first is that when we borrow words from other languages that use the Latin alphabet, we generally just leave the spelling as-is, even if the spelling rules in that language are different. This is how you get words like chauffeur (from French). The second is that we don’t update the spelling of a word to reflect pronunciation changes. So words like knife and what were originally pronounced more like k-nife and hwat, respectively, and we never got around to changing the spelling. This is exacerbated by something called the “great vowel shift” where basically all the vowels changed their sounds but spelling didn’t change.", "provenance": null }, { "answer": "Short version: The Latin-based words in English haven't shifted much. Ditto, Spanish. The Germanic/Old English words *have* shifted lots, because they're not used as much by the posh people who controlled Standard English and therefore controlled the pronunciation of English. Also, the spellings *used to be* phonetic but they only reflected the pronunciations that the 1% used. So, from the very start, the spellings were all jacked up.\n\nEnglish was given standardized spelling in the 15th Century by the Chancery, a government agency (king's court, whatever). The spelling was based on the way words were pronounced within the London-Oxford-Cambridge triangle, a chunk of England where rich, posh, well educated people lived. This led to two problems.\n\n1. Other accents, other dialects (subgroups of English), etc. were ignored.\n2. As pronunciation shifted, both inside and outside the London-Oxford-Cambridge triangle, spellings didn't keep up. Therefore, over time, spellings ceased to reflect the pronunciation.\n\nSpanish is heavily derived from Latin. So are Italian, Romansch, Languedoc, Romanian, French, and ... something else. That's why their spelling and pronunciation didn't shift all that much. They're Romance languages, which means they're balls-deep in Latin. (That's a technical term.)\n\nEnglish, on the other hand, is primarily Germanic. It uses a lot of French and Latin because of the Norman Invasion and the Catholic Church respectively. Still, there's always been a tension between the two groups of words. Old English (AKA Anglo-Saxon, AKA pre-1066 'English') words tend to be pronounced *very* differently from Latinate (AKA Latin/French/Romance) words.\n\nIf you look at the posher, more highfalutin' words, they're Latinate and their pronunciation hasn't shifted that much. Check this out. \"I desire to inquire as to the propinquity of my artisanal cutlery. Your concomitant reply is appreciated.\" It sounds pretentious because it's all Latinate. The bigger words' pronunciation hasn't shifted much because they're Latinate and share much with Spanish/French/etc. 
\n\nNow, try the everyday (more Germanic) version. \"I want to ask where my stuff is. Tell me. Thanks.\" Much more casual, much more 'common', and much more prone to shifts in pronunciation. 'Want' was Old Norse *vanta*. 'Ask' was *ax* or *ascian*. 'Thanks' was Old English *þanc*. ('Stuff', oddly enough, came later via Old French *estoffe*.)\n\nCompare that to 'desire' (French *desirer*, Latin *desiderare*), 'inquire' (French *enquerre*, Latin *inquirere*), etc. The Latinate words are so close to Latin that you can almost understand high-register English without studying it, if you know enough Latin.\n\nNow, consider this. The posh folks who controlled English spelling also controlled Standard English pronunciation, either consciously or unconsciously. (Think about Downton Abbey and how influential it is. Then, think about monks, politicians, and aristocrats. They control the schools, which produce the next generation of high-register English speakers, and so on.) So, not only do the 1% control the money, but they also control how high-register English (Latinate English) evolves. Pronunciation won't shift much, because spelling won't shift much, because the spelling of Latinate words doesn't usually *need* to change, because the pronunciation is already set by the Oxford-Cambridge-London triangle. It's quite circular in reasoning and in feedback.\n\nCommon English, AKA everyday English, AKA low-register English, can evolve much more and *does* evolve much more. There are 100 dialects, 200 regional accents, etc. and most of them contain words and phrases that pre-date the Norman Invasion. For example, Geordie contains a surprising amount of Danish. Naturally, those words didn't make it into Standard English. Still, the spellings of everyday English could evolve in those communities because most people spoke two dialects anyway (Standard English and the local dialect of English). The 1% felt no need to regulate non-standard dialects, and hoi polloi felt no need to kiss the 1%'s ass by tweaking their own spellings.\n\nEventually, as I said, the Chancery did standardize everyday spellings, but no one really paid attention to *speaking* in those spellings. The spellings *were* phonetic briefly, but they were standardized around the Oxford-Cambridge-London pronunciation! So, from the very start, the spellings did not reflect the way that most English-speakers talked. Matters worsened as the centuries passed, because English evolves... and whereas Latinate words' pronunciations stayed true to their roots (because the 1% tried super-duper hard to keep on speaking 'nicely'), the everyday words' pronunciations shifted all over the bloody shop (because that's what happens when normal people speak normal English in 200 different ways).", "provenance": null }, { "answer": "Bill Bryson’s book “Mother Tongue” is a great read that explains the history of the English language, as well as its peculiarities and similarities/differences to other languages. And it’s actually funny, which isn’t easy with a potentially dry subject.", "provenance": null }, { "answer": "English didn't deviate from original spelling. Spelling adapted to the English language speakers throughout its history. And it's why it gives so many second-language speakers a vocabulary and pronunciation headache of its own. 😁\n\nI wouldn't say Spanish has kept pronunciation for the same reasons above and below. 
It was adapted to its speakers.\nBut Korean fits this description of being pronounced as written, because modern Korean was constructed to be so.\n\nSo why isn't English pronounced the way it's spelled? As logical as it would seem from convenience and efficiency in learning, languages don't always evolve that way. They become designed that way once speakers become aware of their sound and written language and try to find ways to standardize it so it's easier to educate others and make the population literate. This is what happened with Korean. The Chinese characters didn't exactly fit the pronounced language. So they designed a written language to fit their pronounced language (Korean characters literally tell you how to make the sound in your mouth) to make it easy for everyone to learn and be literate. But for a language to change like this takes strong influences, like an effective government and education system.\n\nBut languages don't always turn out this way because native speakers get used to inconsistencies and inconveniences. People learn to adapt to the 'logic' of their language. And in the case of English, you just have to learn those awkward pronunciations ( thought, night, this, house, mice, exam ) because a lot of foreign influences integrated into English over hundreds of years.\n\nFirst there were the Celtic languages.\nThen the Romans raided under Caesar, left, and later came back for good under Claudius, establishing some of the Latin in our grammar and alphabet.\nThen Germanic groups like the Angles and Saxons brought Old English, which is not entirely unfamiliar next to Modern English.\n\nThen the Vikings raided and gave us some cool words that start with sk-, like sky and skill.\n\nThen the Norman French invaded and slowly killed off Germanic Old English after making French the court language for a while, which is why English has a lot of French vocabulary that trickled down to the peasantry. (Colour, battle, castle) Apple used to refer to all kinds of fruit in general rather than just a Red Delicious or Granny Smith.\nIt wasn't until around the Tudor era that Early Modern English broke out of the French from court. We also had the Great Vowel Shift, where our pronunciation of vowels in words rose in the mouth. \n\nColonialism and exploration added some words from Dutch, German, Spanish, and Portuguese into English because of overseas trading.\n\nAnd by this point the printing press was made, so more people started to become literate and read and write in English. But everyone had their own spelling and writing conventions.\nDictionaries and rules of style to standardize English were slowly being established mostly in the 1700s by a lot of educated men who had their own ideas of what proper grammar and spelling for English should be, like Samuel Johnson for the British and Webster for the Americans. (This is why the British spell Colour and Americans Color.)\n\nHope this explains why languages don't always pronounce as they're written.", "provenance": null }, { "answer": "I think it's important to mention that Spanish hasn't retained anything. There is an institution that actively works to keep spelling consistent with pronunciation.", "provenance": null }, { "answer": "One thing I haven't seen mentioned yet is that English, as far as I know, just never updated its spelling, as opposed to for example German (my native language), which has had official spelling reforms from time to time.\n\nYou know. 
So we spell things the way they're spoken.", "provenance": null }, { "answer": "You are assuming that the original spellings of English words were phonetic.... They were not", "provenance": null }, { "answer": "Macedonian is the same as Spanish: one letter makes a sound and that sound is always the same in every word.\n\nI feel bad for the people who have English as their first language but still struggle to spell.", "provenance": null }, { "answer": "You have two options: embed all the information into the writing of the language or make the writing more efficient by relying on the memories of the people. Languages like Spanish embed more of the information into the writing. But then, numbers require many more syllables. So it’s easier to teach reading and writing, but harder to learn mathematics. Languages like Chinese have a single symbol for each word. They rely MUCH more heavily on the memories of the people. But numbers have one syllable, and saying 11 is just saying 10-1 (“ten-one”). So it’s harder to teach reading and writing but easier to teach mathematics.", "provenance": null }, { "answer": "I just want to know how it's spelled.....gray or grey?", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "24391752", "title": "Gender neutrality in Spanish", "section": "Section::::Replacing -a and -o.:Pronunciation.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 254, "text": "However, some Spanish speakers are concerned that this proposal is unlikely to be adopted, since the Spanish language does not distinguish and from and respectively, and most of its speakers would therefore not even notice a difference in pronunciation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26825", "title": "Spanish language", "section": "Section::::History.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 847, "text": "Peculiar to Spanish (as well as to the neighboring Gascon dialect of Occitan, and attributed to a Basque substratum) was the mutation of Latin initial f into h- whenever it was followed by a vowel that did not diphthongize. The h-, still preserved in spelling, is now silent in most varieties of the language, although in some Andalusian and Caribbean dialects it is still aspirated in some words. Because of borrowings from Latin and from neighboring Romance languages, there are many f-/h- doublets in modern Spanish: Fernando and Hernando (both Spanish for \"Ferdinand\"), ferrero and herrero (both Spanish for \"smith\"), fierro and hierro (both Spanish for \"iron\"), and fondo and hondo (both Spanish for \"deep\", but fondo means \"bottom\" while hondo means \"deep\"); hacer (Spanish for \"to make\") is cognate to the root word of satisfacer (Spanish for \"to satisfy\"), and hecho (\"made\") is similarly cognate to the root word of satisfecho (Spanish for \"satisfied\").\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5888447", "title": "Castilian Spanish", "section": "Section::::Difference with American Spanish.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 358, "text": "However, some traits of the Spanish spoken in Spain are exclusive to that country, and for this reason, courses of Spanish as a second language often neglect them, preferring Mexican Spanish in the United States and Canada whilst European Spanish is taught in Europe. 
Spanish grammar and to a lesser extent pronunciation can vary sometimes between variants.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "265417", "title": "Spanish language in the Americas", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 743, "text": "There are numerous regional particularities and idiomatic expressions within Spanish. In Latin American Spanish, loanwords directly from English are relatively more frequent, and often foreign spellings are left intact. One notable trend is the higher abundance of loan words taken from English in Latin America as well as words derived from English. The Latin American Spanish word for \"computer\" is \"computadora\", whereas the word used in Spain is \"ordenador\", and each word sounds foreign in the region where it is not used. Some differences are due to Iberian Spanish having a stronger French influence than Latin America, where, for geopolitical reasons, the United States influence has been predominant throughout the twentieth century.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17017272", "title": "Old Spanish language", "section": "Section::::Phonology.:ch.\n", "start_paragraph_id": 57, "start_character": 0, "end_paragraph_id": 57, "end_character": 307, "text": "Old Spanish had , just as Modern Spanish does, which mostly represents a development of earlier * (still preserved in Portuguese and French), from the Latin . The use of for originated in Old French and spread to Spanish, Portuguese, and English despite the different origins of the sound in each language:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "780760", "title": "Spanish dialects and varieties", "section": "Section::::Pronunciation.:Judaeo-Spanish.\n", "start_paragraph_id": 75, "start_character": 0, "end_paragraph_id": 75, "end_character": 767, "text": "These dialects have important phonological differences compared to varieties of Spanish proper; for example, they have preserved the voiced/voiceless distinction among sibilants as they were in Old Spanish. For this reason, the letter , when written single between vowels, corresponds to a voiced —e.g. ('rose'). Where is not between vowels and is not followed by a voiced consonant, or when it is written double, it corresponds to voiceless —thus ('to sit down'). And due to a phonemic neutralization similar to the \"seseo\" of other dialects, the Old Spanish voiced and the voiceless \"ç\" have merged, respectively, with and —while maintaining the voicing contrast between them. Thus ('to make') has gone from the medieval to , and ('town square') has gone from to .\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3349086", "title": "Inventive spelling", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 405, "text": "Conventional written English is not phonetic (that is, it is not written as it sounds, due to the history of its spelling, which led to outdated, unintuitive, misleading or arbitrary spelling conventions and spellings of individual words) unlike, for example, German or Spanish, where letters have relatively fixed associated sounds, so that the written text is a fair representation of the spoken words.\n", "bleu_score": null, "meta": null } ] } ]
null
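Several of the answers above keep returning to one point: a single English spelling can stand for many different sounds, which is exactly what a phonetic orthography like Spanish avoids. Here is a minimal Python sketch of that point; the word list and the rough pronunciations are hand-picked for illustration, not taken from any phonetic corpus:

```python
# One spelling, many sounds: the "ough" grapheme.
# The pronunciations below are rough, illustrative renderings only.
OUGH_WORDS = {
    "though":   "oh",   # /oʊ/
    "through":  "oo",   # /uː/
    "tough":    "uff",  # /ʌf/
    "cough":    "off",  # /ɒf/
    "plough":   "ow",   # /aʊ/
    "thought":  "aw",   # /ɔː/
    "hiccough": "up",   # /ʌp/ (older spelling of "hiccup")
}

def distinct_sounds(words):
    """Collect the distinct sounds that one spelling stands for."""
    return sorted(set(words.values()))

if __name__ == "__main__":
    sounds = distinct_sounds(OUGH_WORDS)
    print(f"'ough' stands for {len(sounds)} different sounds: {sounds}")
    # In a phonetic orthography, one spelling would map to one sound.
```

In a language with a one-letter-one-sound convention (Spanish, Macedonian, hangul-era Korean, as the answers note), that dictionary would have a single value.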
5tnn88
Why was most of the popular ancient literature written in verse?
[ { "answer": "I think there is a flaw in your question, or at least several problematic assumptions about literature, ancient and modern.\n\nLet's take your examples. Firstly, the Bible contain significant portions of poetry (Psalms, large portions of the Prophetic books), but it is not all poetry, and it is not even mostly poetry. \n\nHomer's Iliad is verse, because it emerges in the context of a pre-literate society. Generally, highly oral cultures tend to maintain a high value on poetry and song, because those forms of composition do indeed lend themselves to memorisation. It's much harder to memorise long prose texts, and it's much less interesting to hear long prose texts \"performed\". Sticking with ancient Greek literature, though, plenty of non-verse material was produced. Herodotus, Thucydides, etc.. The prose genre of history, among others, was \"popular\". You just have selected a poetic example.\n\nA very similar thing could be said about Ovid's Metamorphoses. Yes, it's poetry. But Romans produced plenty of prose literature - philosophy, for-publication epistles, history - that was popular. At least as much as poetry was.\n\nDante, I will skip, since it's much later than your other examples of \"ancient\" literature, and the context of its composition is a different literary world to antiquity.\n\nI think the flaw in your question can be demonstrated simply by reversing the examples: \"why was so much prose written in ancient times? The Bible, Livy, Herodotus?\"\n\n > Verse nowadays seems confined to music, theater and poems.\n\nWhich is exactly the same. Your examples are all poetry, that's why they are written in poetic forms. Similarly, plays tend to be written in verse as well (in antiquity). Songs, too, obviously, though our knowledge of ancient melodies is severely limited.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "52682", "title": "Greek literature", "section": "Section::::Ancient Greek literature (800 BC-350 AD).:Preclassical (800 BC-500 BC).\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 332, "text": "The Greeks created poetry before making use of writing for literary purposes. Poems created in the Preclassical period were meant to be sung or recited (writing was little known before the 7th century BC). Most poems focused on myths, legends that were part folktale and part religion. Tragedies and comedies emerged around 600 BC.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "138848", "title": "History of literature", "section": "Section::::Antiquity.:Classical antiquity.:Greek literature.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 419, "text": "Ancient Greek society placed considerable emphasis upon literature. Many authors consider the western literary tradition to have begun with the epic poems \"The Iliad\" and \"The Odyssey\", which remain giants in the literary canon for their skillful and vivid depictions of war and peace, honor and disgrace, love and hatred. Notable among later Greek poets was Sappho, who defined, in many ways, lyric poetry as a genre.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "66540", "title": "Ancient Greece", "section": "Section::::Culture.:Literature and theatre.\n", "start_paragraph_id": 91, "start_character": 0, "end_paragraph_id": 91, "end_character": 908, "text": "The earliest Greek literature was poetry, and was composed for performance rather than private consumption. 
The earliest Greek poet known is Homer, although he was certainly part of an existing tradition of oral poetry. Homer's poetry, though it was developed around the same time that the Greeks developed writing, would have been composed orally; the first poet to certainly compose their work in writing was Archilochus, a lyric poet from the mid-seventh century BC. tragedy developed, around the end of the archaic period, taking elements from across the pre-existing genres of late archaic poetry. Towards the beginning of the classical period, comedy began to develop – the earliest date associated with the genre is 486 BC, when a competition for comedy became an official event at the City Dionysia in Athens, though the first preserved ancient comedy is Aristophanes' \"Acharnians\", produced in 425.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39262", "title": "Playwright", "section": "Section::::History.:Early playwrights.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 453, "text": "The earliest playwright in Western literature with surviving works are the Ancient Greeks. These early plays were for annual Athenian competitions among play writers held around the 5th century BC. Such notables as Aeschylus, Sophocles, Euripides, and Aristophanes established forms still relied on by their modern counterparts. For the ancient Greeks, playwriting involved \"poïesis\", \"the act of making\". This is the source of the English word \"poet\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "187260", "title": "Culture of Greece", "section": "Section::::Literature.:Ancient Greece.\n", "start_paragraph_id": 96, "start_character": 0, "end_paragraph_id": 96, "end_character": 514, "text": "The first recorded works in the western literary tradition are the epic poems of Homer and Hesiod. Early Greek lyric poetry, as represented by poets such as Sappho and Pindar, was responsible for defining the lyric genre as it is understood today in western literature. Aesop wrote his \"Fables\" in the 6th century BC. These innovations were to have a profound influence not only on Roman poets, most notably Virgil in his epic poem on the founding of Rome, \"The Aeneid\", but one that flourished throughout Europe.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1844767", "title": "Archaic Greece", "section": "Section::::Culture.:Literature.:Poetry.\n", "start_paragraph_id": 71, "start_character": 0, "end_paragraph_id": 71, "end_character": 377, "text": "Greek literature in the archaic period was predominantly poetry, though the earliest prose dates to the sixth century BC. archaic poetry was primarily intended to be performed rather than read, and can be broadly divided into three categories: lyric, rhapsodic, and citharodic. The performance of the poetry could either be private (most commonly in the symposium) or public. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "214974", "title": "Culture of ancient Rome", "section": "Section::::The arts.:Literature.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 1600, "text": "In the ancient world, poetry usually played a far more important part of daily life than it does today. In general, educated Greeks and Romans thought of poetry as playing a much more fundamental part of life than in modern times. Initially in Rome poetry was not considered a suitable occupation for important citizens, but the attitude changed in the second and first centuries BC. 
In Rome poetry considerably preceded prose writing in date. As Aristotle pointed out, poetry was the first sort of literate to arouse people's interest in questions of style. The importance of poetry in the Roman Empire was so strong that Quintilian, the greatest authority on education, wanted secondary schools to focus on the reading and teaching of poetry, leaving prose writings to what would now be referred to as the university stage. Virgil represents the pinnacle of Roman epic poetry. His \"Aeneid\" was produced at the request of Maecenas and tells the story of flight of Aeneas from Troy and his settlement of the city that would become Rome. Lucretius, in his \"On the Nature of Things\", attempted to explicate science in an epic poem. Some of his science seems remarkably modern, but other ideas, especially his theory of light, are no longer accepted. Later Ovid produced his \"Metamorphoses\", written in dactylic hexameter verse, the meter of epic, attempting a complete mythology from the creation of the earth to his own time. He unifies his subject matter through the theme of metamorphosis. It was noted in classical times that Ovid's work lacked the \"gravitas\" possessed by traditional epic poetry.\n", "bleu_score": null, "meta": null } ] } ]
null
in4ps
How long before a nuclear weapon is incapable of producing a nuclear explosion?
[ { "answer": "The decay of the fissile material is much less likely to be the limiting factor than is the lifetime of the warhead, e.g. the electronics which cause it to detonate. Or in the case of a rocket-based warhead, the rocket may become nonviable while the warhead is still perfectly capable of exploding if you don't mind it going off in your own silo.", "provenance": null }, { "answer": "So the uranium 235 bombs required 56 kg of uranium. For an actual nuclear weapon (not a dirty bomb) about 85% of uranium must be weapons grade (not decayed).\n\nSoo.. Using formula N(t) = N e^ (-(half life)(t))\nwhere N(t) = 85% * 56 = 47.6\nN = 56\nhalf life constant = 9.72*10^-10 atoms per year\n\nt = 1.67201x10^8 years! A loooong time. Easier to disassemble the nukes than to wait for them to expire. ", "provenance": null }, { "answer": "Let's assume bomb makers don't want to waste their rare and expensive isotopes, using as little as possible to achieve the desired yeild. Adding a small safety (heh) factor, let's guess that below 95% of the original materials, the bomb fails.\n\nUranium 235 has a half life of 700 million years, so the 95% point is reached after 60 million years or so. So your bomb will be subducted by a continental plate or defused by highly evolved crab people before it loses it radioactive punch.\n\nPlutonium 239 has a half life of 24,100 years, so it would \"only\" take 1,800 years to reach the 95% point. Rusted to pieces long before that. ", "provenance": null }, { "answer": "Nuclear arms require conventional explosives to start the initial chain reaction. Most of these conventional explosives are unstable over long periods of time and are almost certainly going to be the first things that fail on weapons. After that, unstable / rusting wiring, casing, etc. will kill the weapon. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2908928", "title": "MAUD Committee", "section": "Section::::Activity.:University of Liverpool.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 529, "text": "Meanwhile, Pryce investigated how long a runaway nuclear chain reaction in an atomic bomb would continue before it blew itself apart. He calculated that since the neutrons produced by fission have an energy of about this corresponded to a speed of . The major part of the chain reaction would be completed in the order of (ten \"shakes\"). From 1 to 10 per cent of the fissile material would fission in this time; but even an atomic bomb with 1 per cent efficiency would release as much energy as 180,000 times its weight in TNT. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31889700", "title": "June 1962", "section": "Section::::June 4, 1962 (Monday).\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 276, "text": "BULLET::::- Plans to detonate an American nuclear weapon, 40 miles above the Earth, were halted one minute and 40 seconds before the scheduled explosion. 
Failure of the tracking system in the Thor missile led to the decision to blow the warhead apart without an atomic blast.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29952420", "title": "Chemistry: A Volatile History", "section": "Section::::Episode 3: The Power of the Elements.:The Manhattan Project.\n", "start_paragraph_id": 185, "start_character": 0, "end_paragraph_id": 185, "end_character": 251, "text": "For an explosion to occur, there must be a rapid release of energy – a slow release of energy from uranium nuclei would give a uranium fire, but no explosion. Both sides poured their effort into creating the necessary conditions for a chain reaction.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3185688", "title": "Nuclear reactor physics", "section": "Section::::Delayed neutrons and controllability.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 843, "text": "Fission reactions and subsequent neutron escape happen very quickly; this is important for nuclear weapons, where the objective is to make a nuclear pit release as much energy as possible before it physically explodes. Most neutrons emitted by fission events are prompt: they are emitted effectively instantaneously. Once emitted, the average neutron lifetime (formula_1) in a typical core is on the order of a millisecond, so if the exponential factor formula_3 is as small as 0.01, then in one second the reactor power will vary by a factor of (1 + 0.01), or more than ten thousand. Nuclear weapons are engineered to maximize the power growth rate, with lifetimes well under a millisecond and exponential factors close to 2; but such rapid variation would render it practically impossible to control the reaction rates in a nuclear reactor.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2589713", "title": "Chernobyl disaster", "section": "Section::::Accident.:Explosions.\n", "start_paragraph_id": 49, "start_character": 0, "end_paragraph_id": 49, "end_character": 523, "text": "A second, more powerful explosion occurred about two or three seconds after the first; this explosion dispersed the damaged core and effectively terminated the nuclear chain reaction. This explosion also compromised more of the reactor containment vessel and ejected hot lumps of graphite moderator. The ejected graphite and the demolished channels still in the remains of the reactor vessel caught fire on exposure to air, greatly contributing to the spread of radioactive fallout and the contamination of outlying areas.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "42015113", "title": "Joint Comprehensive Plan of Action", "section": "Section::::Summary of provisions.:Nuclear.\n", "start_paragraph_id": 52, "start_character": 0, "end_paragraph_id": 52, "end_character": 924, "text": "As a result of the above, the \"breakout time\"—the time in which it would be possible for Iran to make enough material for a single nuclear weapon—will increase from two to three months to one year, according to U.S. officials and U.S. intelligence. An August 2015 report published by a group of experts at Harvard University's Belfer Center for Science and International Affairs concurs in these estimates, writing that under the JCPOA, \"over the next decade would be extended to roughly a year, from the current estimated breakout time of 2 to 3 months\". The Center for Arms Control and Non-Proliferation also accepts these estimates. By contrast, Alan J. 
Kuperman, coordinator of the Nuclear Proliferation Prevention Project at the University of Texas at Austin, disputed the one-year assessment, arguing that under the agreement, Iran's breakout time \"would be only about three months, not much longer than it is today\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "337065", "title": "Exploding-bridgewire detonator", "section": "Section::::Description.:Use in nuclear weapons.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 623, "text": "Since explosives detonate at typically 7–8 kilometers per second, or 7–8 meters per millisecond, a 1 millisecond delay in detonation from one side of a nuclear weapon to the other would be longer than the time the detonation would take to cross the weapon. The time precision and consistency of EBWs (0.1 microsecond or less) are roughly enough time for the detonation to move 1 millimeter at most, and for the most precise commercial EBWs this is 0.025 microsecond and about 0.2 mm variation in the detonation wave. This is sufficiently precise for very low tolerance applications such as nuclear weapon explosive lenses.\n", "bleu_score": null, "meta": null } ] } ]
null
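The half-life arithmetic in the answers above follows directly from exponential decay, N(t) = N0·e^(−λt) with λ = ln 2 / half-life. A small Python sketch that reproduces their numbers; the half-lives are standard published values, while the 85% and 95% "bomb still works" thresholds are just the assumptions those answers chose, not engineering facts:

```python
import math

# Exponential decay: N(t) = N0 * exp(-lam * t), with lam = ln(2) / half_life.
HALF_LIFE_YEARS = {"U-235": 7.04e8, "Pu-239": 2.41e4}

def years_to_fraction(half_life, fraction):
    """Years until the remaining isotope falls to `fraction` of the original."""
    lam = math.log(2) / half_life            # decay constant, per year
    return math.log(1.0 / fraction) / lam    # t = ln(N0 / N) / lam

if __name__ == "__main__":
    for isotope, t_half in HALF_LIFE_YEARS.items():
        for frac in (0.95, 0.85):
            t = years_to_fraction(t_half, frac)
            print(f"{isotope} down to {frac:.0%}: ~{t:,.0f} years")
    # U-235 falls to 85% only after ~1.65e8 years, and Pu-239 to 95% after
    # ~1,800 years -- the explosives, wiring, and casing fail long before.
```

Running it confirms the answers' point: radioactive decay is never the limiting factor for warhead shelf life.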
8n8oge
When did people start to identify more with skin color than with language/culture?
[ { "answer": "Hi there -- you may be interested in [this recent answer](_URL_0_) from u/sowser, in which they go into some detail about how race is constructed through the experience of the Transatlantic slave trade. The whole thing is worth a read, but constructions of race are in part 4. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "25426221", "title": "Washington Redskins name controversy", "section": "Section::::History.:Origin and meaning of redskin.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 540, "text": "The historical context for the emergence in the Americas of racial identities based upon skin color was the establishment of colonies which developed a plantation economy dependent upon slave labor. Before that, the British identified themselves as Christians rather than white. \"At the start of the eighteenth century, Indians and Europeans rarely mentioned the color of each other's skins. By midcentury, remarks about skin color and the categorization of peoples by simple color-coded labels (red, white, black) had become commonplace.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38079914", "title": "Dark skin", "section": "Section::::Geographic distribution.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 828, "text": "More recent research has found that human populations over the past 50,000 years have changed from dark-skinned to light-skinned and vice versa. Only 100–200 generations ago, the ancestors of most people living today likely also resided in a different place and had a different skin color. According to Nina Jablonski, darkly pigmented modern populations in South India and Sri Lanka are an example of this, having redarkened after their ancestors migrated down from areas much farther north. Scientists originally believed that such shifts in pigmentation occurred relatively slowly. However, researchers have since observed that changes in skin coloration can happen in as little as 100 generations (~2,500 years), with no intermarriage required. The speed of change is also affected by clothing, which tends to slow it down.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38041", "title": "Human skin color", "section": "Section::::Genetics.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 819, "text": "The understanding of the genetic mechanisms underlying human skin color variation is still incomplete, however genetic studies have discovered a number of genes that affect human skin color in specific populations, and have shown that this happens independently of other physical features such as eye and hair color. Different populations have different allele frequencies of these genes, and it is the combination of these allele variations that bring about the complex, continuous variation in skin coloration we can observe today in modern humans. 
Population and admixture studies suggest a 3-way model for the evolution of human skin color, with dark skin evolving in early hominids in sub-Saharan Africa and light skin evolving independently in Europe and East Asia after modern humans had expanded out of Africa.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25614", "title": "Race (human categorization)", "section": "Section::::Modern scholarship.:Biological classification.:Morphologically differentiated populations.:Clines.\n", "start_paragraph_id": 49, "start_character": 0, "end_paragraph_id": 49, "end_character": 818, "text": "Patterns such as those seen in human physical and genetic variation as described above, have led to the consequence that the number and geographic location of any described races is highly dependent on the importance attributed to, and quantity of, the traits considered. Scientists discovered a skin-lighting mutation that partially accounts for the appearance of Light skin in humans (people who migrated out of Africa northward into what is now Europe) which they estimate occurred 20,000 to 50,000 years ago. The East Asians owe their relatively light skin to different mutations. On the other hand, the greater the number of traits (or alleles) considered, the more subdivisions of humanity are detected, since traits and gene frequencies do not always correspond to the same geographical location. Or as put it:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38041", "title": "Human skin color", "section": "Section::::Genetics.:Light skin.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 746, "text": "For the most part, the evolution of light skin has followed different genetic paths in European and East Asian populations. Two genes however, KITLG and ASIP, have mutations associated with lighter skin that have high frequencies in both European and East Asian populations. They are thought to have originated after humans spread out of Africa but before the divergence of the European and Asian lineages around 30,000 years ago. Two subsequent genome-wide association studies found no significant correlation between these genes and skin color, and suggest that the earlier findings may have been the result of incorrect correction methods and small panel sizes, or that the genes have an effect too small to be detected by the larger studies.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "50513", "title": "Melanin", "section": "Section::::Human adaptation.:Evolutionary origins.\n", "start_paragraph_id": 48, "start_character": 0, "end_paragraph_id": 48, "end_character": 1278, "text": "Early humans evolved to have dark skin color around 1.2 million years ago, as an adaptation to a loss of body hair that increased the effects of UV radiation. Before the development of hairlessness, early humans had reasonably light skin underneath their fur, similar to that found in other primates. The most recent scientific evidence indicates that anatomically modern humans evolved in Africa between 200,000 and 100,000 years, and then populated the rest of the world through one migration between 80,000 and 50,000 years ago, in some areas interbreeding with certain archaic human species (Neanderthals, Denisovans, and possibly others). It seems likely that the first modern humans had relatively large numbers of eumelanin-producing melanocytes, producing darker skin similar to the indigenous people of Africa today. 
As some of these original people migrated and settled in areas of Asia and Europe, the selective pressure for eumelanin production decreased in climates where radiation from the sun was less intense. This eventually produced the current range of human skin color. Of the two common gene variants known to be associated with pale human skin, \"Mc1r\" does not appear to have undergone positive selection, while \"SLC24A5\" has undergone positive selection.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38041", "title": "Human skin color", "section": "Section::::Evolution of skin color.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 372, "text": "For the most part, the evolution of light skin has followed different genetic paths in Western and Eastern Eurasian populations. Two genes however, KITLG and ASIP, have mutations associated with lighter skin that have high frequencies in Eurasian populations and have estimated origin dates after humans spread out of Africa but before the divergence of the two lineages.\n", "bleu_score": null, "meta": null } ] } ]
null
i5xmz
If you were to theoretically use a microwave to heat a freeze-dried food product in an environment with 0% humidity, what would the outcome be?
[ { "answer": "Just a guess but here goes: Microwaves don't heat *only* water. They will heat a number of molecules, even if they're \"tuned\" to water or whatever. Anyways, The microwaves heat up the dehydrated food, but since it no longer has water to boil off, I imagine it could heat up above a combustion point quickly. So it could perhaps begin combusting within the microwave.", "provenance": null }, { "answer": "A microwave oven will cause any molecule with dipoles to 'vibrate.' This includes water but also includes fats and sugars, so it would still heat up.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "58017", "title": "Microwave oven", "section": "Section::::Hazards.:Uneven heating.\n", "start_paragraph_id": 100, "start_character": 0, "end_paragraph_id": 100, "end_character": 418, "text": "Microwave ovens are frequently used for reheating leftover food, and bacterial contamination may not be repressed if the safe temperature is not reached, resulting in foodborne illness, as with all inadequate reheating methods. While microwaves can destroy bacteria as well as conventional ovens, they do not cook as evenly, leading to an increased risk that parts of the food will not reach recommended temperatures.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6079682", "title": "Susceptor", "section": "Section::::Design and use.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 260, "text": "Susceptors built into packaging create high temperatures in a microwave oven. This is useful for crisping and browning foods, as well as concentrating heat on the oil in a microwave popcorn bag (which is solid at room temperature) in order to melt it rapidly.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "42246351", "title": "Microwave volumetric heating", "section": "Section::::Thermal processing using microwaves.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 441, "text": "The FDA accepts that microwaves can be used to heat food for commercial use, pasteurization and sterilization. The main mechanism of microbial inactivation by microwaves is due to thermal effect; the phenomenon of lethality due to 'non-thermal effect' is controversial, and the mechanisms suggested include selective heating of micro-organisms, electroporation, cell membrane rupture, and cell lysis due to electromagnetic energy coupling. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24820253", "title": "Microwave heat distribution", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 289, "text": "For example, the uniformity of microwave heat distribution is key parameter in microwave food sterilization, due to the potential danger directly related to human health if the food has not been heated evenly up to desirable temperature for neutralization of possible bacteria population.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1002536", "title": "Freeze-drying", "section": "Section::::Equipment and types of freeze dryers.:Microwave-assisted freeze dryers.\n", "start_paragraph_id": 91, "start_character": 0, "end_paragraph_id": 91, "end_character": 602, "text": "Microwave-assisted freeze dryers utilize microwaves to allow for deeper penetration into the sample to expedite the sublimation and heating processes in freeze-drying. 
This method can be very complicated to set up and run as the microwaves can create an electrical field capable of causing gases in the sample chamber to become plasma. This plasma could potentially burn the sample, so maintaining a microwave strength appropriate for the vacuum levels is imperative. The rate of sublimation in a product can affect the microwave impedance, in which power of the microwave must be changed accordingly.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27625957", "title": "Microwave burn", "section": "Section::::Medical uses.\n", "start_paragraph_id": 48, "start_character": 0, "end_paragraph_id": 48, "end_character": 248, "text": "Microwave heating seems to cause more damage to bacteria than equivalent thermal-only heating. However food reheated in a microwave oven typically reaches lower temperature than classically reheated, therefore pathogens are more likely to survive.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26009047", "title": "Thermal cooking", "section": "Section::::Precautions.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 861, "text": "If a large part of the cooking time is spent at temperatures lower than 60 °C (as when the contents of the cooker are slowly cooling over a long period), a danger of food poisoning due to bacterial infection, or toxins produced by multiplying bacteria, arises. It is essential to heat food sufficiently at the outset of vacuum cooking; 60 °C throughout the dish for 10 minutes is sufficient to kill most pathogens of interest, effectively pasteurizing the dish. Some foods, such as kidney beans, fava beans, and many other varieties of beans contain a toxin, phytohaemagglutinin, that requires boiling at 100 °C for at least 10 minutes to break down to safe levels. The best practice is to bring briefly to a rolling boil then put the pot in the flask. This keeps it hottest longest. With big chunks of food, boil a little longer before putting into the flask.\n", "bleu_score": null, "meta": null } ] } ]
null
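The "microwaves heat any polar molecule" point in the answers above can be made quantitative: dielectric heating deposits power per unit volume as P = 2πf·ε0·ε″·E², so a material heats in proportion to its dielectric loss factor ε″. A rough Python sketch comparing wet and freeze-dried material; the loss factors and field strength below are illustrative guesses, not measured values for any particular food:

```python
import math

# Dielectric heating deposits power volumetrically:
#     P = 2 * pi * f * eps0 * eps_loss * E**2    [W per m^3]
# Anything with a nonzero loss factor -- water, fats, sugars -- absorbs energy.
EPS0 = 8.854e-12   # vacuum permittivity, F/m
FREQ = 2.45e9      # household magnetron frequency, Hz
E_RMS = 2.0e3      # assumed RMS field inside the cavity, V/m (illustrative)

LOSS_FACTORS = {                       # assumed, for illustration only
    "liquid water":             10.0,
    "freeze-dried food matrix":  0.2,  # fats/sugars, almost no free water
}

def volumetric_power(eps_loss):
    """Heating power density in W/m^3 for a given dielectric loss factor."""
    return 2 * math.pi * FREQ * EPS0 * eps_loss * E_RMS ** 2

if __name__ == "__main__":
    for material, eps in LOSS_FACTORS.items():
        print(f"{material}: ~{volumetric_power(eps):.2e} W/m^3")
    # Under these assumptions the dry product still heats, only ~50x more
    # slowly -- and with no water to boil off and cap the temperature near
    # 100 °C, it can keep climbing until it scorches or ignites.
```

This matches the answers' conclusion: the product still absorbs energy, but without water's evaporative buffer the temperature is free to run up toward charring.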
1zdvui
It takes 11 minutes of hypoxia for the brain to die, yet you can kill a man by strangling him in much less time. How come?
[ { "answer": "Strangling someone where pressure is put on the blood vessels in the neck can cause feedback to the heart which can cause it to go into cardiac arrest (gentle massage to the carotid is used to slow down rapid heartbeats). If done properly that can be done in only seconds. The person still takes a while to die but they have no heart beat. ", "provenance": null }, { "answer": "the brain doesn't die for 4-5minutes. but the heart stops causing imminent death (not immediate)\n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "66393", "title": "Clinical death", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 388, "text": "At the onset of clinical death, consciousness is lost within several seconds. Measurable brain activity stops within 20 to 40 seconds. Irregular gasping may occur during this early time period, and is sometimes mistaken by rescuers as a sign that CPR is not necessary. During clinical death, all tissues and organs in the body steadily accumulate a type of injury called ischemic injury.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "70802", "title": "Decapitation", "section": "Section::::Physiological aspects.:Physiology of death by decapitation.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 1685, "text": "Decapitation is quickly fatal to humans and most animals. Unconsciousness occurs within 10 seconds without circulating oxygenated blood (brain ischemia). Cell death and irreversible brain damage occurs after 3–6 minutes with no oxygen, due to excitotoxicity. Some anecdotes suggest more extended persistence of human consciousness after decapitation, but most doctors consider this unlikely and consider such accounts to be misapprehensions of reflexive twitching rather than deliberate movement, since deprivation of oxygen must cause nearly immediate coma and death (\"[Consciousness is] probably lost within 2–3 seconds, due to a rapid fall of intracranial perfusion of blood.\"). A laboratory study testing for humane methods of euthanasia in awake animals used EEG monitoring to measure the time duration following decapitation for rats to become fully unconscious, unable to perceive distress and pain. It was estimated that this point was reached within 3 - 4 seconds, correlating closely with results found in other studies on rodents (2.7 seconds, and 3 - 6 seconds). The same study also suggested that the massive wave which can be recorded by EEG monitoring approximately one minute after decapitation ultimately reflects brain death. Other studies indicate that electrical activity in the brain has been demonstrated to persist for 13 to 14 seconds following decapitation (although it is disputed as to whether such activity implies that pain is perceived), and a 2010 study reported that decapitation of rats generated responses in EEG indices over a period of 10 seconds that have been linked to nociception across a number of different species of animals, including rats. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "177602", "title": "Outer space", "section": "Section::::Environment.:Effect on human bodies.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 1277, "text": "As a consequence of rapid decompression, oxygen dissolved in the blood empties into the lungs to try to equalize the partial pressure gradient. 
Once the deoxygenated blood arrives at the brain, humans lose consciousness after a few seconds and die of hypoxia within minutes. Blood and other body fluids boil when the pressure drops below 6.3 kPa, and this condition is called ebullism. The steam may bloat the body to twice its normal size and slow circulation, but tissues are elastic and porous enough to prevent rupture. Ebullism is slowed by the pressure containment of blood vessels, so some blood remains liquid. Swelling and ebullism can be reduced by containment in a pressure suit. The Crew Altitude Protection Suit (CAPS), a fitted elastic garment designed in the 1960s for astronauts, prevents ebullism at pressures as low as 2 kPa. Supplemental oxygen is needed at to provide enough oxygen for breathing and to prevent water loss, while above pressure suits are essential to prevent ebullism. Most space suits use around 30–39 kPa of pure oxygen, about the same as on the Earth's surface. This pressure is high enough to prevent ebullism, but evaporation of nitrogen dissolved in the blood could still cause decompression sickness and gas embolisms if not managed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "261148", "title": "Apnea", "section": "Section::::Complications.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 527, "text": "Under normal conditions, humans cannot store much oxygen in the body. Prolonged apnea leads to severe lack of oxygen in the blood circulation. Permanent brain damage can occur after as little as three minutes and death will inevitably ensue after a few more minutes unless ventilation is restored. However, under special circumstances such as hypothermia, hyperbaric oxygenation, apneic oxygenation (see below), or extracorporeal membrane oxygenation, much longer periods of apnea may be tolerated without severe consequences.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31547326", "title": "Choke-out", "section": "Section::::Dangers.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 640, "text": "There is debate over the dangers of choke-outs. After 4 to 6 minutes of sustained cerebral anoxia, permanent brain damage will begin to occur, but the long-term effects of a controlled choke-out for less than 4 minutes (as most are applied for mere seconds and released when unconsciousness is achieved) are disputed. However, everyone should note that generally loss of oxygen is never safe and always (even if minimal) causes death of brain cells. There is always risk of short-term memory loss, hemorrhage and harm to the retina, concussions from falling when unconscious, stroke, seizures, permanent brain damage, coma, and even death.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1232575", "title": "Suicide methods", "section": "Section::::Hypothermia.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 358, "text": "Suicide by hypothermia is a slow death that goes through several stages. Hypothermia begins with mild symptoms, gradually leading to moderate and severe penalties. This may involve shivering, delirium, hallucinations, lack of coordination, sensations of warmth, then finally death. 
One's organs cease to function, though clinical brain death can be delayed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "54637386", "title": "Physiology of underwater diving", "section": "Section::::Breathhold limitations.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 337, "text": "1. Duration-induced hypoxia occurs when the breath is held long enough for metabolic activity to reduce the oxygen partial pressure sufficiently to cause loss of consciousness. This is accelerated by exertion, which uses oxygen faster, or by hyperventilation, which reduces the carbon dioxide level in the blood, which in turn may:\n", "bleu_score": null, "meta": null } ] } ]
null
3peo97
How far does the effect of time dilation "spread" from an object traveling at relativistic speeds?
[ { "answer": "No, it doesn't affect your clock at all (except an incredibly tiny amount of gravitational time dilation, which I don't think is what you're talking about and certainly isn't important for the discussion.)\n\nIn special relativity, time dilation is not a 'field' or localized effect. It's just a thing that happens to objects that are moving *relative to you*. Importantly, from the spaceship's point of view its clock is totally normal and your clock is the one going slow. It doesn't matter how far away the ship is or whether or not you can see it.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "11647860", "title": "Minkowski diagram", "section": "Section::::Time dilation.\n", "start_paragraph_id": 50, "start_character": 0, "end_paragraph_id": 50, "end_character": 576, "text": "Relativistic time dilation means that a clock (indicating its proper time) that moves relative to an observer is observed to run slower. In fact, time itself in the frame of the moving clock is observed to run slower. This can be read immediately from the adjoining Loedel diagram quite straightforwardly because unit lengths in the two system of axes are identical. Thus, in order to compare reading between the two systems, we can simply compare lengths as they appear on the page: we do not need to consider the fact that unit lengths on each axis are warped by the factor\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1608886", "title": "Tests of special relativity", "section": "Section::::Special relativity.:Time dilation and length contraction.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 1012, "text": "The transverse Doppler effect and consequently time dilation was directly observed for the first time in the Ives–Stilwell experiment (1938). In modern Ives-Stilwell experiments in heavy ion storage rings using saturated spectroscopy, the maximum measured deviation of time dilation from the relativistic prediction has been limited to ≤ 10. Other confirmations of time dilation include Mössbauer rotor experiments in which gamma rays were sent from the middle of a rotating disc to a receiver at the edge of the disc, so that the transverse Doppler effect can be evaluated by means of the Mössbauer effect. By measuring the lifetime of muons in the atmosphere and in particle accelerators, the time dilation of moving particles was also verified. On the other hand, the Hafele–Keating experiment confirmed the twin paradox, \"i.e.\" that a clock moving from A to B back to A is retarded with respect to the initial clock. However, in this experiment the effects of general relativity also play an essential role.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33431450", "title": "Modern searches for Lorentz violation", "section": "Section::::Time dilation.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 397, "text": "The current precision with which time dilation is measured (using the RMS test theory), is at the ~10 level. It was shown, that Ives-Stilwell type experiments are also sensitive to the formula_3 isotropic light speed coefficient of the SME, as introduced above. 
Chou \"et al.\" (2010) even managed to measure a frequency shift of ~10 due to time dilation, namely at everyday speeds such as 36 km/h.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1729618", "title": "Stasis (fiction)", "section": "Section::::Overview.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 385, "text": "There are real phenomena that cause time dilation similar that of a stasis field. Extremely high velocities approaching light speed or immensely powerful gravitational fields such as those existing near the event horizons of black holes will cause time to progress more slowly. However, there is no known theoretical way of causing such time dilation independently of such conditions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "297839", "title": "Time dilation", "section": "Section::::Velocity time dilation.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 413, "text": "Theoretically, time dilation would make it possible for passengers in a fast-moving vehicle to advance further into the future in a short period of their own time. For sufficiently high speeds, the effect is dramatic. For example, one year of travel might correspond to ten years on Earth. Indeed, a constant 1 g acceleration would permit humans to travel through the entire known Universe in one human lifetime.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19595664", "title": "Time in physics", "section": "Section::::Conceptions of time.:Einstein's physics: spacetime.\n", "start_paragraph_id": 71, "start_character": 0, "end_paragraph_id": 71, "end_character": 547, "text": "That is, the stronger the gravitational field (and, thus, the larger the acceleration), the more slowly time runs. The predictions of time dilation are confirmed by particle acceleration experiments and cosmic ray evidence, where moving particles decay more slowly than their less energetic counterparts. Gravitational time dilation gives rise to the phenomenon of gravitational redshift and Shapiro signal travel time delays near massive objects such as the sun. The Global Positioning System must also adjust signals to account for this effect.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "531373", "title": "Gravity Probe A", "section": "Section::::Background.:Time dilation.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 1143, "text": "There is a similar idea of time dilation occurrence in Einstein's theory of special relativity (which deals with neither gravity nor the idea of curved spacetime). Such time dilation appears in the Rindler coordinates, attached to a uniformly accelerating particle in a flat spacetime. Such a particle would observe time passing faster on the side it is accelerating towards and more slowly on the opposite side. From this apparent variance in time, Einstein inferred that change in velocity affects the relativity of simultaneity for the particle. Einstein's equivalence principle generalizes this analogy, stating that an accelerating reference frame is locally indistinguishable from an inertial reference frame with a gravity force acting upon it. 
In this way, the Gravity Probe A was a test of the equivalence principle, matching the observations in the inertial reference frame (of special relativity) of the Earth's surface affected by gravity, with the predictions of special relativity for the same frame treated as accelerating upwards with respect to a free-fall reference frame, which can be thought of as inertial and gravity-less.\n", "bleu_score": null, "meta": null } ] } ]
null
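To make the answer's point concrete (the Lorentz factor depends only on the relative speed v; the distance to the ship never enters the formula), here is a minimal illustrative sketch in Python. The function name and the sample speeds are illustrative choices, not anything taken from the sources above.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v: float) -> float:
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2) for relative speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Note that no distance appears anywhere: only the relative speed matters,
# exactly as the answer above says.
for fraction in (0.1, 0.5, 0.9, 0.99):
    gamma = lorentz_factor(fraction * C)
    print(f"v = {fraction:.2f}c -> gamma = {gamma:.3f} "
          f"(1 year on the ship reads as {gamma:.3f} years to the observer)")
```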
9h1v9j
During the height of the Cathar movement, what were gender relations like among the Cathar Christians? Did their theology translate into women having a more equal status in society?
[ { "answer": "Catharism is a well-studied topic, and while you are waiting for fresh responses to your question, it is well worth reviewing [this earlier thread](_URL_0_), led by u/sunagainstgold, which looks at the the history and historiography of the supposed heresy, and points out that our understanding of \"Catharism\" is really a construct imposed by the outsiders who persecuted it, adding that \"medievalists today are pretty unanimous that there was no such thing as 'Catharism' in southern France in the 12th-13th century.\"\n\nSun does also touch on the specific area of gender relations in the time and place that you're interested in.\n\nMeanwhile, and in the same thread, u/idjet (who was writing a dissertation on the Cathars) pushes back in posts that argue for something more approaching the old standard view of Catharism as a distinct set of real beliefs.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "6007007", "title": "Women in the Bible", "section": "Section::::New Testament.:New Testament views on gender.:Women in the early church.\n", "start_paragraph_id": 98, "start_character": 0, "end_paragraph_id": 98, "end_character": 748, "text": "Sociologist Linda L. Lindsey says \"Belief in the spiritual equality of the genders (Galatians 3:28) and Jesus' inclusion of women in prominent roles, led the early New Testament church to recognize women's contributions to charity, evangelism and teaching.\" Pliny the Younger, first century, says in his letter to Emperor Trajan that Christianity had people from every age and rank, and refers to \"two women slaves called deaconesses\" . Professor of religion Margaret Y. MacDonald uses a \"social scientific concept of power\" which distinguishes between power and authority to show early Christian women, while lacking overt authority, still retained sufficient indirect power and influence to play a significant role in Christianity's beginnings. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5686076", "title": "Christian feminism", "section": "Section::::History.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 466, "text": "The first wave of feminism in the nineteenth and early twentieth centuries included an increased interest in the place of women in religion. Women who were campaigning for their rights began to question their inferiority both within the church and in other spheres, which had previously been justified by church teachings. Some Christian feminists of this period were Marie Maugeret, Katharine Bushnell, Catherine Booth, Frances Willard, and Elizabeth Cady Stanton.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24439012", "title": "Women in Church history", "section": "Section::::Apostolic age.:Early spread of Christianity.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 837, "text": "Historian Geoffrey Blainey writes that women probably comprised the majority in early Christian congregations. This large female membership likely stemmed in part from the early church's informal and flexible organization offering significant roles to women. Another factor is that there appeared to be no division between clergy and laity. Leadership was shared among male and female members according to their \"gifts\" and talents. 
\"But even more important than church organization was the way in which the Gospel tradition and the Gospels themselves, along with the writing of Paul, could be interpreted as moving women beyond silence and subordination.\" Women may also have been driven from Judaism to Christianity through the taboos and rituals related to the menstrual cycle, and a society preference for male over female children.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7630", "title": "Catharism", "section": "Section::::General beliefs.:Role of women and gender.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 483, "text": "Cathars believed that one would be repeatedly reincarnated until one commits to the self-denial of the material world. A man could be reincarnated as a woman and vice versa, thereby rendering gender meaningless. The spirit was of utmost importance to the Cathars and was described as being immaterial and sexless. Because of this belief, the Cathars saw women as equally capable of being spiritual leaders, which undermined the very concept of gender as held by the Catholic Church.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41338940", "title": "Feminist movement", "section": "Section::::History.:Religion.\n", "start_paragraph_id": 37, "start_character": 0, "end_paragraph_id": 37, "end_character": 547, "text": "Early feminists such as Elizabeth Cady Stanton concentrated almost solely on \"making women equal to men.\" However, the Christian feminist movement chose to concentrate on the language of religion because they viewed the historic gendering of God as male as a result of the pervasive influence of patriarchy. Rosemary Radford Ruether provided a systematic critique of Christian theology from a feminist and theist point of view. Stanton was an agnostic and Reuther is an agnostic who was born to Catholic parents but no longer practices the faith.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5686076", "title": "Christian feminism", "section": "Section::::History.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 885, "text": "Some Christian feminists believe that the principle of egalitarianism was present in the teachings of Jesus and the early Christian movements, but this is a highly contested view by many feminist scholars who believe that Christianity itself relies heavily on gender roles. These interpretations of Christian origins have been criticized by secular feminists for \"anachronistically projecting contemporary ideals back into the first century.\" In the Middle Ages Julian of Norwich and Hildegard of Bingen explored the idea of a divine power with both masculine and feminine characteristics. Feminist works from the fifteenth to seventeenth centuries addressed objections to women learning, teaching and preaching in a religious context. 
One such proto-feminist was Anne Hutchinson, who was cast out of the Puritan colony of Massachusetts for teaching on the dignity and rights of women.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10329352", "title": "Women in Christianity", "section": "Section::::Women in church history.:Apostolic age.\n", "start_paragraph_id": 86, "start_character": 0, "end_paragraph_id": 86, "end_character": 540, "text": "From the very beginning of the early Christian church, women were important members of the movement, although some complain that much of the information in the New Testament on the work of women has been overlooked. Some also argue that many assumed that it had been a \"man's church\" because sources of information stemming from the New Testament church were written and interpreted by men. Recently, scholars have begun looking in mosaics, frescoes, and inscriptions of that period for information about women's roles in the early church.\n", "bleu_score": null, "meta": null } ] } ]
null
13asz0
Why does the reflection in a shallow pond change depending on the viewing angle?
[ { "answer": "The answer you seek lies in [Fresnel Equations](_URL_0_). While Snell's Law (n1 Sin[theta1] = n2 Sin[theta2]) will tell you about the angle of refraction compared to the angle of incidence, you need Fresnel equations to tell you *how much* light is refracted vs how much is transmitted.\n\nTake a look at [this image](_URL_1_), which shows reflectance and transmittance as a function of incident angle (in this case, specifically in regards to light transmitting between air and glass). You can see in the graph on the left - light in air striking a surface of glass - that when the angle of incidence is close to zero, there is a very low reflection coefficient, which means that *most* of the light that hits the surface will transmit rather than reflect. As that angle of incidence increases, the proportion of light that is reflected only increases. Eventually, the brightness of the reflected light will outstrip the brightness of any transmitted light coming from inside the glass/second material.\n\nThis same thing applies to seeing a reflection in a pond. At low angles of incidence (looking straight down), very little light reflects so most of what you see is transmitted from under the water out into the air. At high angles of incidence a lot of light reflects so that reflected light is most of what you see.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "30426", "title": "Total internal reflection", "section": "Section::::Everyday examples.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 981, "text": "A similar effect can be observed by opening one's eyes while swimming just below the water's surface. If the water is calm, the surface outside the critical angle (measured from the vertical) appears mirror-like, reflecting objects below. The region above the water cannot be seen except overhead, where the hemispherical field of view is compressed into a conical field known as \"Snell's window\", whose angular diameter is twice the critical angle (cf. Fig.6). The field of view above the water is theoretically 180° across, but seems less because as we look closer to the horizon, the vertical dimension is more strongly compressed by the refraction; e.g., by Eq.(), for air-to-water incident angles of 90°, 80°, and 70°, the corresponding angles of refraction are 48.6° (\"θ\" in Fig.6), 47.6°, and 44.8°, indicating that the image of a point 20° above the horizon is 3.8° from the edge of Snell's window while the image of a point 10° above the horizon is only 1° from the edge.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9288958", "title": "Snell's window", "section": "Section::::Image formation.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 819, "text": "Under ideal conditions, an observer looking up at the water surface from underneath sees a perfectly circular image of the entire above-water hemisphere—from horizon to horizon. Due to refraction at the air/water boundary, Snell's window compresses a 180° angle of view above water to a 97° angle of view below water, similar to the effect of a fisheye lens. The brightness of this image falls off to nothing at the circumference/horizon because more of the incident light at low grazing angles is reflected rather than refracted (see Fresnel equations). 
Refraction is very sensitive to any irregularities in the flatness of the surface (such as ripples or waves), which will cause local distortions or complete disintegration of the image. Turbidity in the water will veil the image behind a cloud of scattered light.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "191723", "title": "Ripple tank", "section": "Section::::Refraction.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 594, "text": "If a sheet of glass is placed in the tank, the depth of water in the tank will be shallower over the glass than elsewhere. The speed of a wave in water depends on the depth, so the ripples slow down as they pass over the glass. This causes the wavelength to decrease. If the junction between the deep and shallow water is at an angle to the wavefront, the waves will refract. In the diagram above, the waves can be seen to bend towards the normal. The normal is shown as a dotted line. The dashed line is the direction that the waves would travel if they had not met the angled piece of glass.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25948", "title": "Refraction", "section": "Section::::Light.:Refraction in a water surface.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 614, "text": "For small angles of incidence (measured from the normal, when sin θ is approximately the same as tan θ), the ratio of apparent to real depth is the ratio of the refractive indexes of air to that of water. But, as the angle of incidence approaches 90°, the apparent depth approaches zero, albeit reflection increases, which limits observation at high angles of incidence. Conversely, the apparent height approaches infinity as the angle of incidence (from below) increases, but even earlier, as the angle of total internal reflection is approached, albeit the image also fades from view as this limit is approached.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30860655", "title": "Van Cittert–Zernike theorem", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 324, "text": "This reasoning can be easily visualized by dropping two stones in the center of a calm pond. Near the center of the pond, the disturbance created by the two stones will be very complicated. As the disturbance propagates towards the edge of the pond, however, the waves will smooth out and will appear to be nearly circular.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33555016", "title": "Wave setup", "section": "Section::::In and near the coastal surf zone.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 395, "text": "As a progressive wave approaches shore and the water depth decreases, the wave height increases due to wave shoaling. As a result, there is additional wave-induced flux of horizontal momentum. The horizontal momentum equations of the mean flow require this additional wave-induced flux to be balanced: this causes a decrease in the mean water level before the waves break, called a \"setdown\". 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9277", "title": "Ellipse", "section": "Section::::Applications.:Physics.:Elliptical reflectors and acoustics.\n", "start_paragraph_id": 260, "start_character": 0, "end_paragraph_id": 260, "end_character": 328, "text": "If the water's surface is disturbed at one focus of an elliptical water tank, the circular waves of that disturbance, after reflecting off the walls, converge simultaneously to a single point: the \"second focus\". This is a consequence of the total travel length being the same along any wall-bouncing path between the two foci.\n", "bleu_score": null, "meta": null } ] } ]
null
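The Fresnel behaviour described in the answer above is easy to check numerically. The sketch below computes the unpolarized power reflectance at an air-to-water interface from the standard Fresnel equations; the function name and the sample angles are illustrative choices, and n2 = 1.33 is the usual approximate refractive index of water.

```python
import math

def fresnel_reflectance(theta_i_deg: float, n1: float = 1.0, n2: float = 1.33) -> float:
    """Unpolarized power reflectance at a dielectric interface (Fresnel equations)."""
    ti = math.radians(theta_i_deg)
    # Snell's law: n1*sin(ti) = n2*sin(tt) gives the transmitted angle.
    tt = math.asin(n1 * math.sin(ti) / n2)
    rs = ((n1 * math.cos(ti) - n2 * math.cos(tt)) /
          (n1 * math.cos(ti) + n2 * math.cos(tt))) ** 2  # s-polarization
    rp = ((n1 * math.cos(tt) - n2 * math.cos(ti)) /
          (n1 * math.cos(tt) + n2 * math.cos(ti))) ** 2  # p-polarization
    return 0.5 * (rs + rp)  # average of the two polarizations for unpolarized light

# Looking straight down (0 deg) almost nothing reflects; near grazing almost everything does.
for angle in (0, 30, 60, 80, 89):
    print(f"{angle:2d} deg -> {fresnel_reflectance(angle):.1%} reflected")
```

Running it shows roughly 2% reflectance looking straight down and a steep climb toward total reflection near grazing incidence, which is exactly the viewing-angle effect in a shallow pond.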
r54cc
the process and significance of "making partner" in a law firm
[ { "answer": "Significance, you become technically a part owner, and instead of just making your salary, you spin off profits from everyone else's billed hours too. A percentage of everything an associate bills goes to the \"partner fund.\"\n\nThis may be completely wrong, it's just what I've learned from books.", "provenance": null }, { "answer": "It takes anywhere from 2 to 10 years to make partner at an average law firm (sometimes longer). In order to be considered for a partner position (while working as an associate), you usually need to work very hard and contribute a lot to your firm's business. It helps if you can pull a lot of all-nighters, find new clients through connections, publish articles in law journals to gain prestige for your firm, or show great talent in a particular field of law that your firm works in.\n\nIn some firms, once you've gained enough experience, you're given a big, important client or assignment. If you succeed, you are promoted to partnership. A partner usually owns a share in the company, which means that they automatically make money every year by receiving a portion of the company's profits. There is also less pressure on a partner to work hard, because they've already \"made it\". So as a partner, you can take it easy unless you're really into your work or want to make even more money. An associate at a big law firm makes around $100,000 per year. A partner at the same law firm will make anywhere from $300,00 to over a million. In small firms that only have a couple of partners, your last name will also be added to the firm's name. So if your name is Johnson and you work at Anderson and Smith, your firm may be renamed to Anderson, Smith and Johnson once you make partner.", "provenance": null }, { "answer": "Alrighty, I'm only a 1L so this will probably have lots of holes in it.. but here you go.\n\nSo you have a law firm, lots of lawyers at different levels working on different shit. Associates are the lower level people doing research and appearing in court sometimes, while partners are the upper level people who oversee the operations of the entire firm. Some partners might be in court regularly while others might focus strictly on the business of running the firm.\n\nOne of the main differences is payment. Associates get paid a salary, as in they get $_____ per year. Partners get partial ownership in the firm. At the end of the year when they tally up the firm's profits, they split profits between all the partners depending on their percentage of ownership in the firm.\n\nSince one lump sum payment at the end of the year is kind of a shitty way to get paid, they often pay partners on a draw. This means they guesstimate how much a partner will get at the end of the year and divide it by 12, then pay this amount out each month (or divide by 24/paid every 2 weeks, whatever). 
In this system, however, when they tally up the end-of-year profits the partner can either get a bonus or actually owe the firm money because his draw was bigger than his contribution to the firm.\n\nAlso, partners are required to bill a certain number of hours for the firm each year, but one of their more important roles is to bring in new clients.\n\n*I say \"he\" because I was trained to write by sexist women, but there are many great lady lawyers who are partners at great firms!", "provenance": null }, { "answer": "Most law firms are established as private partnerships, meaning that the owners are a small group of individuals (called 'partners') who collectively share control of the business as well as its profits and losses.\n\nWhen a lawyer \"makes partner\", it means the existing partners have invited him to become a partner; i.e. they've invited him to share ownership of the business. He becomes an employer as opposed to an employee. Instead of getting his employee salary, he gets a cut of everything the business makes (or loses), *and* he becomes one of \"the bosses\".", "provenance": null }, { "answer": "It's all about money and power. Partners receive a greater share of the firm's income beyond their salary. Additionally, partners have voting rights. If you think of the firm as a company then the partners are all stockholders / owners in addition to employees. ", "provenance": null }, { "answer": "An equity partner has ownership interest in the firm.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "222783", "title": "Partnership", "section": "Section::::Common law.:India.\n", "start_paragraph_id": 62, "start_character": 0, "end_paragraph_id": 62, "end_character": 462, "text": "4) Partners are Mutual Agents. The business of a firm can be carried on by all of the partners, or by any of them acting for all. Any partner has authority to bind the firm, so an act of any one partner is binding on all the partners. Thus, each partner is an ‘agent’ of all the remaining partners. Hence, partners are ‘mutual agents’. Section 18 of the Partnership Act, 1932 says \"Subject to the provisions of this Act, a partner is the agent of the firm for the purpose of the business of the firm\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14684519", "title": "Partner (business rank)", "section": "Section::::Law firms.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 1073, "text": "In law firms, partners are primarily those senior lawyers who are responsible for generating the firm's revenue. The standards for equity partnership vary from firm to firm. Many law firms have a \"two-tiered\" partnership structure, in which some partners are designated as \"salaried partners\" or \"non-equity\" partners, and are allowed to use the \"partner\" title but do not share in profits. This position is often given to lawyers on track to become equity partners so that they can more easily generate business; it is typically a \"probationary\" status for associates (or former equity partners, who do not generate enough revenue to maintain equity partner status). The distinction between equity and non-equity partners is often internal to the firm and not disclosed to clients, although a typical equity partner could be compensated three times as much as a non-equity partner billing at the same hourly rate. 
In America, senior lawyers not on track for partnership often use the title \"of counsel\", whilst their equivalents in Britain use the title \"Senior Counsel\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20379982", "title": "Indian Contract Act, 1872", "section": "Section::::Agency.\n", "start_paragraph_id": 77, "start_character": 0, "end_paragraph_id": 77, "end_character": 891, "text": "In law, the relationship that exists when one person or party (the principal) engages another (the agent) to act for him, e.g. to do his work, to sell his goods, to manage his business. The law of agency thus governs the legal relationship in which the agent deals with a third party on behalf of the principal. The competent agent is legally capable of acting for this principal vis-à-vis the third party. Hence, the process of concluding a contract through an agent involves a twofold relationship. On the one hand, the law of agency is concerned with the external business relations of an economic unit and with the powers of the various representatives to affect the legal position of the principal. On the other hand, it rules the internal relationship between principal and agent as well, thereby imposing certain duties on the representative (diligence, accounting, good faith, etc.)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1690376", "title": "List of legal entity types by country", "section": "Section::::Pakistan.\n", "start_paragraph_id": 626, "start_character": 0, "end_paragraph_id": 626, "end_character": 374, "text": "A partnership is a business relationship entered into by a formal agreement between two or more persons or corporations carrying on a business in common. The capital for a partnership is provided by the partners who are liable for the total debts of the firms and who share the profits and losses of the business concern according to the terms of the partnership agreement.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56496657", "title": "Continental Bank Leasing Corp v Canada", "section": "Section::::Analysis.:Definition of \"Partnership\".\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 798, "text": "The court defines a \"partnership\" as having three elements: it is a business, it is carried on in common, and it is carried on with a view to profit. On the first, Bastarache J passes the partnership because it is a trade, occupation, or profession as per section 1(1)(a) of the Partnership Act. The partnership is also deemed to carry on business in common because, as long as management and duties are outlined in the partnership agreement, there are no requirements about the length of a partnership relationship, or that the partnership need expand its business in that time—even if, as in this case, the partnership business was fairly idle over the Christmas holidays. 
Last, the court concluded that the pursuit of profit need only be an ancillary purpose to the creation of the partnership.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "894300", "title": "Law firm", "section": "Section::::Structure and promotion.:Partnership.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 899, "text": "Law firms are typically organized around partners, who are joint owners and business directors of the legal operation; associates, who are employees of the firm with the prospect of becoming partners; and a variety of staff employees, providing paralegal, clerical, and other support services. An associate may have to wait as long as 11 years before the decision is made as to whether the associate is made a partner. Many law firms have an \"up or out\" policy, integral to the Cravath System (pioneered during the early 20th century by partner Paul Cravath of Cravath, Swaine & Moore), which became widely adopted, particularly by white-shoe firms; associates who do not make partner are required to resign, and may join another firm, become a solo practitioner, work in-house for a corporate legal department, or change professions. Burnout rates are notably high in the profession.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5752181", "title": "Contract management", "section": "Section::::Contracts.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 541, "text": "A partnership agreement may be a contract which formally establishes the terms of a partnership between two legal entities such that they regard each other as 'partners' in a commercial arrangement. However, such expressions may also be merely a means to reflect the desire of the contracting parties to act 'as if' both are in a partnership with common goals. Therefore, it might not be the common law arrangement of a partnership which by definition creates fiduciary duties and which also has 'joint and several' liabilities.\n", "bleu_score": null, "meta": null } ] } ]
null
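The "draw" system described in the third answer above is just simple arithmetic, so a tiny sketch may help. Everything here (the function, the 5% ownership share, the dollar amounts) is hypothetical and only illustrates the mechanics of a draw against year-end profit sharing.

```python
def year_end_settlement(ownership_share: float, firm_profit: float,
                        monthly_draw: float) -> float:
    """Return the year-end payment (positive) or clawback (negative) for a partner
    paid a monthly draw against their share of annual firm profits."""
    entitled = ownership_share * firm_profit  # partner's cut of the profit pool
    drawn = 12 * monthly_draw                 # what was already paid out as a draw
    return entitled - drawn

# Hypothetical: a 5% equity partner, $10M firm profit, $35k/month draw.
print(year_end_settlement(0.05, 10_000_000, 35_000))  # 80000.0  -> year-end bonus
print(year_end_settlement(0.05, 7_000_000, 35_000))   # -70000.0 -> owes the firm
```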
n4fuf
the corruption in illinois.
[ { "answer": "good question. \n\nillinois and chicago politics has a rich history of filth. ", "provenance": null }, { "answer": "If you're looking for a simple answer, you're not going to find one. The various motivations and relationships between corruption, politics, and power is an extremely complex issue that can be interpreted through multiple lenses. Aside from what you can easily read on the relevant Wikipedia articles, there's a deeper story about the history of Illinois politics vis-a-vis the history of Chicago.\n\nI was born and raised in Chicago; much of the historical corruption here is a result of the city's rise as an industrial powerhouse in the late 1800s and its foundation as both a nexus of immigration and as a labor/union stronghold. Chicago was the first uniquely \"American\" city - unlike Philadelphia, NYC, and Boston, it does not have its roots in the colonies. As such, it has its own set of unique cultural and sociological identifiers that differentiate it from other urban metropoles. \n\nChicago was a huge destination for Irish, German, Polish, Russian, and Italian immigrants (to name only a few) during the industrial era... these ethnic groupings paved the way for Chicago's [political machine](_URL_0_) that sought to protect the interests of ethnic immigrants by trading votes for patronage. Being the two primary nodes of political power in the state, Springfield (Illinois' capital) and Chicago have historically maintained a very close relationship as well.\n\nAlthough the era of the ethnic political machine has passed, elements of machine politics are still very much prevalent today. While this does not provide an explicit answer for the contemporary corruption in Springfield (Blago, Ryan, et cetera), I feel that the history of Chicago provides a lot of context to why the game of politics is played just a bit differently here than in other places. It really is a fascinating story.", "provenance": null }, { "answer": "good question. \n\nillinois and chicago politics has a rich history of filth. ", "provenance": null }, { "answer": "If you're looking for a simple answer, you're not going to find one. The various motivations and relationships between corruption, politics, and power is an extremely complex issue that can be interpreted through multiple lenses. Aside from what you can easily read on the relevant Wikipedia articles, there's a deeper story about the history of Illinois politics vis-a-vis the history of Chicago.\n\nI was born and raised in Chicago; much of the historical corruption here is a result of the city's rise as an industrial powerhouse in the late 1800s and its foundation as both a nexus of immigration and as a labor/union stronghold. Chicago was the first uniquely \"American\" city - unlike Philadelphia, NYC, and Boston, it does not have its roots in the colonies. As such, it has its own set of unique cultural and sociological identifiers that differentiate it from other urban metropoles. \n\nChicago was a huge destination for Irish, German, Polish, Russian, and Italian immigrants (to name only a few) during the industrial era... these ethnic groupings paved the way for Chicago's [political machine](_URL_0_) that sought to protect the interests of ethnic immigrants by trading votes for patronage. 
{ "answer": null, "provenance": [ { "wikipedia_id": "43582002", "title": "Corruption in Illinois", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 325, "text": "Corruption in Illinois has been a problem from the earliest history of the state. Electoral fraud in Illinois pre-dates the territory's admission to the Union in 1818. Illinois was the third most corrupt state in the country, after New York and California, judging by federal public corruption convictions between 1976 and 2012.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "893818", "title": "United States District Court for the Northern District of Illinois", "section": "Section::::History.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 313, "text": "The Northern District of Illinois, which contains the entire Chicago metropolitan area, accounts for 1531 of the 1828 public corruption convictions in the state between 1976 and 2012, almost 84%, also making it the federal district with the most public corruption convictions in the nation between 1976 and 2012.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2236326", "title": "Political history of Chicago", "section": "Section::::Corruption.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 854, "text": "Chicago has a long history of political corruption, dating to the incorporation of the city in 1833. It has been a de facto monolithic entity of the Democratic Party from the mid 20th century onward. Research released by the University of Illinois at Chicago reports that Chicago and Cook County's judicial district recorded 45 public corruption convictions for 2013, and 1642 convictions since 1976, when the Department of Justice began compiling statistics. This prompted many media outlets to declare Chicago the \"corruption capital of America\". Gradel and Simpson's \"Corrupt Illinois\" (2015) provides the data behind Chicago's corrupt political culture. They found that a tabulation of federal public corruption convictions makes Chicago \"undoubtedly the most corrupt city in our nation\", with the cost of corruption \"at least\" $500 million per year.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11669284", "title": "Crime in Chicago", "section": "Section::::Public corruption and political crime.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 608, "text": "Most corruption cases in Chicago are prosecuted by the US Attorney's office, as legal jurisdiction makes most offenses punishable as a federal crime. The current US Attorney for the Northern District of Illinois is Zachary T. Fardon. 
In a press conference in January 2016, in the wake of the conviction of former Chicago City Hall official, John Bills, for taking 2 million dollars in bribes, Fardon commented \"Public corruption [in Chicago] is a disease and where public officials violate the public trust, we have to hold them accountable. And I do believe that by doing so, it sends a deterrent message.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "772102", "title": "Chicago City Council", "section": "Section::::History.:Corruption.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 487, "text": "Chicago City Council Chambers has long been the center of public corruption in Chicago. The first conviction of Chicago aldermen and Cook County Commissioners for accepting bribes to rig a crooked contract occurred in 1869. Between 1972 and 1999, 26 current or former Chicago aldermen were convicted for official corruption. Between 1973 and 2012, 31 aldermen were convicted of corruption. Approximately 100 aldermen served in that period, which is a conviction rate of about one-third.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11827814", "title": "Chicago Crime Commission", "section": "Section::::Highlights.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 365, "text": "BULLET::::- The Commission monitors the federal prosecutions of Illinois' alleged pay-to-play political insiders and the general public corruption allegations levied against government officials throughout Illinois, including the trials involving former Governor George H. Ryan, his former Chief-of-Staff, Scott Fawell and current Chicago businessman Antoin Rezko.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11669284", "title": "Crime in Chicago", "section": "Section::::Public corruption and political crime.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 759, "text": "A 2015 report released by the University of Illinois at Chicago's political science department declared Chicago the \"corruption capital of America\", citing that the Chicago-based Federal Judicial District for Northern Illinois reported 45 public corruption convictions for 2013 and a total of 1,642 convictions for the 38 years since 1976 when the U.S. Department of Justice began compiling the statistics. UIC Professor and former Chicago Alderman Dick Simpson noted in the report that \"To end corruption, society needs to do more than convict the guys that get caught. A comprehensive anti-corruption strategy must be forged and carried out over at least a decade. A new political culture in which public corruption is no longer tolerated must be created\".\n", "bleu_score": null, "meta": null } ] } ]
null
2zx5be
why do phone carriers (verizon, etc) have a say in the release of updates for android phones, but not iphones?
[ { "answer": "iPhones are a locked ecosystem. The software & hardware is produced by them and therefore updates are pushed out whenever they want independent of the carrier. \n\nAndroid is an operating system that runs on other peoples hardware ... The hardware manufacturer has a deal with the carriers, the carrier sells their phones if they add in / lock the phone to that carrier and add their proprietary apps with backdoor access. \n\nTherefore google launches an updates, but until the hardware manufacturer configures it for the phone and hands it to the carrier and then the carrier rolls it out ... You are stuck in the middle. ", "provenance": null }, { "answer": "Because with Android phones, those providers make their own fork of Android - like with Ubuntu, openSUSE, Fedora, etc. all being versions of Linux (the changes made within Linux are more substantial than in this case, but basically it is the same concept). \nOn iPhones, Apple provides the OS, no matter who sells the phone. \n\nThis, on one hand, can be a good thing - as independent groups can make custom Android versions, more diverse hardware can be supported, carriers/retailers can add features to make their product more desirable, the whole thing is open source, etc... but it can also be bad, because after Google pushes an update for Android, Verizon and all the other carriers have to implement those changes into their Android versions. \n\nSo TL;DR iPhone OS always comes directly from Apple. Android is generally developed by ~~Apple~~ Google (obviously, thanks /u/chaoticsuono), but the carriers use their own versions of Android on the phones they sell - so updates additionally have to go through them before they get to the end users. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "8841749", "title": "IPhone", "section": "Section::::History and availability.:Sales.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 466, "text": "The continued top popularity of the iPhone despite growing Android competition was also attributed to Apple being able to deliver iOS updates over the air, while Android updates are frequently impeded by carrier testing requirements and hardware tailoring, forcing consumers to purchase a new Android smartphone to get the latest version of that OS. However, by 2013, Apple's market share had fallen to 13.1%, due to the surging popularity of the Android offerings.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12610483", "title": "Android (operating system)", "section": "Section::::Development.:Update schedule.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 919, "text": "Compared to its primary rival mobile operating system, Apple's iOS, Android updates typically reach various devices with significant delays. Except for devices within the Google Nexus and Pixel brands, updates often arrive months after the release of the new version, or not at all. This was partly due to the extensive variation in hardware in Android devices, to which each upgrade must be specifically tailored, a time- and resource-consuming process. Manufacturers often prioritize their newest devices and leave old ones behind. Additional delays can be introduced by wireless carriers that, after receiving updates from manufacturers, further customize and brand Android to their needs and conduct extensive testing on their networks before sending the upgrade out to users. 
There are also situations in which upgrades are not possible due to one manufacturing partner not providing necessary updates to drivers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4573623", "title": "Proximity marketing", "section": "Section::::Bluetooth-based systems.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 235, "text": "Android users will no longer be able to receive notifications without an app after Nearby shuts down on December 6, 2018. Proximity marketing campaigns can still be run on smartphones running on Android but they need a compatible app.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26614973", "title": "Motorola Backflip", "section": "Section::::Applications.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 542, "text": "Users may customize their phones by installing apps through the Android Market; however, some carriers (AT&T) do not give users the option to install non-market apps onto the Backflip (a policy they have continued with all of their Android phones). This has created some controversy with users, as the non-market apps are often seen as a useful way to expand a phone's capabilities. Users can circumvent this limitation by manually installing 3rd party apps using the tools included with the SDK while the handset is connected to a computer.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51664929", "title": "European Union vs. Google", "section": "Section::::Android and mobile apps charges.:EU's investigation.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 302, "text": "Google countered in this investigation that its practices with Android were no different from how Apple, Inc. or Microsoft bundle their own proprietary apps on their respective iOS and Windows Phone platforms, and that OEMs were still able to distribute Android-based phones without the Google suite of apps.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35924721", "title": "Samsung Galaxy Ace 2", "section": "Section::::Samsung Galaxy Ace 2 x / Trend.:Software.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 480, "text": "Since these phones run Android 4.0, they are still supported by cloud, communications and social networking services that push the latest versions of their apps, which have in some cases been designed with only the newest hardware in mind. Such applications hog system resources and cause the phones to run slowly. As a remedy, phone owners can replace those apps with less resource-hungry equivalents, or remove them entirely and use a web browser to access the services' sites.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29684361", "title": "T-Mobile myTouch 4G", "section": "Section::::Controversy.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 570, "text": "However, to date, many customers have never received the update, and T-Mobile support representatives on Twitter suggested in October 2011 that the OTA update is no longer available as T-Mobile is working on further improvements before making the update available again. No clarification was ever given on the exact reasons why the update is no longer available, why it was pulled, or what further improvements are required before it would be available again. 
Moreover, there is no option within the interface that allows you to check for updates.\n", "bleu_score": null, "meta": null } ] } ]
null
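As a rough illustration of the pipeline described in the answers above: an iOS update has one gatekeeper, while an Android update typically passes through several. The stage names below are simplified assumptions for illustration, not an official description of either company's release process.

```python
from typing import List

# Simplified, assumed stage names -- the point is the number of parties
# that can delay (or silently drop) an update, not the exact steps.
IOS_PIPELINE = [
    "Apple builds the update",
    "Apple pushes it over the air to every supported iPhone",
]
ANDROID_PIPELINE = [
    "Google releases the new Android version",
    "Hardware maker adapts it to each device's chips and custom skin",
    "Carrier adds its own apps and runs network certification tests",
    "Carrier schedules the over-the-air rollout to subscribers",
]

def describe(name: str, pipeline: List[str]) -> None:
    print(f"{name}:")
    for step, stage in enumerate(pipeline, start=1):
        print(f"  {step}. {stage}")

describe("iOS update path", IOS_PIPELINE)
describe("Android update path", ANDROID_PIPELINE)
```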
69kf9y
What did Paul Mattick mean when he said that Marx was a Socialist and not an economist?
[ { "answer": "I am not Marxian nor Marxist in anyways so my views would not be reflective of these sorts of views. That being said... the context of this quote and of the author is important. \n\n > It is often asserted that while Marx's theory transcends bourgeois economic theory in order to solve \"economic problems\" that cannot be satisfactorily dealt with by bourgeois price theory, it must, for that reason, be as empirical as any other science. It is assumed, in brief, that Marx's Capital is a better part, but still a part, of the \"positive science\" of economics, whereas it is actually its opposition. Marxian theory aims not to resolve \"economic problems\" of bourgeois society but to show them to be unsolvable. Marx was a socialist, not an economist. In Marxian theory the concrete phenomena of bourgeois society are something other than they appear to be. Empirically discovered facts have first to be freed of their fetishistic connotations before they reveal empirical reality. The abstract generalizations of value theory disclose the laws of development of a system that operates with a false comprehension of the concretely given facts. The inductively won data do not correspond with, but camouflage, the real social relations of production. Bourgeois economy is not an empirical science but an ideological substitute for such a science; a pseudo-science, despite its scientific methodology. \n\nIn other words: the author argues that despite the perception that Marx's work Das Kapital was a work of economic theory (compared to for instance Smith, Ricardo, Keynes, and others), Marxist thought rejects outright the constraints of economics. In effect the author argues that economics is a false discipline with no basis in reality. Incidentally, Paul Mattick Jr, the son of the more famous Paul Mattick Sr, is the one that made this quote. \n\nEssentially, modern economics relies on two key ideas: a) people in aggregate are rational (meaning that they will seek (in the aggregate) to pursue their best interests) and b) that we can use math and statistical data to help model these out. Mattick Jr. rejects the use of math as he argues that the assumptions that economics rely on are false and are a construct of our \"money-based\" society. \n\nNow whether or not his allegations have any actual basis behind them is uncertain, but suffice to say that this was published in 1983 and economics is still going strong. If anything, the discipline has become even more quantitative. Make of that what you will.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "89988", "title": "W. E. B. Du Bois", "section": "Section::::Return to Atlanta.\n", "start_paragraph_id": 85, "start_character": 0, "end_paragraph_id": 85, "end_character": 924, "text": "After arriving at his new professorship in Atlanta, Du Bois wrote a series of articles generally supportive of Marxism. He was not a strong proponent of labor unions or the Communist Party, but he felt that Marx's scientific explanation of society and the economy were useful for explaining the situation of African Americans in the United States. Marx's atheism also struck a chord with Du Bois, who routinely criticized black churches for dulling blacks' sensitivity to racism. In his 1933 writings, Du Bois embraced socialism, but asserted that \"[c]olored labor has no common ground with white labor\", a controversial position that was rooted in Du Bois's dislike of American labor unions, which had systematically excluded blacks for decades. 
Du Bois did not support the Communist Party in the U.S. and did not vote for its candidate in the 1932 presidential election, in spite of an African American being on the ticket.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "48422", "title": "José Saramago", "section": "Section::::Personal life.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 369, "text": "He was also a supporter of Iberian Federalism. In a 2008 press conference for the filming of \"Blindness\" he asked, in reference to the Great Recession, \"Where was all that money poured on markets? Very tight and well kept; then suddenly it appears to save what? lives? no, banks.\" He added, \"Marx was never so right as now\", and predicted \"the worst is still to come.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19781676", "title": "Michael Hudson (economist)", "section": "Section::::Scientific contributions.:Position on Karl Marx and Marxian economics.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 1528, "text": "Hudson identifies himself as a Marxist economist, but his interpretation of Karl Marx is different from that of most other Marxists. Whilst other Marxists emphasise the contradiction of wage labour and capital as the core issue of today's capitalist world, Hudson rejects that idea and believes parasitic forms of finance have warped the political economy of modern capitalism. Hudson points to Marx's view of capitalism as the historic force that tends to eliminate all forms of pre-capitalist rent seeking, i.e. land rent, monopoly rent and financial rent (usury). The original meaning of a free market as discussed by classical political economists was a market free from all forms of rent. The gist of classical political economy was to distinguish earned and unearned income (also known as rent or free lunch). He then argues that, unlike Marx's optimistic expectation, history did not go in that direction and today modern capitalism is dominated by rentier classes. The concept of the proletariat as a class for itself presupposes a rent-free society; as he puts it, \"wages have been going no where recently, I hope you've been making a killing on your house price!\". The other form of rent is imperialist rent, flowing from underdeveloped countries to developed ones. All of these forces distort the political economy of modern capitalism, pushing the labour-capital contradiction to the background and bringing other issues to the foreground. It is as if, instead of progress, history has regressed back to a neo-feudal system.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "675321", "title": "Immanuel Wallerstein", "section": "Section::::Theory.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 446, "text": "- Karl Marx, whom he follows in emphasizing underlying economic factors and their dominance over ideological factors in global politics, and whose economic thinking he has adopted with such ideas as the dichotomy between capital and labor. 
He also criticizes the traditional Marxian view of world economic development through stages such as feudalism and capitalism, and its belief in the accumulation of capital, dialectics, and more;\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53312363", "title": "Marguerite Young (journalist)", "section": "Section::::Personal life and death.:Communism.\n", "start_paragraph_id": 61, "start_character": 0, "end_paragraph_id": 61, "end_character": 1608, "text": "She summarized her views on Communism as follows: I knew how to tell Adam Smith and Karl Marx apart. I'd found out in economics 1-2-3 and in the world that the deepest criticism of the classical economics of Adam Smith and of modern capitalism was that the owners and managers skimmed the cream. They \"stole the value\" of the workingman's labor and called it profit, but it was in fact \"surplus value\"; hence the whole system was not just unfair but madness, and one day it would collapse of its own bloated weight. I did not agree at all. I was rather bothered at seeing that what Clarence Hathaway had called the Communist Party's \"short term program\" often in practice did not differ much from the \"long term program.\" She was a signatory of a public statement on the Moscow Trials \"together with such notorious Communists and Communist fellow travelers as Haakon M. Chevalier, Jack Conroy, Malcolm Cowley, Kyle Crichton, Lester Cole, Jerome Davis (sociologist), Muriel Draper, Guy Endore, Elizabeth Gurley Flynn, Jules Garfield, Robert Gessner, Michael Gold, William Gropper, Harrison George, Dashiell Hammett, Clarence Hathaway, Lillian Hellman, Langston Hughes, V. J. Jerome, H. S. Kraft, John Howard Lawson, Corliss Lamont, Melvin Levy, Albert Maltz, A. B. Magil, Bruce Minton, Moissaye J. Olgin, Samuel Ornitz, Dorothy Parker, Paul Peters, Holland D. Roberts, Paul Romaine, Morris U. Schappes, Edwin Seaver, George Seldes, George Sklar, Lionel Stander, Maxwell S. Stewart, Paul Strand, Anna Louise Strong, John Stuart, Genevieve Taggard, Max Weber\" (the source is unclear but probably in 1936 or 1937).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "16743", "title": "Karl Marx", "section": "Section::::Biography.:Paris: 1843–1845.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 1231, "text": "During the time that he lived at 38 Rue Vanneau in Paris (from October 1843 until January 1845), Marx engaged in an intensive study of political economy (Adam Smith, David Ricardo, James Mill, \"etc.\"), the French socialists (especially Claude Henri St. Simon and Charles Fourier) and the history of France. The study of political economy is a study that Marx would pursue for the rest of his life and would result in his major economic work, the three-volume series called \"Capital\". Marxism is based in large part on three influences: Hegel's dialectics, French utopian socialism and English economics. Together with his earlier study of Hegel's dialectics, the studying that Marx did during this time in Paris meant that all major components of \"Marxism\" were in place by the autumn of 1844. Marx was constantly being pulled away from his study of political economy, not only by the usual daily demands of the time, but additionally by editing a radical newspaper and later by organising and directing the efforts of a political party during years of potentially revolutionary popular uprisings of the citizenry.
Still Marx was always drawn back to his economic studies: he sought \"to understand the inner workings of capitalism\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "354800", "title": "Equality of outcome", "section": "Section::::Political philosophy.:Conflation with Marxism, socialism and communism.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 684, "text": "The German economist and philosopher Karl Marx is sometimes mistakenly characterized as an egalitarian and a proponent of equality of outcome and the economic systems of socialism and communism are sometimes misconstrued as being based on equality of outcome. In reality, Marx eschewed the entire concept of equality as abstract and bourgeois in nature, focusing his analysis on more concrete issues such as opposition to exploitation based on economic and materialist logic. Marx renounced theorizing on moral concepts and refrained from advocating principles of justice. Marx's views on equality were informed by his analysis of the development of the productive forces in society.\n", "bleu_score": null, "meta": null } ] } ]
null
21gwht
Do all species eventually face extinction?
[ { "answer": "So I hesitate to answer your question, because it enters more of a philosophical realm to truly answer it. What you're asking is basically:\n\n1) Can species remain indefinitely?\n2) Are all species subject to extinction?\n\nI break these up because they require different answers, which are:\n\n1) sorta\n2) Yeah\n\nWhen we see radial speciation happen (as with Darwin's finches) at what point does the ancestor cease to exist? This question is philosophical as much as it is biological. Certainly we can use the biological species concept to differentiate between species, but from a strictly taxonomical standpoint, the ancestor and it's daughter species are part of the same lineage and sometimes the distinction between the nodes of the evolutionary tree of life becomes a little arbitrary. A descendent never ceases \"being\" it's ancestor, which is why birds are dinosaurs and humans are amphibians (if we're being technical). If we considered the species merely a lineage over time, then yes there have been lineages that have last since life began 3.5 billion years ago. \n\nFor the second answer, all species will eventually go completely extinct, with the exception of small amount lineages that are continued on. Van Valen gets at this with the Red Queen hypothesis: at some point the environment in which an organisms lives will change to the point where the organism cannot adapt and will drive it to extinction. The central idea behind this is **constraint**. Both genetically and physiologically, most species are limited in their capability for diversification, and more often than not, the environment proves too much for the organism driving it to extinction. These timescales are very large, on the order of millions of years.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "49417", "title": "Extinction", "section": "Section::::Causes.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 907, "text": "There are a variety of causes that can contribute directly or indirectly to the extinction of a species or group of species. \"Just as each species is unique\", write Beverly and Stephen C. Stearns, \"so is each extinction ... the causes for each are varied—some subtle and complex, others obvious and simple\". Most simply, any species that cannot survive and reproduce in its environment and cannot move to a new environment where it can do so, dies out and becomes extinct. Extinction of a species may come suddenly when an otherwise healthy species is wiped out completely, as when toxic pollution renders its entire habitat unliveable; or may occur gradually over thousands or millions of years, such as when a species gradually loses out in competition for food to better adapted competitors. Extinction may occur a long time after the events that set it in motion, a phenomenon known as extinction debt.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10029", "title": "Timeline of the evolutionary history of life", "section": "Section::::Extinction.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 453, "text": "Species go extinct constantly as environments change, as organisms compete for environmental niches, and as genetic mutation leads to the rise of new species from older ones. Occasionally biodiversity on Earth takes a hit in the form of a mass extinction in which the extinction rate is much higher than usual. 
A large extinction event often represents an accumulation of smaller extinction events that take place in a relatively brief period of time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22403915", "title": "Body size and species richness", "section": "Section::::Possible mechanisms.:Differential extinction rates.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 1110, "text": "It is known that extinction risk is directly correlated to the size of a species population. Small populations tend to go extinct more frequently than large ones (MacArthur and Wilson, 1967). As large species require more daily resources they are forced to have low population densities, thereby lowering the size of the population in a given area and allowing each individual to have access to enough resources to survive. In order to increase the population size and avoid extinction, large organisms are constrained to have large ranges (see Range (biology)). Thus, the extinction of large species with small ranges becomes inevitable (MacArthur and Wilson, 1967; Brown and Maurer, 1989; Brown and Nicoletto, 1991). This results in the amount of space limiting the overall number of large animals that can be present on a continent, while range size (and risk of extinction) prevents large animals from inhabiting only a small area. These constraints undoubtedly have implications for the species richness patterns for both large and small-bodied organisms, however the specifics have yet to be elucidated.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21424701", "title": "Defaunation", "section": "Section::::Drivers.:Invasive species.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 286, "text": "In extinct animal species for which the cause of extinction is known, over 50% were affected by invasive species. For 20% of extinct animal species, invasive species are the only cited cause of extinction. Invasive species are the second-most important cause of extinction for mammals.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49417", "title": "Extinction", "section": "Section::::Definition.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 212, "text": "The extinction of one species' wild population can have knock-on effects, causing further extinctions. These are also called \"chains of extinction\". This is especially common with extinction of keystone species.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49417", "title": "Extinction", "section": "Section::::Causes.:Predation, competition, and disease.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 1707, "text": "In the natural course of events, species become extinct for a number of reasons, including but not limited to: extinction of a necessary host, prey or pollinator, inter-species competition, inability to deal with evolving diseases and changing environmental conditions (particularly sudden changes) which can act to introduce novel predators, or to remove prey. Recently in geological time, humans have become an additional cause of extinction (many people would say premature extinction) of some species, either as a new mega-predator or by transporting animals and plants from one part of the world to another. Such introductions have been occurring for thousands of years, sometimes intentionally (e.g.
livestock released by sailors on islands as a future source of food) and sometimes accidentally (e.g. rats escaping from boats). In most cases, the introductions are unsuccessful, but when an invasive alien species does become established, the consequences can be catastrophic. Invasive alien species can affect native species directly by eating them, competing with them, and introducing pathogens or parasites that sicken or kill them; or indirectly by destroying or degrading their habitat. Human populations may themselves act as invasive predators. According to the \"overkill hypothesis\", the swift extinction of the megafauna in areas such as Australia (40,000 years before present), North and South America (12,000 years before present), Madagascar, Hawaii (AD 300–1000), and New Zealand (AD 1300–1500), resulted from the sudden introduction of human beings to environments full of animals that had never seen them before, and were therefore completely unadapted to their predation techniques.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1765998", "title": "Wildlife conservation", "section": "Section::::Species conservation.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 966, "text": "It's estimated that, because of human activities, current species extinction rates are about 1000 times greater than the background extinction rate (the 'normal' extinction rate that occurs without additional influence). According to the IUCN, out of all species assessed, over 27,000 are at risk of extinction and should be under conservation. Of these, 25% are mammals, 14% are birds, and 40% are amphibians. However, because not all species have been assessed, these numbers could be even higher. A 2019 UN report assessing global biodiversity extrapolated IUCN data to all species and estimated that 1 million species worldwide could face extinction. Yet, because resources are limited, sometimes it's not possible to give all species that need conservation due consideration. Deciding which species to conserve is a function of how close to extinction a species is, whether the species is crucial to the ecosystem it resides in, and how much we care about it.\n", "bleu_score": null, "meta": null } ] } ]
null
3fudmq
when drinking water, what is the mechanism that decides if the water will go to the bladder or be absorbed?
[ { "answer": "The water is absorbed. \n\nThe water that goes to your bladder is excreted by the kidneys as it filters your blood. ", "provenance": null }, { "answer": "All of the water is absorbed into your bloodstream in the intestines. It gets filtered out by the kidneys to help dilute urine and keep you properly hydrated. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "871210", "title": "Utricularia", "section": "Section::::Carnivory.:Trapping mechanism.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 412, "text": "As water is pumped out, the bladder's walls are sucked inwards by the partial vacuum created, and any dissolved material inside the bladder becomes more concentrated. The sides of the bladder bend inwards, storing potential energy like a spring. Eventually, no more water can be extracted, and the bladder trap is 'fully set' (technically, osmotic pressure rather than physical pressure is the limiting factor).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "331951", "title": "Fish anatomy", "section": "Section::::Internal organs.:Swim bladder.\n", "start_paragraph_id": 106, "start_character": 0, "end_paragraph_id": 106, "end_character": 764, "text": "The swim bladder (or gas bladder) is an internal organ that contributes to the ability of a fish to control its buoyancy, and thus to stay at the current water depth, ascend, or descend without having to waste energy in swimming. The bladder is found only in the bony fishes. In the more primitive groups like some minnows, bichirs and lungfish, the bladder is open to the esophagus and doubles as a lung. It is often absent in fast swimming fishes such as the tuna and mackerel families. The condition of a bladder open to the esophagus is called physostome, the closed condition physoclist. In the latter, the gas content of the bladder is controlled through a rete mirabilis, a network of blood vessels effecting gas exchange between the bladder and the blood.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "184828", "title": "Swim bladder", "section": "Section::::Structure and function.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 225, "text": "In physostomous swim bladders, a connection is retained between the swim bladder and the gut, the pneumatic duct, allowing the fish to fill up the swim bladder by \"gulping\" air. Excess gas can be removed in a similar manner.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31097880", "title": "Physostome", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 412, "text": "Physostomes are fishes that have a pneumatic duct connecting the gas bladder to the alimentary canal. This allows the gas bladder to be filled or emptied via the mouth. This not only allows the fish to fill their bladder by gulping air, but also to rapidly ascend in the water without the bladder expanding to bursting point. In contrast, fish without any connection to their gas bladder are called physoclisti.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "184828", "title": "Swim bladder", "section": "Section::::Structure and function.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 605, "text": "The swim bladder normally consists of two gas-filled sacs located in the dorsal portion of the fish, although in a few primitive species, there is only a single sac. 
It has flexible walls that contract or expand according to the ambient pressure. The walls of the bladder contain very few blood vessels and are lined with guanine crystals, which make them impermeable to gases. By adjusting the gas pressurising organ using the gas gland or oval window, the fish can obtain neutral buoyancy and ascend and descend to a large range of depths. Due to the dorsal position it gives the fish lateral stability.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "464770", "title": "Barotrauma", "section": "Section::::Barotrauma in animals.:Swim bladder overexpansion.\n", "start_paragraph_id": 91, "start_character": 0, "end_paragraph_id": 91, "end_character": 457, "text": "Fish with isolated swim bladders are susceptible to barotrauma of ascent when brought to the surface by fishing. The swim bladder is an organ of buoyancy control which is filled with gas extracted from solution in the blood, and which is normally removed by the reverse process. If the fish is brought upwards in the water column faster than the gas can be resorbed, the gas will expand until the bladder is stretched to its elastic limit, and may rupture.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32259", "title": "Urinary bladder", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 470, "text": "The urinary bladder is a hollow muscular organ in humans and some other animals that collects and stores urine from the kidneys before disposal by urination. In the human the bladder is a hollow muscular, and distensible (or elastic) organ, that sits on the pelvic floor. Urine enters the bladder via the ureters and exits via the urethra. The typical human bladder will hold between 300 and 500 ml (10.14 and 16.91 US fl oz) before the urge to empty occurs, but can hold considerably more.\n", "bleu_score": null, "meta": null } ] } ]
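A quick note on the bladder-capacity figures in the last excerpt: the parenthetical values are a plain unit conversion, assuming US fluid ounces at about 29.574 ml each (which is what the quoted 10.14 figure implies):

$$ \frac{300\ \text{ml}}{29.574\ \text{ml/fl oz}} \approx 10.14\ \text{fl oz}, \qquad \frac{500\ \text{ml}}{29.574\ \text{ml/fl oz}} \approx 16.91\ \text{fl oz} $$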
null
39rpfv
how can they prove paedophilia, such as rolf harris, decades after the offences?
[ { "answer": "They usually take statements and try to corroborate them with accused testimony alibi.\nI watched a case link to Jimmy Saville where the women described a wall covered in graffiti where she was raped, years later they took new wall paper down and it was still there, all names of underage girls and their phone numbers. \n\nThen it's usually put forward to a jury for them to decide.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "550058", "title": "National Living Treasure (Australia)", "section": "Section::::History.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 303, "text": "On 30 July 2014, the board of the National Trust of Australia (NSW) voted to remove Rolf Harris from the list after his conviction on 12 charges of indecent assault between 1969 and 1986 and to also withdraw the award. Harris had been among the original 100 Australians selected for the honour in 1997.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "336211", "title": "Ern Malley", "section": "Section::::Immediate impact.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 494, "text": "The South Australian police prosecuted Harris for publishing immoral and obscene material. The only prosecution witness was a police detective, whose evidence is full of unintended humour: \"Another evidence of indecency was the word 'incestuous'. Detective Volgelsang said: 'I don't know what \"incestuous\" means, but I think there is a suggestion of indecency about it'\". Despite the woeful case, and several distinguished expert witnesses arguing for Harris, he was found guilty and fined £5.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "157461", "title": "Rolf Harris", "section": "Section::::Arrest and trial.:Trial.\n", "start_paragraph_id": 51, "start_character": 0, "end_paragraph_id": 51, "end_character": 700, "text": "The trial of Harris began on 6 May 2014 at Southwark Crown Court. Seven of the twelve charges involved allegations of a sexual relationship between Harris and one of his daughter's friends. Six charges related to when she was between the ages of 13 and 15, and one when she was 19. Harris denied that he had entered into a sexual relationship with the girl until she was 18. During the trial, a letter Harris had written to the girl's father in 1997 after the end of the relationship was shown in court, saying: \"I fondly imagined that everything that had taken place had progressed from a feeling of love and friendship—there was no rape, no physical forcing, brutality or beating that took place.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "230436", "title": "News of the World", "section": "Section::::Controversies.:Anti-paedophile campaign (2000).\n", "start_paragraph_id": 45, "start_character": 0, "end_paragraph_id": 45, "end_character": 898, "text": "The paper began a controversial campaign to name and shame alleged paedophiles in July 2000, following the abduction and murder of Sarah Payne in West Sussex. During the trial of her killer Roy Whiting, it emerged that he had a previous conviction for abduction and sexual assault against a child. 
The paper's decision led to some instances of action being taken against those suspected of being child sex offenders, which included several cases of mistaken identity, including one instance where a paediatrician had her house vandalised, and another where a man was confronted because he had a neck brace similar to one a paedophile was wearing when pictured. The campaign was labelled \"grossly irresponsible\" journalism by the then-chief constable of Gloucestershire, Tony Butler. The paper also campaigned for the introduction of Sarah's Law to allow public access to the sex offender registry.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2782782", "title": "Jehovah's Witnesses' handling of child sex abuse", "section": "Section::::Reporting to civil authorities.:2014 investigations in the United Kingdom.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 422, "text": "In 2013 at the Jehovah's Witnesses congregation of Moston, Manchester, England, church elder and convicted paedophile Jonathan Rose, following his completion of a nine-month jail sentence for paedophile offences, was allowed in a series of public meetings to cross-examine the children he had molested. Rose was finally 'disfellowshipped' after complaints to the police and the Charity Commission for England and Wales.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "840433", "title": "Chris Denning", "section": "Section::::Legal history.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 293, "text": "Denning's first conviction for gross indecency and indecent assault was in 1974, when he was convicted at the Old Bailey, although he was not imprisoned. Before his conviction Denning had been working for Jonathan King's newly founded UK Records, but King sacked him after the guilty verdict.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "157461", "title": "Rolf Harris", "section": "Section::::Arrest and trial.:Further charges.\n", "start_paragraph_id": 62, "start_character": 0, "end_paragraph_id": 62, "end_character": 529, "text": "On 12 February 2016, the Crown Prosecution Service announced that Harris would face seven further indecent assault charges. The offences allegedly occurred between 1971 and 2004 and involve seven complainants who were aged between 12 and 27 at the time. Harris pleaded not guilty to all of the charges via videolink at Westminster Magistrates' Court on 17 March and was told to appear at Southwark Crown Court on 14 April. On 14 April, he pleaded not guilty to seven charges of indecent assault and one charge of sexual assault.\n", "bleu_score": null, "meta": null } ] } ]
null
f79nd
Is it possible that our universe exists within something else? Where can I find more information about this?
[ { "answer": "This is more philosophy than science, since it's untestable. It's in that fuzzy realm where everything is still mathematically rigorous, but never demonstrable.\n\n\nCertain string theories have massive quasi-degenerate ground states, each representing a different type of universe. You can tunnel from one state into a slightly lower energy state, and that represents essentially the beginning of the destruction of one universe and conversion into another. In this picture, our universe is a large bubble, existing inside other bubbles.\n\n\nAt any time, a new, more stable universe may start and propagate in our own, destroying ours. However, since the ground state is nearly degenerate (energies are close), the timescale for this to happen is immeasurably long.\n\n\nThis picture matches with the anthropic principle, because in order for the principle to be significant, you must have a large number of potential universes, which is exactly what string theory gives.\n\n[Andre Linde](_URL_0_) is one of the proponents of this idea. His papers on the inflationary multiverse may provide more insight.", "provenance": null }, { "answer": "There are a few things you should know: \n\n1. Science is based on **observation**, not conjecture. An idea is worthless if it has no evidence to uphold it.\n\n2. We observe things that are very far away by detecting the light they emit.\n\n3. For things that are very, very, very far away (say, at the other edge of the universe), the light we use to observe them has been travelling for billions of years, close to the age of the universe itself.\n\n4. We can't observe anything that's more than about 14 billion light-years away, because the universe came to be about 14 billion years ago. The light we use to observe such objects would have had to travel for longer than our universe has existed. \n\nAll these things come together to support one fact: Not only do we not know what's outside our universe, it seems we *CAN'T* know what's outside our universe. It would violate the laws of physics.", "provenance": null }, { "answer": "I love the fact that your question is, word-for-word, \"Where can I find more information about this?\"\n\nThe answer is nowhere. Literally, nowhere. You cannot now, nor at any point in the future, find any information about this at all. Because anything that exists outside the universe is going to be separated from us forever by an effective event horizon. And no information can ever cross an event horizon, in either direction.\n\nI know that's taking your question a bit more literally than you meant it, but it's true nonetheless. People can speculate about \"multiple universes\" and such like — and Lord knows they have — but the fundamental, unbreakable laws of physics that govern *this* universe dictate that we can never know anything about any such thing. Not even definitively whether or not they exist.", "provenance": null }, { "answer": "The universe, by definition, encompasses all that physically exists.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "42403016", "title": "Logology (science)", "section": "Section::::Science.:Knowability.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 583, "text": "\"There are clear unknowables in science—reasonable questions that, unless currently accepted laws of nature are violated, we cannot find answers to. 
One example is the multiverse: the conjecture that our universe is but one among a multitude of others, each potentially with a different set of laws of nature. Other universes lie outside our causal horizon, meaning that we cannot receive or send signals to them. Any evidence for their existence would be circumstantial: for example, scars in the radiation permeating space because of a past collision with a neighboring universe.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "251399", "title": "Observable universe", "section": "Section::::The universe versus the observable universe.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 1651, "text": "Both popular and professional research articles in cosmology often use the term \"universe\" to mean \"observable universe\". This can be justified on the grounds that we can never know anything by direct experimentation about any part of the universe that is causally disconnected from the Earth, although many credible theories require a total universe much larger than the observable universe. No evidence exists to suggest that the boundary of the observable universe constitutes a boundary on the universe as a whole, nor do any of the mainstream cosmological models propose that the universe has any physical boundary in the first place, though some models propose it could be finite but unbounded, like a higher-dimensional analogue of the 2D surface of a sphere that is finite in area but has no edge. It is plausible that the galaxies within our observable universe represent only a minuscule fraction of the galaxies in the universe. According to the theory of cosmic inflation initially introduced by its founder, Alan Guth (and by D. Kazanas), if it is assumed that inflation began about 10^-37 seconds after the Big Bang, then with the plausible assumption that the size of the universe before the inflation occurred was approximately equal to the speed of light times its age, that would suggest that at present the entire universe's size is at least 3×10^23 times the radius of the observable universe. There are also lower estimates claiming that the entire universe is in excess of 250 times larger (by volume, not by radius) than the observable universe and also higher estimates implying that the universe could have the size of at least 10^(10^(10^122)) Mpc.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5382", "title": "Inflation (cosmology)", "section": "Section::::Theory.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 913, "text": "The observable universe is one \"causal patch\" of a much larger unobservable universe; other parts of the Universe cannot communicate with Earth yet. These parts of the Universe are outside our current cosmological horizon. In the standard hot big bang model, without inflation, the cosmological horizon moves out, bringing new regions into view. Yet as a local observer sees such a region for the first time, it looks no different from any other region of space the local observer has already seen: its background radiation is at nearly the same temperature as the background radiation of other regions, and its space-time curvature is evolving lock-step with the others. This presents a mystery: how did these new regions know what temperature and curvature they were supposed to have?
They couldn't have learned it by getting signals, because they were not previously in communication with our past light cone.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1781068", "title": "Edward Tryon", "section": "Section::::Is the universe a vacuum fluctuation?\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 469, "text": "Even though this paper gives the impression that the mystery of where our universe originated is solved, it is not. In his paper Tryon mentions how there is this \"larger space in which our Universe is embedded,\" but this place is given only a very vague and short description. Additionally, while Tryon says our universe came into being from an accident of the laws of physics, he does not say what created the laws of physics, leaving the mystery incompletely solved.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "99293", "title": "Shape of the universe", "section": "Section::::Shape of the observable universe.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 564, "text": "The observable universe can be thought of as a sphere that extends outwards from any observation point for 46.5 billion light years, going farther back in time and more redshifted the more distant one looks. Ideally, one can continue to look back all the way to the Big Bang; in practice, however, the farthest away one can look using light and other electromagnetic radiation is the cosmic microwave background (CMB), as anything past that was opaque. Experimental investigations show that the observable universe is very close to isotropic and homogeneous.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34698401", "title": "A Universe from Nothing", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 624, "text": "A Universe from Nothing: Why There Is Something Rather than Nothing is a non-fiction book by the physicist Lawrence M. Krauss, initially published on January 10, 2012 by Free Press. It discusses modern cosmogony and its implications for the debate about the existence of God. The main theme of the book is how \"we have discovered that all signs suggest a universe that could and plausibly did arise from a deeper nothing—involving the absence of space itself—and which may one day return to nothing via processes that may not only be comprehensible but also processes that do not require any external control or direction.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "158717", "title": "Ekpyrotic universe", "section": "Section::::Implications for cosmology.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 480, "text": "The idea that the properties of our universe are an accident and come from a theory that allows a multiverse of other possibilities is hard to reconcile with the fact that the universe is extraordinarily simple (uniform and flat) on large scales and that elementary particles appear to be described by simple symmetries and interactions. Also, the accidental concept cannot be falsified by an experiment since any future experiments can be viewed as yet other accidental aspects.\n", "bleu_score": null, "meta": null } ] } ]
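A worked note may help reconcile two numbers that appear in this thread: the universe is roughly 13.8 billion years old (the answers above round this to 14 billion), yet the excerpt above quotes an observable radius of 46.5 billion light years. In standard expanding-universe (FLRW) cosmology, the present-day distance to a light source is not simply the light-travel time multiplied by c, because space keeps expanding while the light is in flight; it is the comoving distance. The sketch below is textbook cosmology, not a claim made in the excerpts themselves:

$$ D_{\text{now}} = a(t_0) \int_{t_{\text{emit}}}^{t_0} \frac{c\,\mathrm{d}t}{a(t)} \; > \; c\,(t_0 - t_{\text{emit}}) $$

The inequality holds because the scale factor a(t) was smaller at all earlier times than a(t_0). Evaluating the integral with measured expansion parameters, for light emitted shortly after the Big Bang, gives the quoted ~46.5 billion light years even though the light itself travelled for only ~13.8 billion years.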
null
47myid
why do people constantly encourage others to vote, when 90% of the public are uneducated about the topics they are voting about?
[ { "answer": "You are right, in principle, that people probably shouldn't vote if they don't know what they're doing. But obtaining a basic overview of issues and candidates is not hard--someone who is encouraged to vote is more likely to educate themselves in this way than someone who does not vote. Besides, a great many people who *do* have basic civic knowledge do not vote.\n\nThere has to be more than just the mechanical action of casting a ballot, you're right. But it makes more sense to encourage people to vote *and* to try to educate them than to discourage them from voting.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "26317569", "title": "Maturity (psychological)", "section": "Section::::Legal and political issues.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 937, "text": "One reason cited for why children and the mentally disabled are not permitted to vote in elections is that they are too intellectually immature to understand voting issues. This view is echoed in concerns about the adult voting population, with observers citing concern for a decrease in 'civic virtue' and 'social capital,' reflecting a generalized panic over the political intelligence of the voting population. Although critics have cited 'youth culture' as contributing to the malaise of modern mass media's shallow treatment of political issues, interviews with youth themselves about their political views have revealed a widespread sense of frustration in their political powerlessness as well as a strongly cynical view of the actions of politicians. Several researchers have attempted to explain this sense of cynicism as a way of rationalizing the sense of alienation and legal exclusion of youth in political decision-making.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8276451", "title": "Collective action problem", "section": "Section::::In politics.:Voting.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 610, "text": "Despite high levels of political apathy in the United States, however, this collective action problem does not decrease voter turnout as much as some political scientists might expect. It turns out that most Americans believe their political efficacy to be higher than it actually is, stopping millions of Americans from believing their vote does not matter and staying home from the polls. Thus, it appears collective action problems can be resolved not just by tangible benefits to individuals participating in group action, but by a mere belief that collective action will also lead to individual benefits.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37934151", "title": "Altruism theory of voting", "section": "Section::::The rational calculus of voting.:The \"altruistic\" rationale for voting.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 579, "text": "Because of the multitude of different and contradictory definitions of expressive voting, recently another effort by political scientists and public choice theorists has been made to explain voting behavior with reference to instrumental benefits received from influencing the outcome of the election. 
If voters are assumed to be rational but also to have altruistic tendencies and some preference for outcomes enhancing the social welfare of others, they will reliably vote in favor of the policies they perceive to be for the common good, rather than for their individual benefit.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "549462", "title": "Voter turnout", "section": "Section::::Trends of decreasing turnout since the 1980s.:Reasons for decline.\n", "start_paragraph_id": 114, "start_character": 0, "end_paragraph_id": 114, "end_character": 766, "text": "Google extensively studied the causes behind low voter turnout in the United States, and argues that one of the key reasons behind lack of voter participation is the so-called \"interested bystander\". According to Google's study, 48.9% of adult Americans can be classified as \"interested bystanders\", as they are politically informed but are reticent to involve themselves in the civic and political sphere. This category is not limited to any socioeconomic or demographic groups. Google theorizes that individuals in this category suffer from voter apathy, as they are interested in political life but believe that their individual effect would be negligible. These individuals often participate politically on the local level, but shy away from national elections.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35536466", "title": "Issue voting", "section": "Section::::Causes.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 497, "text": "In order for a person to be an issue voter, they must be able to recognize that there is more than one opinion about a particular issue, have formed a solid opinion about it and be able to relate that to a specific political party. According to Campbell, only 40 to 60 percent of the informed population even perceives party differences, and can thus partake in party voting. This would suggest that it is common for individuals to develop opinions of issues without the aid of a political party.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1129572", "title": "Social loafing", "section": "Section::::Causes.:Dispensability of effort.\n", "start_paragraph_id": 37, "start_character": 0, "end_paragraph_id": 37, "end_character": 532, "text": "One example is voting in the United States. Even though most people say that voting is important, and a right that should be exercised, every election a sub-optimal percentage of Americans turn out to vote, especially in presidential elections (only 51 percent in the 2000 election). One vote may feel very small in a group of millions, so people may not think a vote is worth the time and effort. If too many people think this way, there is a small voter turnout. Some countries enforce compulsory voting to eliminate this effect.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37280701", "title": "Jon Krosnick", "section": "Section::::Work in political psychology.:Studies in voter turnout.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 1395, "text": "One of the results of the study indicated that higher voter turnouts occurred when one candidate is disliked to the point of being a threat to voters, while the other is perceived as a hero. However, subjects who liked both candidates were not as likely to vote, even if they liked one significantly more than the other.
This also holds true for subjects who disliked both candidates because in these cases voters would be happy or unhappy with either outcome. The studies also indicated that mudslinging in political campaigns effectively increased voter turnout, provided that candidates vilified their opponents tastefully without tarnishing their own image. The study also revealed that if people liked or disliked the candidate at the first encounter, their opinion was difficult to change later on. In fact, Krosnick's studies show that people become more resistant to changing their views as they learn more and more about a candidate. At the start of a campaign, most candidates are viewed in a mildly positive light. After presenting their positions, impressions of candidates solidify and information gained earlier in the campaign tends to have a greater impact. Krosnick calls this model the \"asymmetrical\" model of voting behavior. This suggests that the current marketing strategy for campaigning - saving money for advertising more at the end of a campaign - is completely wrong.\n", "bleu_score": null, "meta": null } ] } ]
null
10eoiv
How big of a nuclear bomb would be needed to disrupt or destroy a massive wedge Tornado?
[ { "answer": "That is one of the coolest questions I've ever seen.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "3145631", "title": "Nuclear weapon yield", "section": "Section::::Examples of nuclear weapon yields.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 310, "text": "As a comparison, the blast yield of the GBU-43 Massive Ordnance Air Blast bomb is 0.011 kt, and that of the Oklahoma City bombing, using a truck-based fertilizer bomb, was 0.002 kt. Most artificial non-nuclear explosions are considerably smaller than even what are considered to be very small nuclear weapons.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "343674", "title": "Chinese space program", "section": "Section::::History.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 229, "text": "BULLET::::- On October 27, 1966, a nuclear-tipped DF-2A missile was launched from Jiuquan and the 20 kilotons yield nuclear warhead exploded at the height of 569 meters over the target in Lop Nor or Base 21 situated 894 km away.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4689328", "title": "TNT equivalent", "section": "Section::::Historical derivation of the value.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 251, "text": "So, one can state that a nuclear bomb has a yield of 15 kt (63×10 or 6.3×10 J); but an actual explosion of a 15 000 ton pile of TNT may yield (for example) 8×10 J due to additional carbon/hydrocarbon oxidation not present with small open-air charges.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "322533", "title": "Project Orion (nuclear propulsion)", "section": "Section::::Potential problems.\n", "start_paragraph_id": 58, "start_character": 0, "end_paragraph_id": 58, "end_character": 1214, "text": "From many smaller detonations combined the fallout for the entire launch of a 6,000 short ton (5,500 metric ton) Orion is equal to the detonation of a typical 10 megaton (40 petajoule) nuclear weapon as an air burst, therefore most of its fallout would be the comparatively dilute delayed fallout. Assuming the use of nuclear explosives with a high portion of total yield from fission, it would produce a combined fallout total similar to the surface burst yield of the \"Mike\" shot of Operation Ivy, a 10.4 Megaton device detonated in 1952. The comparison is not quite perfect as, due to its surface burst location, \"Ivy Mike\" created a large amount of early fallout contamination. Historical above-ground nuclear weapon tests included 189 megatons of fission yield and caused average global radiation exposure per person peaking at 0.11 mSv/a in 1963, with a 0.007 mSv/a residual in modern times, superimposed upon other sources of exposure, primarily natural background radiation, which averages 2.4 mSv/a globally but varies greatly, such as 6 mSv/a in some high-altitude cities. 
Any comparison would be influenced by how population dosage is affected by detonation locations, with very remote sites preferred.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "949651", "title": "Criticality accident", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 316, "text": "Though dangerous and frequently lethal to humans within the immediate area, the critical mass formed would not be capable of producing a massive nuclear explosion of the type that fission bombs are designed to produce. This is because all the design features needed to make a nuclear warhead cannot arise by chance.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37630", "title": "Neutron bomb", "section": "Section::::Use.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 452, "text": "Although neutron bombs are commonly believed to \"leave the infrastructure intact\", with current designs that have explosive yields in the low kiloton range, detonation in (or above) a built-up area would still cause a sizable degree of building destruction, through blast and heat effects out to a moderate radius, albeit considerably less destruction than a standard nuclear bomb of the \"exact\" same total energy release or \"yield\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43056101", "title": "Nuclear testing at Bikini Atoll", "section": "Section::::Weapons tests.:Castle Bravo test.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 582, "text": "The 15 megaton (Mt) nuclear explosion far exceeded the expected yield of 4 to 8 Mt (6 Mt predicted), and was about 1,000 times more powerful than each of the atomic bombs dropped on Hiroshima and Nagasaki during World War II. The device was the most powerful nuclear weapon ever detonated by the United States and just under one-third the energy of the Tsar Bomba, the largest ever tested. The scientists and military authorities were shocked by the size of the explosion, and many instruments were destroyed which they had put in place to evaluate the effectiveness of the device.\n", "bleu_score": null, "meta": null } ] } ]
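The joule figures in the TNT-equivalent excerpt follow from the defining convention that 1 kt of TNT equals 4.184×10^12 J exactly, so a nominal 15 kt yield works out as:

$$ E = 15\ \text{kt} \times 4.184\times10^{12}\ \frac{\text{J}}{\text{kt}} \approx 6.3\times10^{13}\ \text{J} = 63\times10^{12}\ \text{J} $$

A real 15,000-ton pile of TNT can release somewhat more than this nominal value (the excerpt's 8×10^13 J example) because the detonation products continue to oxidize in the surrounding air.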
null
3b63w8
what was building 7? why do conspiracy theorists use it as an example? what is the "real explanation" behind its collapse? what do the theorists think happened?
[ { "answer": "The World Trade Center was a complex of seven buildings. The twin towers were 1 WTC and 2 WTC. Four other buildings were on the same block, and 7 WTC was across the street.\n\nWhile only the twin towers were struck by planes, their collapse caused substantial, irreperable damage to all the other buildings part of the WTC, and other neighboring buildings as well. 3 WTC immediately collapsed from the twin towers essentially falling on it. Same thing happened to a church across the street. Debris that struck 7 WTC didn't cause it to collapse immediately, but started fires that weakened the building, causing it to collapse later that day.\n\nConspiracy theorists think that, because the building was across the street from the WTC and its collapse wasn't *directly* caused by the collapse of the twin towers, that its collapse must have been a controlled demolition. They add to this that the building had offices of the SEC and Secret Service, theorizing that someone wanted to set back investigations into potential financial wrongdoing. ", "provenance": null }, { "answer": "\"Building 7\" refers to 7 World Trade Center, a 47 storey building which was damaged in the 9/11 attacks and collapsed at roughly 5:20pm that afternoon. Conspiracy theorists claim the building was purposely demolished.\n\n_URL_0_\n\n_URL_1_\n\nWhat happened was that falling debris from the collapse of the north tower (1 WTC) damaged 7 WTC and started fires. The building's sprinkler system had a number of issues (some fundamental design flaws, and some due to the circumstances on the day), in particular there was very low water pressure available to firefighters so the fire was able to burn out of control.\n\nAs the fire burned, the steel beams which ran along the floors of the building heated up and expanded. Ultimately this pushed a key beam off a column, shifting how loads were distributed through the building, causing a column to fail and the building collapsed from there.\n\nConspiracy theorists claim that fire shouldn't be hot enough to deform the beams like that, and that it was actually a controlled demolition. However official reports include analyses which rule out these claims.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1077137", "title": "9/11 conspiracy theories", "section": "Section::::Theories.:World Trade Center.\n", "start_paragraph_id": 55, "start_character": 0, "end_paragraph_id": 55, "end_character": 603, "text": "The National Institute of Standards and Technology (NIST) concluded the accepted version was more than sufficient to explain the collapse of the buildings. NIST and many scientists refuse to debate conspiracy theorists because they feel it would give those theories unwarranted credibility. Specialists in structural mechanics and structural engineering accept the model of a fire-induced, gravity-driven collapse of the World Trade Center buildings without the use of explosives. As a result, NIST said that it did not perform any test for the residue of explosive compounds of any kind in the debris.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41150308", "title": "Zolitūde shopping centre roof collapse", "section": "Section::::Investigation and cause.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 348, "text": "The inquiry into the collapse began just minutes after it occurred. 
The police investigated three theories: first, that there was an error in structural design and the authorities overseeing planning had been negligent; second, that the cause was related to initial building procedures; third, that it was caused by the construction of the green roof.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1077137", "title": "9/11 conspiracy theories", "section": "Section::::Media reaction.\n", "start_paragraph_id": 155, "start_character": 0, "end_paragraph_id": 155, "end_character": 759, "text": "On September 5, 2011, \"The Guardian\" published an article entitled, \"9/11 conspiracy theories debunked\". The article noted that, unlike the collapse of World Trade Centers 1 and 2, a controlled demolition collapses a building from the bottom, and explained that the windows popped because of collapsing floors. The article also said that the conspiracy theories claiming that 7 World Trade Center was downed by a controlled demolition, that the Pentagon was hit by a missile, that the hijacked planes were packed with explosives and flown by remote control, that Israel was behind the attacks, that a plane headed for the Pentagon was shot down by a missile, and that there was insider trading by people who had foreknowledge of the attacks were all false.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22826860", "title": "Architects & Engineers for 9/11 Truth", "section": "Section::::Advocacy.:7 World Trade Center.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 938, "text": "Gage dismisses the explanation of the collapse of 7 World Trade Center given by the National Institute of Standards and Technology (NIST), according to which uncontrolled fires and the buckling of a critical support column caused the collapse, and argues that this would not have led to the uniform way the building actually collapsed. \"The rest of the columns could not have been destroyed sequentially so fast to bring this building straight down into its own footprint,\" he says. Gage argues that skyscrapers that have suffered \"hotter, longer lasting and larger fires\" have not collapsed. \"Buildings that fall in natural processes fall to the path of least resistance,\" says Gage, \"they don't go straight down through themselves.\" Architects & Engineers for 9/11 Truth also questions the computer models used by NIST, and argues that evidence pointing to the use of explosives had been omitted in its report on the collapse of 7 WTC.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "637197", "title": "Collapse of the World Trade Center", "section": "Section::::Investigations.:7 World Trade Center.\n", "start_paragraph_id": 64, "start_character": 0, "end_paragraph_id": 64, "end_character": 775, "text": "The collapse of the old 7 World Trade Center is remarkable because it was the first known instance of a tall building collapsing primarily as a result of uncontrolled fires. Based on its investigation, NIST reiterated several recommendations it had made in its earlier report on the collapse of the twin towers, and urged immediate action on a further recommendation: that fire resistance should be evaluated under the assumption that sprinklers are unavailable; and that the effects of thermal expansion on floor support systems be considered.
Recognizing that current building codes are drawn to prevent loss of life rather than building collapse, the main point of NIST's recommendations is that buildings should not collapse from fire even if sprinklers are unavailable.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7015856", "title": "World Trade Center controlled demolition conspiracy theories", "section": "Section::::Propositions and hypotheses.:7 World Trade Center.:NIST report.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 1010, "text": "NIST released its final report on the collapse of 7 World Trade Center on November 20, 2008. Investigators used videos, photographs and building design documents to come to their conclusions. The investigation could not include physical evidence as the materials from the building lacked characteristics allowing them to be positively identified and were therefore disposed of prior to the initiation of the investigation. The report concluded that the building's collapse was due to the effects of the fires which burned for almost seven hours. The fatal blow to the building came when the 13th floor collapsed, weakening a critical steel support column that led to catastrophic failure, and extreme heat caused some steel beams to lose strength, causing further failures throughout the buildings until the entire structure succumbed. Also cited as a factor was the collapse of the nearby towers, which broke the city water main, leaving the sprinkler system in the bottom half of the building without water.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22826860", "title": "Architects & Engineers for 9/11 Truth", "section": "Section::::Advocacy.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 890, "text": "Investigations by the Federal Emergency Management Agency and the National Institute of Standards and Technology (NIST) have concluded that the buildings collapsed as a result of the impacts of the planes and of the fires that resulted from them. In 2005, a report from NIST concluded that the destruction of the World Trade Center towers was the result of progressive collapse initiated by the jet impacts and the resultant fires. A 2008 NIST report described a similar progressive collapse as the cause of the destruction of the third tallest building located at the World Trade Center site, the 7 WTC. Many mainstream scientists choose not to debate proponents of 9/11 conspiracy theories, saying they do not want to lend them unwarranted credibility. The NIST explanation of collapse is universally accepted by the structural engineering and structural mechanics research communities.\n", "bleu_score": null, "meta": null } ] } ]
null
fi53z3
Why were entertainers looked down on in Ancient Rome?
[ { "answer": "First, disclaimer: Ancient Rome is not my area for this (China is) but I have dug into this a little in my reading.\n\nBoth prostitutes and actors were classified legally as *infames*in Augustus’ moral legislation. This was in part because they were viewed as faking emotions for money, and both groups also engaged in cross-dressing. The root of your question comes down to the phenomena described by Bakhtin as “the low-Other” in society which was further refined by Stallybrass and White as a commodification of desire for the low-Other by those at the top of the social hierarchy in their bid to maintain social control. \n\nThat is, prostitutes and actors were on the same social level, and playwrights took inspiration from the work of prostitutes as that of equivalent to actors and repackaged them for consumption by the masses as both entertainment and cautionary tales. The stock character in Roman Comedy of the *meretrix* is almost always either a “hooker with a heart of gold” type or “heartless man-eater only out for the money” type, and are either commended in the text of plays as support for the main male or denigrated as villainous. There were even direct comparisons between acting and prostitution as professions that fake emotion in the plays. \n\nIn Roman cities neither actors nor prostitutes were segregated from the rest of society—unlike in Renaissance England, or China and Japan. Actors could be legally subjected to being beatings in the streets by Roman citizens, though this was later restricted to only while they were on stage. That actors would satirize powerful political or social figures did make them entertaining to the general populace, but it also was extremely risky. Gladiators were also classified as *infames*, and their audience was highly entertained, but this doesn’t mean they were of high social standing like members of professional sports today. So, too, with actors and prostitutes.\n\nAnd that’s what Stallybrass and White point out: the top of the social food chain will co-opt the figures of the lowest classes for entertainment value initially and then will strip those figures and art forms from that low-Other and realign them with the upper class paradigm. That the 19th century and early 20th century changed the view of acting from a low-class activity and divested it of much of its relationship to sex work and transformed it into an elite class of celebrity has more to do with the growth of capitalism and industrialization than it does the craft itself. \n\nSources:\nFaraone, Cristopher, Christopher A. Faraone, and Laura McClure. Prostitutes and Courtesans in the Ancient World. University of Wisconsin Press, 2006. \nStallybrass, Peter, and Allon White. The Politics and Poetics of Transgression. Ithaca, NY: Cornell Univ. Press, 1995.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "25507", "title": "Roman Empire", "section": "Section::::The arts.:Performing arts.\n", "start_paragraph_id": 209, "start_character": 0, "end_paragraph_id": 209, "end_character": 771, "text": "Like gladiators, entertainers were \"infames\" in the eyes of the law, little better than slaves even if they were technically free. \"Stars\", however, could enjoy considerable wealth and celebrity, and mingled socially and often sexually with the upper classes, including emperors. Performers supported each other by forming guilds, and several memorials for members of the theatre community survive. 
Theatre and dance were often condemned by Christian polemicists in the later Empire, and Christians who integrated dance traditions and music into their worship practices were regarded by the Church Fathers as shockingly \"pagan.\" St. Augustine is supposed to have said that bringing clowns, actors, and dancers into a house was like inviting in a gang of unclean spirits.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5245886", "title": "Prostitution in ancient Greece", "section": "Section::::Prostitutes in literature.\n", "start_paragraph_id": 45, "start_character": 0, "end_paragraph_id": 45, "end_character": 562, "text": "During the time of the New Comedy (of ancient Greek comedy), prostitute characters became, after the fashion of slaves, the veritable stars of the comedies. This could be for several reasons: while Old Comedy (of ancient Greek comedy) concerned itself with political subjects, New Comedy dealt with private subjects and the daily life of Athenians. Also, social conventions forbade well-born women from being seen in public; while the plays depicted outside activities. The only women who would normally be seen out in the street were logically the prostitutes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41363062", "title": "Charles Buck (minister)", "section": "Section::::Life.:Through completing formal education.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 223, "text": "During the 18th century \"huge crowds\" attended theaters and some engaged in immoral behavior. \"In front of the stage, young men would drink together, eat nuts and mingle with prostitutes down below in the notorious ‘pit’.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "16884226", "title": "Balatro", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 206, "text": "In ancient Rome, a balatro was a professional jester or buffoon. Balatrones were paid for their jests, and the tables of the wealthy were generally open to them for the sake of the amusement they afforded.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21297236", "title": "Infamia", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 556, "text": "\"Infamia\" was an \"inescapable consequence\" for certain professionals, including prostitutes and pimps, entertainers such as actors and dancers, and gladiators. \"Infames\" could not, for instance, provide testimony in a court of law. They were liable to corporal punishment, which was usually reserved for slaves. The \"infamia\" of entertainers did not exclude them from socializing among the Roman elite, and entertainers who were \"stars\", both men and women, sometimes became the lovers of such high-profile figures as the \"dictator\" Sulla and Mark Antony.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24171", "title": "Plautus", "section": "Section::::Stagecraft.:Relationship with the audience.\n", "start_paragraph_id": 129, "start_character": 0, "end_paragraph_id": 129, "end_character": 486, "text": "Goldberg says that \"these changes fostered a different relationship between actors and the space in which they performed and also between them and their audiences\". Actors were thrust into much closer audience interaction. Because of this, a certain acting style became required that is more familiar to modern audiences. 
Because they would have been in such close proximity to the actors, ancient Roman audiences would have wanted attention and direct acknowledgement from the actors.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34719016", "title": "Walter Scott Moore", "section": "Section::::Public office.:State.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 661, "text": ". . . the unsavory apartments over Bob Eckert's saloon on Court Street were the scene of an orgie such as even the most salacious pen of ancient Rome never dared describe. . . . The four drunken male brutes and three of the four drunken female brutes were sprawled, almost nude, all in the most atrocious attitudes, about the front room. . . . The negro waiter was stretched stupid upon the floor, surrounded with a halo of the fifty-eight champagne bottles which the party had emptied, while the man who now asks your suffrage for Secretary of State, almost stark nude, was endeavoring to arouse the waiter to go and get a photographer for an obscene purpose.\n", "bleu_score": null, "meta": null } ] } ]
null
5tgcdd
How were the Romans able to field much larger armies than Medieval Europe?
[ { "answer": "Firstly, keep in mind that the ancient armies you are describing were fielded by what were essentially ancient superpowers. At the time of the Punic Wars, the Carthaginians held an empire that controlled the western Mediterranean, spanning much of North Africa and Spain. Similarly, when you look at the various Persian Empires, they controlled vast territories and had a large population base to draw upon (consider Thermopylae, where the smaller Greece was only able to assemble a few thousand men against the hundred thousand of Persia). The size of the ancient powers' armies was larger than those of medieval kingdoms in part because the ancient powers were simply larger than medieval kingdoms.\n\nWith the Gauls and other European barbarians that the Romans encountered, often the Romans were encountering an entire society of people who were living there (Gaul) or an entire society that had picked up and migrated (Cimbri, Teutones). The size of their armies gets blurred a bit there, since numbers may include civilians as well as soldiers.\n\nAnd lastly, at least later in the Roman Republic and through the Empire, the army consisted in large part of *auxilia,* or foreign auxiliary forces drawn up from allied and conquered territories, which included most of the European countries you are comparing the Roman Army to. So, the reason the Roman Army was so much larger than the English Army is in part due to the fact that the English Army was just one part of the Roman Army.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "45420", "title": "Napoleonic Wars", "section": "Section::::Military legacy.:Enlarged scope.\n", "start_paragraph_id": 133, "start_character": 0, "end_paragraph_id": 133, "end_character": 412, "text": "Until the time of Napoleon, European states employed relatively small armies, made up of both national soldiers and mercenaries. These regulars were highly drilled professional soldiers. Ancien Régime armies could only deploy small field armies due to rudimentary staffs and comprehensive yet cumbersome logistics. Both issues combined to limit field forces to approximately 30,000 men under a single commander.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "57015", "title": "Battle of Adrianople", "section": "Section::::Composition of the Gothic forces.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 252, "text": "More recent scholarly works mostly agree that the armies were similarly sized, that the Gothic infantry was more decisive than their cavalry, and that neither the Romans nor the Goths used stirrups until the 6th century. probably brought by the Avars.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7720600", "title": "Anglo-Saxon military organization", "section": "Section::::Military organization in the pre-settlement period (400-600).:Method of fighting and army composition.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 324, "text": "It is not clear how large armies were; the Saxons themselves described anything more than 30 warriors as an army. This was about same number as a ship's crew. 
The general view is that an army would have been made up of a number of warbands under a senior chief, or \"Althing\", and would have been between 200 and 600 strong.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "62583", "title": "History of Italy", "section": "Section::::Roman period.:Roman Empire.\n", "start_paragraph_id": 57, "start_character": 0, "end_paragraph_id": 57, "end_character": 1087, "text": "Despite its military strength, the Empire made few efforts to expand its already vast extent; the most notable being the conquest of Britain, begun by emperor Claudius (47), and emperor Trajan's conquest of Dacia (101–102, 105–106). In the 1st and 2nd century, Roman legions were also employed in intermittent warfare with the Germanic tribes to the north and the Parthian Empire to the east. Meanwhile, armed insurrections (e.g. the Hebraic insurrection in Judea) (70) and brief civil wars (e.g. in 68 CE the year of the four emperors) demanded the legions' attention on several occasions. The seventy years of Jewish–Roman wars in the second half of the 1st century and the first half of the 2nd century were exceptional in their duration and violence. An estimated 1,356,460 Jews were killed as a result of the First Jewish Revolt; the Second Jewish Revolt (115–117) led to the death of more than 200,000 Jews; and the Third Jewish Revolt (132–136) resulted in the death of 580,000 Jewish soldiers. The Jewish people never recovered until the creation of the state of Israel in 1948.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "264291", "title": "Battle of Magnesia", "section": "Section::::The battle.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 547, "text": "In all, both writers agree that the Roman army was about 30,000 strong and the Seleucids about 70,000. However, modern sources state that the two armies might have been not that numerically different and supports that the Romans fielded about 50,000 men as did Antiochus. A popular anecdote regarding the array of the two armies is that Antiochus supposedly asked Hannibal whether his vast and well-armed formation would be enough for the Roman Republic, to which Hannibal tartly replied, \"\"quite enough for the Romans, however greedy they are.\"\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2000963", "title": "Limitanei", "section": "Section::::Organization.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 414, "text": "The size of the army, and therefore of the limitanei, remains controversial. A.H.M. Jones and Warren Treadgold argue that the late Roman army was significantly larger than earlier Roman armies, and Treadgold estimates they had up to 645,000 troops. Karl Strobel denies this, and Strobel estimates that the late Roman army had some 435,000 troops in the time of Diocletian and 450,000 in the time of Constantine I.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7948631", "title": "Structural history of the Roman military", "section": "Section::::Successive crises (238 AD– 359 AD).\n", "start_paragraph_id": 56, "start_character": 0, "end_paragraph_id": 56, "end_character": 1944, "text": "By the late Empire, enemy forces in both the East and West were \"sufficiently mobile and sufficiently strong to pierce [the Roman] defensive perimeter on any selected axis of penetration\"; from the 3rd century onwards, both Germanic tribes and Persian armies pierced the frontiers of the Roman Empire. 
In response, the Roman army underwent a series of changes, more organic and evolutionary than the deliberate military reforms of the Republic and early Empire. A stronger emphasis was placed upon ranged combat ability of all types, such as field artillery, hand-held \"ballistae\", archery and darts. Roman forces also gradually became more mobile, with one cavalryman for every three infantryman, compared to one in forty in the early Empire. Additionally, the Emperor Gallienus took the revolutionary step of forming an entirely cavalry field army, which was kept as a mobile reserve at the city of Milan in northern Italy. It is believed that Gallienus facilitated this concentration of cavalry by stripping the legions of their integral mounted element. A diverse range of cavalry regiments existed, including \"catafractarii\" or \"clibanarii\", \"scutarii\", and legionary cavalry known as \"promoti\". Collectively, these regiments were known as \"equites\". Around 275 AD, the proportion of \"catafractarii\" was also increased. There is some disagreement over exactly when the relative proportion of cavalry increased, whether Gallienus' reforms occurred contemporaneously with an increased reliance on cavalry, or whether these are two distinct events. Alfoldi appears to believe that Gallienus' reforms were contemporaneous with an increase in cavalry numbers. He argues that, by 258, Gallienus had made cavalry the predominant troop type in the Roman army in place of heavy infantry, which dominated earlier armies. According to Warren Treadgold, however, the proportion of cavalry did not change between the early 3rd and early 4th centuries.\n", "bleu_score": null, "meta": null } ] } ]
null
4f3gx9
Did the UK have any options at the start of World War I other than to commit a land army?
[ { "answer": "Not really; for one thing, with Britain now at war, the staff talks with the French Army came into play, wherein the British would despatch an expeditionary force to assist them in fighting the Germans. Plus, the immediate reason for British involvement was the Invasion of Belgium, so the British could hardly be seen to sit around and do nothing while there was serious fighting taking place across the Channel.\n\nYou also have to take into account the fact that it took until November 1914 for the Blockade to actually be in place, and it wasn't until March 1915 that it became much stronger (and also borderline illegal). The British did not have the time to wait around for two months doing nothing, while their now-allies the Russians and the French bore the brunt of the fighting. It's also important to consider that a more 'material' contribution by the British, ie actually sending ground forces to fight instead of just relying on their Navy while the French and Russians absorbed casualties, would give the British greater influence in negotiations when the war was over.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "9701860", "title": "Soldier settlement (Australia)", "section": "Section::::World War I.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 807, "text": "Such settlement plans initially began during World War I, with South Australia first enacting legislation in 1915. Similar schemes gained impetus across Australia in February 1916 when a conference of representatives from the Commonwealth and all States was held in Melbourne to consider a report prepared by the Federal Parliamentary War Committee regarding the settlement of returned soldiers on the land. The report focused specifically on a Commonwealth-State cooperative process of selling or leasing Crown land to soldiers who had been demobilised following the end of their service in this first global conflict. The meeting agreed that it was the Commonwealth Government's role to select and acquire land whilst the State government authorities would process applications and grant land allotments.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5664901", "title": "P J Magennis Pty Ltd v Commonwealth", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 405, "text": "The Commonwealth government wished to purchase land for resettlement after World War II. Because the States are not required to acquire property on just terms, the Commonwealth government entered into a deal with the New South Wales government, which would purchase the land for a lower price. The Commonwealth government would then pay the New South Wales government in the form of a grant (section 96).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1088510", "title": "55th (West Lancashire) Infantry Division", "section": "Section::::Formation.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 559, "text": "In 1901, following lessons learned from the Second Boer War and diplomatic clashes with the growing German Empire, the United Kingdom sought to reform the British Army to be able to fight a European adversary. This task fell to Secretary of State for War, Richard Haldane who implemented policies known as the Haldane Reforms. The Territorial and Reserve Forces Act 1907 created a new Territorial Force by merging the Yeomanry and the Volunteer Force in 1908. 
This resulted in the creation of 14 Territorial divisions, including the West Lancashire Division.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1033528", "title": "48th (South Midland) Division", "section": "Section::::Interwar.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 1228, "text": "During the interwar period, the British Army envisioned that, during future conflicts, the Territorial Army would be used as the basis for future expansion so as to avoid raising a new Kitchener's Army. However, as the 1920s and 1930s wore on, the British Government prioritised funding for the regular army over the territorials, allowing recruitment and equipment levels to languish. Baron Templemore, as part of a House of Lords debate on the Territorial Army, stated that the division - on 1 October 1924 - mustered 338 officers and 7,721 other ranks. Historian David French highlights that \"by April 1937 the Territorial Army had reached less than 80 per cent of its shrunken peacetime establishment\" and \"Its value as an immediate reserve was, therefore, limited.\" Edward Smalley comments that \"48th Divisional Signals operated on an improvised organizational structure\" for most of the 1930s, due to being below 50 per cent strength. He further highlights how the TA, and the division in particular, \"never kept pace with technological developments.\" In 1937, the division was operating just two radio sets on a full-time basis and had to borrow additional units from the 3rd Infantry Division for annual training camps.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "466684", "title": "Territorial Force", "section": "Section::::Post-war.\n", "start_paragraph_id": 73, "start_character": 0, "end_paragraph_id": 73, "end_character": 1017, "text": "Another issue was military aid to the civil power during the industrial unrest that followed the war. The thinly-stretched army was reluctant to become involved, so Churchill proposed using the territorials. Concerns that the force would be deployed to break strikes adversely affected recruitment, which had recommenced on 1 February 1920, resulting in promises that the force would not be so used. The government nevertheless deployed the Territorial Force in all but name during the miner's strike of April 1921 by the hasty establishment of the Defence Force. The new organisation relied heavily on territorial facilities and personnel, and its units were given territorial designations. Territorials were specially invited to enlist. Although those that did were required to resign from the Territorial Force, their service in the Defence Force counted towards their territorial obligations, and they were automatically re-admitted to the Territorial Force once their service in the Defence Force was completed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1033528", "title": "48th (South Midland) Division", "section": "Section::::Formation.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 617, "text": "In 1901, following lessons learned from the Second Boer War and diplomatic clashes with the growing German Empire, the United Kingdom sought to reform the British Army so it would be able to engage in European affairs if required. This task fell to Secretary of State for War, Richard Haldane who implemented several policies known as the Haldane Reforms. 
As part of these reforms, the Territorial and Reserve Forces Act 1907 created a new Territorial Force by merging the existing Yeomanry and Volunteer Force in 1908. This resulted in the creation of 14 Territorial Divisions, including the South Midland Division.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "57391954", "title": "British home army in the First World War", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 1007, "text": "Although the territorials could not be compelled to serve outside the United Kingdom, they could volunteer to do so, and when large numbers did, units of the Territorial Force began to be posted overseas. By July 1915, the home army had been stripped of all its original territorial divisions, and their places in the home defences were taken by second-line territorial units. The new units competed for equipment with the 'New Army' being raised to expand the army overseas, the reserves of which were also allocated to home defence while they trained, and suffered from severe shortages. The second line's task in home defence was also complicated by having to supply replacement drafts to the first line and the need to train for their own eventual deployment overseas. Most of the second line divisions had departed the country by 1917, and the territorial brigades in those that remained were replaced by brigades of the Training Reserve, created in 1916 by a reorganisation of New Army reserve units.\n", "bleu_score": null, "meta": null } ] } ]
null
17j2q8
What's the difference between an endosome and lysosome?
[ { "answer": "My understanding is that all material that's internalised by a cell starts as an endosome. If this material is destined for degradation, it becomes a lysosome. I.e., an endosome is a step on the way to lysosome. ", "provenance": null }, { "answer": "An endosome is simply a small, membrane bound compartment in eukaryotic cells that function as a sorting mechanism. Several things can go to the endosome, either from endocytosis from the plasma membrane, or from the Golgi. From here, endosomes can either recycle back to the plasma membrane, Golgi, etc, or mature further into lysosomes (e.g. fuse with existing lysosomes).", "provenance": null }, { "answer": "They are different steps along a trafficking pathway.\n\nThe first step is the early endosome, the vesicle that has been internalized via endocytosis. They mature to late endosomes (some proteins/lipids get sent back to the plasma membrane via recycling endosomes) mainly by action of a proton pump that is acidifying the endosome.\n\nThe next step is multivesicular bodies which are made by fusing multiple late endosomes together and then budding events into the lumen (inside) of the MCB. The internal vesicles are destined for degradation by the lyosome.\n\nMVBs fuse with lyosomes and the internal vesicles and their contents get destroyed.\n\nEndosomes and lysosomes have different lumenal pHs and different protein contents. Lysosomes have enzymes for breaking down lipids and proteins. They also have a low (acidic) pH created by proton pumps. The enzymes breaking down the lipids/proteins only work in low pH which is a method of making sure that they don't function unless inside the lysosome, and so that if a lysosome breaks it doesn't digest the cell.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "501861", "title": "Endosome", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 791, "text": "An endosome is a membrane-bound compartment inside a eukaryotic cell. It is an organelle of the endocytic membrane transport pathway originating from the trans Golgi network. Molecules or ligands internalized from the plasma membrane can follow this pathway all the way to lysosomes for degradation, or they can be recycled back to the plasma membrane, in the endocytic cycle. Molecules are also transported to endosomes from the trans Golgi network and either continue to lysosomes or recycle back to the Golgi apparatus. Endosomes can be classified as early, sorting, or late depending on their stage post internalization. Endosomes represent a major sorting compartment of the endomembrane system in cells. In HeLa cells, endosomes are approximately 500 nm in diameter when fully mature.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "501861", "title": "Endosome", "section": "Section::::Types.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 411, "text": "There are three different types of endosomes: \"early endosomes\", \"late endosomes\", and \"recycling endosomes\". They are distinguished by the time it takes for endocytosed material to reach them, and by markers such as rabs. They also have different morphology. Once endocytic vesicles have uncoated, they fuse with early endosomes. 
Early endosomes then \"mature\" into late endosomes before fusing with lysosomes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "63263", "title": "Evolution of flagella", "section": "Section::::Eukaryotic flagellum.:Endogenous, autogenous and direct filiation models.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 723, "text": "These models argue that cilia developed from pre-existing components of the eukaryotic cytoskeleton (which has tubulin and dynein also used for other functions) as an extension of the mitotic spindle apparatus. The connection can still be seen, first in the various early-branching single-celled eukaryotes that have a microtubule basal body, where microtubules on one end form a spindle-like cone around the nucleus, while microtubules on the other end point away from the cell and form the cilium. A further connection is that the centriole, involved in the formation of the mitotic spindle in many (but not all) eukaryotes, is homologous to the cilium, and in many cases \"is\" the basal body from which the cilium grows.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9577488", "title": "Exosome (vesicle)", "section": "Section::::Terminology.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 275, "text": "Evolving consensus in the field is that the term \"exosome\" should be strictly applied to an EV of endosomal origin. Since it can be difficult to prove such an origin after an EV has left the cell, variations on the term \"extracellular vesicle\" are often appropriate instead.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "501861", "title": "Endosome", "section": "Section::::Pathways.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 510, "text": "There are three main compartments that have pathways that connect with endosomes. More pathways exist in specialized cells, such as melanocytes and polarized cells. For example, in epithelial cells, a special process called transcytosis allows some materials to enter one side of a cell and exit from the opposite side. Also, in some circumstances, late endosomes/MVBs fuse with the plasma membrane instead of with lysosomes, releasing the lumenal vesicles, now called exosomes, into the extracellular medium.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "501861", "title": "Endosome", "section": "Section::::Function.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 818, "text": "Endosomes provide an environment for material to be sorted before it reaches the degradative lysosome. For example, low-density lipoprotein (LDL) is taken into the cell by binding to the LDL receptor at the cell surface. Upon reaching early endosomes, the LDL dissociates from the receptor, and the receptor can be recycled to the cell surface. The LDL remains in the endosome and is delivered to lysosomes for processing. LDL dissociates because of the slightly acidified environment of the early endosome, generated by a vacuolar membrane proton pump V-ATPase. On the other hand, EGF and the EGF receptor have a pH-resistant bond that persists until it is delivered to lysosomes for their degradation. 
The mannose 6-phosphate receptor carries ligands from the Golgi destined for the lysosome by a similar mechanism.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "501861", "title": "Endosome", "section": "Section::::Types.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 718, "text": "Another unique identifying feature that differs between the various classes of endosomes is the lipid composition in their membranes. Phosphatidyl inositol phosphates (PIPs), one of the most important lipid signaling molecules, is found to differ as the endosomes mature from early to late. PI(4,5)P is present on plasma membranes, PI(3)P on early endosomes, PI(3,5)P on late endosomes and PI(4)P on the trans Golgi network. These lipids on the surface of the endosomes help in the specific recruitment of proteins from the cytosol, thus providing them an identity. The inter-conversion of these lipids is a result of the concerted action of phosphoinositide kinases and phosphatases that are strategically localized \n", "bleu_score": null, "meta": null } ] } ]
null
4spadx
What did people think of fossils before modern archaeology and carbon dating?
[ { "answer": "If you don't get an answer here, I recommend posting to /r/askhistorians instead.", "provenance": null }, { "answer": "Fairly insightful views were held by a few of the Ancient Greek philosophers, most notably Aristotle, who noticed the similarity between shells of contemporary sea creatures and fossilised shells he came across. He speculated that areas of former life had been turned to stone by the particularly strong petrifying forces of vaporous exhalations emanating from nearby bodies of water. Wrong of course, but Aristotle was keen to give an explanation rooted in natural processes of the Earth. There are records of at least one Ancient Greek (I forget who) making the leap that current areas of extensive land had once been underwater, using the fossil shells of marine animals as evidence. \n\nMany explanations elsewhere centred on story telling and legend, there were (and are) countless different explanations and names for various fossils, a lot of which seems to have recognised that a fossil was once something living - many parts of Asia would call any fossilised bones dragon bones. Common finds in England include sharks teeth and ammonites, which were called tongue stones and snake stones respectively, the latter being used to protect against snakebites. Some claimed that they had fallen from the moon, or there was a popular legend that ammonites were snakes which were turned to stone by St. Hilda of Whitby (614-680). Often snake heads were carved on to the ammonites before selling them to tourists. Three ammonites are on the Whitby town shield, complete with snake heads. \n\nA commonly accepted explanation for fossils in the Middle Ages were that they were pieces of preserved life all originating from the same event - the great biblical flood. I'm sure there were explanations linked to other religions in the non-Christian world. \n\nThe Renaissance saw a more rigorous study of many natural things and Da Vinci strongly rejected the biblical flood narrative, with the simple logic that washed up things should be all mixed up, but fossil assemblages were often found in the kind of communities you would expect to see them in during life.\n\nThings became more illuminated with the birth of modern geology. In the late 1700's leading up to 1800 the [law of superposition](_URL_2_) became accepted, ideas that the Earth was actually very much older than a few thousand years started to be incorporated into scientific theories, and [William 'Strata' Smith](_URL_0_) joined up many of these dots when observing the different layers of the Earth and their fossil assemblages, formulating the [principle of faunal succession](_URL_3_). Further study using this principle allowed geologists to determine the *relative* time sequences in which layers were deposited, and is the means by which distinctions in the stratigraphic record are formally made. \n\nIt was also an appreciation of this huge timescale and gradual changes in preserved fauna through layers of earth and time that helped Darwin to formulate a theory of evolution. \n\nWith the advent of radiometric dating the geological timescale could be dated absolutely, providing a picture of how long ago certain life existed, although there was little change in the timing of strata and fossils relative to one another. \n\nCarbon-14 is often used in archaeology as you say, but due to its relatively short half life of 5,730 years, it's not useful in dating anything older than about 60,000 years. 
Obviously this is no good for the 542 million years since the Cambrian Explosion of life on Earth, and so other radiometric systems with longer half-lives (usually the uranium-lead or potassium-argon systems) are used to date the surrounding rock of a fossil rather than the fossil itself. \n\nOne last twist - sedimentary rock cannot be dated directly like this, as that would give the date at which the minerals in the rock originally cooled from igneous rock, before being weathered and eventually ending up in a sedimentary sequence, which could be millions or even billions of years later! Radiometric dating must be used on igneous material at the start and end of sedimentary sequences to give an accurate (but potentially large) date range. Layers of preserved volcanic ash are particularly useful, as they will have been incorporated into the strata at practically the same time they were created. Therefore using both absolute and relative dating methods together is necessary to build up a full picture of the fossil record and its timescale. \n\nGetting back to the original question on ideas of fossils before all the refinement of timescales, the story of Johann Beringer deserves a mention. A medical academic at the University of Würzburg in Germany in the 18th century, Beringer was interested in the serious study of fossils, which were not understood at the time, and he fell victim to a cruel and extended hoax started in 1725, with the creation of many fakes placed for him to discover. [Beringer's Lying Stones](_URL_1_), as they came to be known, were quite fantastical, but Beringer only realised what was going on after publishing a book about them. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "22547077", "title": "Fossils of the Burgess Shale", "section": "Section::::Significance.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 1145, "text": "While some geological evidence was presented to suggest that earlier fossils did exist, for a long time this evidence was widely rejected. Fossils from the Ediacaran period, immediately preceding the Cambrian, were first found in 1868, but scientists at that time assumed there was no Precambrian life and therefore dismissed them as products of physical processes. Between 1883 and 1909 Walcott discovered other Precambrian fossils, which were accepted at the time. However, in 1931 Albert Charles Seward dismissed all claims to have found Precambrian fossils. In 1946, Reg Sprigg noticed \"jellyfishes\" in rocks from Australia's Ediacara Hills. However, while these are now recognized as coming from the Ediacaran period, they were thought at the time to have been formed in the Cambrian. From 1872 onwards small shelly fossils, none more than a few millimeters in size, were found in very Early Cambrian rocks, and later also found in rocks dating to the end of the preceding Ediacaran period, but scientists only started in the 1960s to recognize that these were left by a wide range of animals, some of which are now recognized as molluscs.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7991116", "title": "History of paleontology", "section": "Section::::Prior to the 17th century.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 1108, "text": "However, most 16th-century Europeans did not recognize that fossils were the remains of living organisms.
The etymology of the word \"fossil\" comes from the Latin for things having been dug up. As this indicates, the term was applied to a wide variety of stone and stone-like objects without regard to whether they might have an organic origin. 16th-century writers such as Gesner and Georg Agricola were more interested in classifying such objects by their physical and mystical properties than they were in determining the objects' origins. In addition, the natural philosophy of the period encouraged alternative explanations for the origin of fossils. Both the Aristotelian and Neoplatonic schools of philosophy provided support for the idea that stony objects might grow within the earth to resemble living things. Neoplatonic philosophy maintained that there could be affinities between living and non-living objects that could cause one to resemble the other. The Aristotelian school maintained that the seeds of living organisms could enter the ground and generate objects resembling those organisms.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "58628442", "title": "Modern archaeology", "section": "Section::::New technology.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 882, "text": "Undoubtedly the major technological development in 20th century archaeology was the introduction of radiocarbon dating, based on a theory first developed by American scientist Willard Libby in 1949. Despite its many limitations (compared to later methods it is inaccurate; it can only be used on organic matter; it is reliant on a dataset to calibrate it; and it only works with remains from the last 10,000 years), the technique brought about a revolution in archaeological understanding. For the first time, it was possible to put reasonably accurate dates on discoveries such as bones. This in some cases led to a complete reassessment of the significance of past finds. Classic cases included the Red Lady of Paviland. It was not until 1989 that the Catholic Church allowed the technique to be used on the Turin Shroud, indicating that the linen fibres were of medieval origin.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19349161", "title": "Cambrian explosion", "section": "Section::::History and significance.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 1250, "text": "The first discovered Cambrian fossils were trilobites, described by Edward Lhuyd, the curator of Oxford Museum, in 1698. Although their evolutionary importance was not known, on the basis of their old age, William Buckland (1784–1856) realised that a dramatic step-change in the fossil record had occurred around the base of what we now call the Cambrian. Nineteenth-century geologists such as Adam Sedgwick and Roderick Murchison used the fossils for dating rock strata, specifically for establishing the Cambrian and Silurian periods. By 1859, leading geologists including Roderick Murchison, were convinced that what was then called the lowest Silurian stratum showed the origin of life on Earth, though others, including Charles Lyell, differed. In \"On the Origin of Species\", Charles Darwin considered this sudden appearance of a solitary group of trilobites, with no apparent antecedents, and absence of other fossils, to be \"undoubtedly of the gravest nature\" among the difficulties in his theory of natural selection. 
He reasoned that earlier seas had swarmed with living creatures, but that their fossils had not been found due to the imperfections of the fossil record. In the sixth edition of his book, he stressed his problem further as:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7991116", "title": "History of paleontology", "section": "Section::::Overview of developments in the 20th century.:Pre-Cambrian fossils.\n", "start_paragraph_id": 61, "start_character": 0, "end_paragraph_id": 61, "end_character": 1485, "text": "Prior to 1950 there was no widely accepted fossil evidence of life before the Cambrian period. When Charles Darwin wrote \"The Origin of Species\" he acknowledged that the lack of any fossil evidence of life prior to the relatively complex animals of the Cambrian was a potential argument against the theory of evolution, but expressed the hope that such fossils would be found in the future. In the 1860s there were claims of the discovery of pre-Cambrian fossils, but these would later be shown not to have an organic origin. In the late 19th century Charles Doolittle Walcott would discover stromatolites and other fossil evidence of pre-Cambrian life, but at the time the organic origin of those fossils was also disputed. This would start to change in the 1950s with the discovery of more stromatolites along with microfossils of the bacteria that built them, and the publication of a series of papers by the Soviet scientist Boris Vasil'evich Timofeev announcing the discovery of microscopic fossil spores in pre-Cambrian sediments. A key breakthrough would come when Martin Glaessner would show that fossils of soft bodied animals discovered by Reginald Sprigg during the late 1940s in the Ediacaran hills of Australia were in fact pre-Cambrian not early Cambrian as Sprigg had originally believed, making the Ediacaran biota the oldest animals known. By the end of the 20th century, paleobiology had established that the history of life extended back at least 3.5 billion years.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44202437", "title": "Ust'-Ishim man", "section": "Section::::Genome sequencing.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 527, "text": "The fossil was examined by paleoanthropologists in the Max Planck Institute for Evolutionary Anthropology, located in Leipzig, Germany. Carbon dating showed that the fossil dates back to 45,000 years ago, making it the oldest human fossil to be so dated. Scientists found the DNA intact and were able to sequence the complete genome of Ust'-Ishim man to contemporary standards of quality. Though genomes have been sequenced of hominins pre-dating Ust'-Ishim man, this is the oldest modern human genome to be sequenced to date.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "298509", "title": "La Brea Tar Pits", "section": "Section::::Scientific resource.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 569, "text": "Contemporary excavations of the bones started in 1913–1915. In the 1940s and 1950s, public excitement was generated by the preparation of previously recovered large mammal bones. Subsequent study demonstrated the fossil vertebrate material was well preserved, with little evidence of bacterial degradation of bone protein. They were believed to be from the last glacial period, believed to be about 30,000 years ago. 
After radiocarbon dating redated the last glacial period as still occurring 11 to 12,000 years ago, the fossils were redated to be 10–20,000 years old.\n", "bleu_score": null, "meta": null } ] } ]
null
34v7ry
Why do children dislike the taste of alcohol so much?
[ { "answer": "Who likes the taste of alchohol?", "provenance": null }, { "answer": "Why are you giving children alcohol?\n\n\"Why do children hate the back of vans with blacked out windows so much?\"", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "39164260", "title": "Sweetened beverage", "section": "Section::::In the United States.:Influence of the household and media/advertisement.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 565, "text": "Taste preferences and eating behaviors in children are molded at a young age by factors, such as parents' habits and advertisements. One study compared what adults and children considered when choosing beverages. For the most part, adults considered whether beverages had sugar, caffeine, and additives. Some of the 7- to 10-year-old children in the study also mentioned \"additives\" and \"caffeine\", which may be unfamiliar terms to them. This showed the possibility of the parents' influence on their children's decision-making on food choice and eating behaviors.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "40986774", "title": "Covert medication", "section": "Section::::Medical use.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 243, "text": "In the care of paediatric patients, young children may be unwilling to take medication with an unpleasant taste or smell, or due to fear of the unfamiliar. In these cases, the medication is mixed with food or drink to make it more acceptable.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12465661", "title": "Acquired taste", "section": "Section::::Acquiring the taste.:General acquisition of tastes.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 668, "text": "The process of acquiring a taste can involve developmental maturation, genetics (of both taste sensitivity and personality), family example, and biochemical reward properties of foods. Infants are born preferring sweet foods and rejecting sour and bitter tastes, and they develop a preference for salt at approximately 4 months. Neophobia (fear of novelty) tends to vary with age in predictable, but not linear, ways. Babies just beginning to eat solid foods generally accept a wide variety of foods, toddlers and young children are relatively neophobic towards food, and older children, adults, and the elderly are often adventurous eaters with wide-ranging tastes. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18505697", "title": "Alcoholism in family systems", "section": "Section::::Children.:Resilience.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 2045, "text": "Professor and psychiatric Dieter J. Meyerhoff state that the negative effects of alcohol on the body and on health are undeniable, but we should not forget the most important unit in our society that this is affects the family and the children. The family is the main institution in which the child should feel safe and have moral values. If a good starting point is given, it is less likely that when a child becomes an adult, has a mental disorder or is addicted to drugs or alcohol. According to the American Academy of Child and Adolescent Psychiatry (AACAP) children are in a unique position when their parents abuse alcohol. 
The behavior of a parent is the essence of the problem, because such children do not have and do not receive support from their own family. Seeing changes from happy to angry parents, the children begin to think that they are the reason for these changes. Self-accusation, guilt, frustration, anger arises because the child is trying to understand why this behavior is occurs. Dependence on alcohol has a huge harm in childhood and adolescent psychology in a family environment. Psychologists Michelle L. Kelley and Keith Klostermann describe the effects of parental alcoholism on children, and describe the development and behavior of these children. Alcoholic children often face problems such as behavioral disorders, oppression, crime and attention deficit disorder, and there is a higher risk of internal behavior, such as depression and anxiety. Therefore, they are drinking earlier, drinking alcohol more often and are more likely to grow from moderate to severe alcohol consumption. Young people with parental abuse and parental violence are likely to live in large crime areas, which may have a negative impact on the quality of schools and increase the impact of violence in the area. Paternity alcoholism and the general parental verbal and physical spirit of violence witnessed the fears of children and the internalization of symptoms, greater likelihood of child aggression and emotional misconduct.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2284563", "title": "Conditioned taste aversion", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 491, "text": "Conditioned taste aversion sometimes occurs when sickness is merely coincidental to, and not caused by, the substance consumed. For example, a person who becomes very sick after consuming vodka-and-orange-juice cocktails may then become averse to the taste of orange juice, even though the sickness was caused by the over-consumption of alcohol. Under these circumstances, conditioned taste aversion is sometimes known as the \"Sauce-Bearnaise Syndrome\", a term coined by Seligman and Hager.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "62462", "title": "Umami", "section": "Section::::Properties.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 348, "text": "Some population groups, such as the elderly, may benefit from umami taste because their taste and smell sensitivity is impaired by age and medication. The loss of taste and smell can contribute to poor nutrition, increasing their risk of disease. Some evidence exists to show umami not only stimulates appetite, but also may contribute to satiety.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "83859", "title": "Adolescence", "section": "Section::::Culture.:Legal issues, rights and privileges.:Alcohol and illicit drug use.:Demographic factors.\n", "start_paragraph_id": 147, "start_character": 0, "end_paragraph_id": 147, "end_character": 845, "text": "Research has generally shown striking uniformity across different cultures in the motives behind teen alcohol use. Social engagement and personal enjoyment appear to play a fairly universal role in adolescents' decision to drink throughout separate cultural contexts. 
Surveys conducted in Argentina, Hong Kong, and Canada have each indicated the most common reason for drinking among adolescents to relate to pleasure and recreation; 80% of Argentinian teens reported drinking for enjoyment, while only 7% drank to improve a bad mood. The most prevalent answers among Canadian adolescents were to \"get in a party mood,\" 18%; \"because I enjoy it,\" 16%; and \"to get drunk,\" 10%. In Hong Kong, female participants most frequently reported drinking for social enjoyment, while males most frequently reported drinking to feel the effects of alcohol.\n", "bleu_score": null, "meta": null } ] } ]
null
f9m9fz
Are Tardigrades susceptible to viral and/or bacterial infection? Can they get ‘sick’?
[ { "answer": "Virtually all living organisms (probably all, but I am not 100% certain) can get infected with viruses. And many are susceptible to bacteria. \n\nSurprisingly, a web search turned up the following article, [_URL_0_](_URL_0_) , which reports a tardigrade that was infected with a fungal pathogen. The most surprising part is that the paper is over 40 years old. I didn't even realize people studied tardigrades back then.", "provenance": null }, { "answer": "In addition to the fungi u/drkirienko mentions, tardigrades also clearly act as hosts for various bacteria. However, it's more difficult to say which of these are beneficial, neutral, or antagonistic to their hosts.\n\nThis is a little tangentially related to your question, but it's an interesting story nonetheless. When the first tardigrade genome ([*Hypsibius dujardini*](_URL_2_)), was sequenced by [Boothby et al. 2015](_URL_8_), it was reported to have a [large percentage of genes with bacterial origin](_URL_0_). This was initially considered to be evidence for horizontal gene transfer on a pretty unprecedented scale, which was pretty interesting but also led to some [rather garbage science reporting](_URL_1_). \n\nHowever, as you will have noticed if you looked at the Boothby paper carefully, these results have since been challenged by many. [Arakawa 2016](_URL_7_) and [Bemm et al. 2016](_URL_3_) both suggest that the bacterial DNA actually came from, well... bacteria. I.e., they think that the tardigrade samples were contaminated, which does seem like a more plausible explanation. Bemm et al. were even able to assemble the entire [genome of an unknown type of bacteria](_URL_9_) from the published tardigrade data.\n\nOf course, those bacterial contaminants were not necessarily actually living in the tardigrades, and may have been introduced some other way. Fortunately, some other studies have looked in more detail into actual associations between tardigrades and various microorganisms. [Vecchi et al. 2016](_URL_6_) review reports of bacteria living in the heads, skin, and guts of tardigrades, and also point out that they can act as vectors for certain bacteria which infect plants such as [*Xanthomonas campestris*](_URL_5_). The same group published a more detailed study in [Vecchi et al. 2018](_URL_4_) which identified several distinct groups of bacteria. Neither of these studies were really able to say much about the exact nature of the relationship between these bacteria and the tardigrades they live in though, so it's still somewhat unclear whether they are symbiotes, pathogens, or neither. And of course, these roles may fluctuate depending on the conditions even for the same bacterial species!", "provenance": null }, { "answer": "Well yeah. Tardigrades are susceptible to everything that other organisms are outside of their stasis which is only activated when their bodies have been desiccated. Otherwise tardigrades are just tiny bug things that eat algae juice and get eaten by slugs.", "provenance": null }, { "answer": "Tardigrades are animals, they are multi cellular eukaryotes.\nThey are susceptible to viral, and fungal infections as well as bacterial attacks, like the rest of us. While they can \"survive\" extreme conditions, they are not imortal and can die quite easily if they're exposed wrong. They are, like most microorganisms, bound to water. Outside of water, and in other environmental extremes, they go into a dormant hybernation like state of rest where they slow their metabolism and harden themselves. 
That is the state where they survive extreme pressures, radiation, heat/cold, etc. A live, awake tardigrade does not enjoy those conditions and goes dormant until the conditions improve. They can die from infections/disease as you suggest, but also from predation, being squished physically (on a slide), lack of food, or any other general cause of death that isn't environmental. Unfortunately they're not immortal in the slightest. Bacteria can be extremophiles, and hydras and polyp jellies are actually \"immortal\", but when it comes to animals, tardigrades are some of the most *resilient*, especially when in cryptobiosis.\n\nIt's worth noting that some bacteria and single-celled eukaryotes are similar in size to tardigrades, so rather than a bacterial infection, it's more like predation in some cases. Viruses infect an organism cell by cell to hijack its DNA and reproduce, so they work on this scale to even greater effect than on macro-animals. So, to answer the question: tardigrades are susceptible to other microorganisms.\n\n\n[Tardigrades: chubby, misunderstood, and not immortal](_URL_0_)", "provenance": null }, { "answer": "Although I can't give proper evidence, I would at least like to state that there has recently been the discovery of a family of viruses that infects other viruses. There's a group of nonliving pathogens that infect other nonliving pathogens. Basically anything's possible lol", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "25439126", "title": "Vulva", "section": "Section::::Clinical significance.:Sexually transmitted infections.\n", "start_paragraph_id": 54, "start_character": 0, "end_paragraph_id": 54, "end_character": 578, "text": "Parasitic infections include trichomoniasis, pediculosis pubis, and scabies. Trichomoniasis is transmitted by a parasitic protozoan and is the most common non-viral STI. Most cases are asymptomatic but may present symptoms of irritation and a discharge of unusual odor. Pediculosis pubis commonly called \"crabs\", is a disease caused by the crab louse an ectoparasite. When the pubic hair is infested the irritation produced can be intense. Scabies, also known as the \"seven year itch\", is caused by another ectoparasite, the mite \"Sarcoptes scabiei\", giving intense irritation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24151", "title": "Flatworm", "section": "Section::::Interaction with humans.:Parasitism.\n", "start_paragraph_id": 57, "start_character": 0, "end_paragraph_id": 57, "end_character": 841, "text": "Cestodes (tapeworms) and digeneans (flukes) cause diseases in humans and their livestock, whilst monogeneans can cause serious losses of stocks in fish farms. Schistosomiasis, also known as bilharzia or snail fever, is the second-most devastating parasitic disease in tropical countries, behind malaria. The Carter Center estimated 200 million people in 74 countries are infected with the disease, and half the victims live in Africa. The condition has a low mortality rate, but usually presents as a chronic illness that can damage internal organs. It can impair the growth and cognitive development of children, increasing the risk of bladder cancer in adults. 
The disease is caused by several flukes of the genus \"Schistosoma\", which can bore through human skin; those most at risk use infected bodies of water for recreation or laundry.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "377559", "title": "Schistosoma", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 375, "text": "Schistosoma is a genus of trematodes, commonly known as blood flukes. They are parasitic flatworms responsible for a highly significant group of infections in humans termed \"schistosomiasis\", which is considered by the World Health Organization as the second-most socioeconomically devastating parasitic disease (after malaria), with hundreds of millions infected worldwide.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "42940", "title": "Biostasis", "section": "Section::::Current research.:Possible approaches.:Tardigrade-disordered proteins.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 646, "text": "Tardigrades are microscopic animals that are able to enter a state of diapause and survive a remarkable array of environmental stressors, including freezing and desiccation. Research has shown that intrinsically disordered proteins in these organisms may work to stabilize cell function and protect against these extreme environmental stressors. By using peptide engineering, it is possible that scientists may be able to introduce intrinsically disordered proteins to the biological systems of larger animal organisms. This could allow larger animals to enter a state of biostasis similar to that of tardigrades under extreme biological stress.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "355522", "title": "Trematoda", "section": "Section::::Infections.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 408, "text": "Schistosomiasis (also known as bilharzia, bilharziosis or snail fever) is an example of a parasitic disease caused by one of the species of trematodes (platyhelminth infection, or \"flukes\"), a parasitic worm of the genus Schistosoma. \"Clonorchis\", \"Opisthorchis\", \"Fasciola\" and \"Paragonimus\" species, the foodborne trematodes, are another. Other diseases are caused by members of the genus \"Choledocystus\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55900097", "title": "Scuticociliatosis", "section": "Section::::Disease mechanism.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 1009, "text": "Scuticociliatosis consists of overwhelming infection of an animal's body by any one of around 20 species of scuticociliate. These unicellular organisms are free-living in marine environments but are opportunistic parasites with a diverse host range. It is unclear what triggers infection, although infection rates are known to be higher, in both experimental and aquaculture conditions, in warmer water. Low salinity has also been reported to reduce disease rates. Under some conditions, ciliates have been reported to successfully infect healthy fish, likely through the gills; other reports suggest abrasions or skin damage may be required. Scuticociliates are histophagous (tissue-eating) and extensively degrade body tissues. 
Histological postmortem examination of affected fish usually reveals ciliates in the skin and gills, blood, and internal organs, with significant damage to the brain and nervous system, which is likely responsible for behaviors such as abnormal swimming in infected individuals.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4637216", "title": "Diseases of poverty", "section": "Section::::Diseases.:Schistosomiasis.\n", "start_paragraph_id": 60, "start_character": 0, "end_paragraph_id": 60, "end_character": 878, "text": "Schistosomiasis (bilharzia) is a parasitic disease caused by the parasitic flatworm trematodes. Moreover, more than 80 percent of the 200 million people worldwide who have schistosomiasis live in sub-Saharan Africa. Infections often occur in contaminated water where freshwater snails release larval forms of the parasite. After penetrating the skin and eventually traveling to the intestines or the urinary tract, the parasite lays eggs and infects those organs. It damages the intestines, bladder, and other organs and can lead to anemia and protein-energy deficiency. Along with malaria, schistosomiasis is one of the most important parasitic co-factors aiding in HIV transmission. Epidemiological data shows schistosome-endemic areas coincide with areas of high HIV prevalence, suggesting that parasitic infections such as schistosomiasis increase risk of HIV transmission.\n", "bleu_score": null, "meta": null } ] } ]
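The contamination argument in the second answer above is at bottom a bookkeeping exercise: if most of the "foreign" DNA in an assembly sits on contigs that look wholly bacterial, contamination is a simpler explanation than wholesale horizontal gene transfer. Here is a minimal sketch of that tally in Python; the contig names, lengths, and assignments are invented for illustration and are not taken from the Boothby or Bemm papers.

```python
# Toy tally of per-contig taxonomic assignments in a genome assembly.
# All contig data below is hypothetical.
from collections import Counter

# contig id -> (assigned group, length in bp)
contigs = {
    "contig_001": ("Metazoa", 1_200_000),
    "contig_002": ("Bacteria", 350_000),
    "contig_003": ("Metazoa", 900_000),
    "contig_004": ("Bacteria", 410_000),
    "contig_005": ("unclassified", 50_000),
}

total_bp = sum(length for _, length in contigs.values())
bp_by_group = Counter()
for group, length in contigs.values():
    bp_by_group[group] += length

# Report what fraction of the assembled bases each group accounts for.
for group, bp in bp_by_group.most_common():
    print(f"{group}: {bp / total_bp:.1%} of assembled bases")
```

A large bacterial fraction concentrated on separate contigs is consistent with contamination, though a tally like this cannot by itself rule out genuine gene transfer.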
null
8b635l
In fiction, gamma radiation (esp. from nuclear weapons) is usually depicted with a greenish-yellowish colour that often makes objects glow. Does this occur in real life?
[ { "answer": "Emitting gamma rays doesn't change the appearance of an object. You can't see it at all.", "provenance": null }, { "answer": "You can also look at the aurora borealis, which is caused by interaction of fast electrons (beta radiation) with the molecules in the atmosphere. The different molecules give off different colors after being excited by interaction with the electrons.\n\nIn fiction (movies) you have the problem that the viewer needs to be guided to understand that the object is somehow special without using to much screen time - a glow is easy to do and serves the purpose. Realism is usually not the first priority. ", "provenance": null }, { "answer": "It is a fictional indicator. As you said yourself, and as other have said, gamma radiation is invisible to the human eye.\n\n*Why* fiction depicts it as neon yellow-green, I'll quote /u/thetripp \nwho answered [a similar question in 2013](_URL_0_):\n\n > One of the first widespread applications of radium was luminescence - self-powered lighting. For instance, Radium Dials or clock faces were popular, as they glowed in the dark. These materials convert the kinetic energy of radioactive decay (and subsequent ionization) into visible light. If you combine a radioactive source with the right phosphor, then electrons which were knocked away from their atoms will emit visible light when they fall back into an orbital. Zinc sulfide doped with copper was a common choice for the phosphor component in the early 1900's, which glows green.\n\n > This was also one of the first times that the dangers of radiation became apparent. Many of the factory workers who painted these dials began to be diagnosed with cancers of the blood and bones at very young ages.\n\nTLDR: radium in glow-in-the-dark applications is yellow-green, Radium = early 1900's spooky & dangerous, therefore fictional radiation is glow-in-the-dark yellow-green.", "provenance": null }, { "answer": "Very intense radiation can ionize the air and produce colorful glows — usually blueish but in the wake of nuclear tests all sorts of colors ([purples](_URL_0_), pinks, blues,etc.) have been reported. But this is not the kind of thing you'd see normally from a regular radioactive source; the sources would have to be very, very intense (like the first few minutes after a nuclear test, or during a criticality accident, which creates a brief \"[blue flash](_URL_2_)\") to do this.\n\nNuclear reactions in water make it glow blue. This is known as [Cherenkov radiation](_URL_4_ radiation). This is not the radiation itself but a byproduct of its movement through a slowing medium like water. \n\nEarly applications of radioactive substances in the 1900s through 1930s or so involved making [luminescent dials](_URL_3_) which glow a greenish yellow in the dark. Basically the radiation from the decay of radium was used to excite a phosphor and give off a steady amount of light. This is probably why green in particular became associated with radioactivity. Initially (up until the 1930s) this was associated with the magical, transformational powers of science — modern technology, modern medicine, etc. Radioactivity was hailed as a healthy thing. 
In the 1930s, several cases re-associated it with something more unpleasant and fearful. The Radium Girls occupational health case (in which many radium dial painters developed terrible bone cancers from licking their paintbrushes) and the horrible death of Eben Byers, a millionaire who essentially overdosed on radium treatments, tied the green glow of radium paint to a sense of dread. Hence our use of it today.\n\nVery radioactive objects can also be hot enough to glow the way a coal does. So [here is a low-light photo of a pellet of plutonium-238](_URL_1_), which is used for its thermal properties. This is just regular heat, but heat caused by the radioactive decay of the plutonium, so it's going to look like a typical black body: reddish, yellowish, or orangish depending on the amount of heat, like a piece of steel heated in an oven until it is very hot. (A short black-body calculation appears after this record.) Note that the photographic settings can exaggerate how much light would be given off (there's a slight glow, but it's not as bright as that photo makes it look).\n\nSo the basic answer is: radioactivity by itself doesn't have any direct color. The particles are not photons within the visible wavelengths of light, to put it simply. But radioactivity can interact with various nearby materials, or with the medium it is passing through, and these interactions can generate photons that are visible. And as should be pretty clear: the Fallout series should not for a moment be taken as a realistic indication of anything relating to nuclear weapons, radioactivity, what have you; it does not in the slightest attempt to be accurate about how these things function, and it is variously wrong and inconsistent. (That's not a criticism; it's not meant to be a science textbook. But be warned that you should not base your understanding of _anything_ real on that particular series.)", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "18616290", "title": "Gamma ray", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 655, "text": "A gamma ray, or gamma radiation (symbol γ or formula_1), is a penetrating electromagnetic radiation arising from the radioactive decay of atomic nuclei. It consists of the shortest wavelength electromagnetic waves and so imparts the highest photon energy. Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900 while studying radiation emitted by radium. In 1903, Ernest Rutherford named this radiation \"gamma rays\" based on their relatively strong penetration of matter; he had previously discovered two less penetrating types of decay radiation, which he named alpha rays and beta rays in ascending order of penetrating power.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "118575", "title": "Soft gamma repeater", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 243, "text": "A soft gamma repeater (SGR) is an astronomical object which emits large bursts of gamma-rays and X-rays at irregular intervals. 
It is conjectured that they are a type of magnetar or, alternatively, neutron stars with fossil disks around them.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25856", "title": "Radiation", "section": "Section::::Ionizing radiation.:Gamma radiation.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 569, "text": "Gamma (γ) radiation consists of photons with a wavelength less than 3x10 meters (greater than 10 Hz and 41.4 keV). Gamma radiation emission is a nuclear process that occurs to rid an unstable nucleus of excess energy after most nuclear reactions. Both alpha and beta particles have an electric charge and mass, and thus are quite likely to interact with other atoms in their path. Gamma radiation, however, is composed of photons, which have neither mass nor electric charge and, as a result, penetrates much further through matter than either alpha or beta radiation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27486376", "title": "Ionized-air glow", "section": "Section::::Occurrence.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 913, "text": "BULLET::::- Ionizing radiation is the cause of blue glow surrounding sufficient quantities of strongly radioactive materials in air, e.g. some radioisotope specimens (e.g. radium or polonium), particle beams (e.g. from particle accelerators) in air, the blue flashes during criticality accidents, and the eerie/low brightness \"purple\" to \"blue\" glow enveloping the mushroom cloud during the first several dozen seconds after nuclear explosions near sea level. An effect that has been observed only at night from atmospheric nuclear tests owing to its low brightness, with observers noticing it following the pre-dawn Trinity (test), Upshot-Knothole Annie, and the \"Cherokee\" shot of Operation Redwing. The emission of blue light is often incorrectly attributed to Cherenkov radiation. For more on ionized air glow by nuclear explosions see the near local midnight, high altitude test shot, Bluegill Triple Prime.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21779590", "title": "GRB 970508", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 305, "text": "A gamma-ray burst is a highly luminous flash associated with an explosion in a distant galaxy and producing gamma rays, the most energetic form of electromagnetic radiation, and often followed by a longer-lived \"afterglow\" emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, and radio).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "307882", "title": "Wireline (cabling)", "section": "Section::::Wireline tools.:Natural gamma ray tools.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 944, "text": "Natural gamma ray tools are designed to measure gamma radiation in the Earth caused by the disintegration of naturally occurring potassium, uranium, and thorium. Unlike nuclear tools, these natural gamma ray tools emit no radiation. The tools have a radiation sensor, which is usually a scintillation crystal that emits a light pulse proportional to the strength of the gamma ray striking it. This light pulse is then converted to a current pulse by means of a photomultiplier tube (PMT). From the photomultiplier tube, the current pulse goes to the tool's electronics for further processing and ultimately to the surface system for recording. 
The strength of the received gamma rays is dependent on the source emitting gamma rays, the density of the formation, and the distance between the source and the tool detector. The log recorded by this tool is used to identify lithology, estimate shale content, and depth correlation of future logs.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10134", "title": "Electromagnetic spectrum", "section": "Section::::Regions.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 582, "text": "The convention that EM radiation that is known to come from the nucleus, is always called \"gamma ray\" radiation is the only convention that is universally respected, however. Many astronomical gamma ray sources (such as gamma ray bursts) are known to be too energetic (in both intensity and wavelength) to be of nuclear origin. Quite often, in high energy physics and in medical radiotherapy, very high energy EMR (in the 10 MeV region)—which is of higher energy than any nuclear gamma ray—is not called X-ray or gamma-ray, but instead by the generic term of \"high energy photons.\"\n", "bleu_score": null, "meta": null } ] } ]
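The "glows like a coal" point in the last answer is ordinary black-body physics, so you can estimate where the pellet's emission peaks with Wien's displacement law. A minimal sketch in Python; the ~1300 K surface temperature is an assumed round figure for illustration, not a number given in the record.

```python
# Estimate where a hot radioisotope pellet's thermal emission peaks,
# using Wien's displacement law: lambda_max = b / T.
WIEN_B = 2.898e-3  # Wien's displacement constant, in metre-kelvins

def peak_wavelength_nm(temp_kelvin: float) -> float:
    """Peak emission wavelength (nm) of a black body at temp_kelvin."""
    return WIEN_B / temp_kelvin * 1e9

# Assumed temperatures: a glowing pellet, a lamp filament, the Sun.
for temp in (1300, 3000, 5800):
    print(f"{temp:>5} K -> peak near {peak_wavelength_nm(temp):6.0f} nm")
```

At the assumed ~1300 K the peak sits far into the infrared (around 2200 nm); only the short-wavelength tail reaches the visible band, which is why such a pellet glows dull red-orange rather than fictional green.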
null
3bpfy3
When and why did the "corporation" become the dominant business entity in America, when all of the great gilded age companies were organized as "trusts?"
[ { "answer": "I'm not sure I agree with the premise. The corporation's popularity spread with the growth of railroads: large undertakings, needing lots of investors, operating over large areas, usually with some years before any dividends would accrue—and most importantly, whose operations were inherently dangerous. Investors naturally sought to be shielded from personal liability for wrongs done by some remote employee.\n\nTrusts arose much later, as a way to get around the early restrictions on corporations. State laws often did not allow corporations to own stock in other companies, to operate in more than one state, or to undertake activities (such as owning an office building not entirely for their own use) even slightly peripheral to the powers enumerated in their charters.", "provenance": null }, { "answer": " > when all of the great gilded age companies were organized as \"trusts?\"\n\nThis is a misunderstanding. The trusts were mechanisms to consolidate and streamline control of the vast web of corporations that made up the great \"Gilded Age\" business empires, they weren't the operating businesses themselves.\n\nFor example the first such 19th century business trust was established by Standard Oil in 1882, in which the individual shareholders of the many separate Standard Oil corporations agreed to convey their shares to the trust. The trust was governed by a board of 9 trustees, with John D. Rockefeller owning 41% of the trust's certificates.\n\nYou can see how this makes governing the corporate empire much easier than having to deal with the individual stock and governance of each corporate entity separately.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "235657", "title": "Corporate governance", "section": "Section::::History.:United States of America.\n", "start_paragraph_id": 76, "start_character": 0, "end_paragraph_id": 76, "end_character": 561, "text": "Robert E. Wright argues in \"Corporation Nation\" (2014) that the governance of early U.S. corporations, of which over 20,000 existed by the Civil War of 1861-1865, was superior to that of corporations in the late 19th and early 20th centuries because early corporations governed themselves like \"republics\", replete with numerous \"checks and balances\" against fraud and against usurpation of power by managers or by large shareholders. (The term \"robber baron\" became particularly associated with US corporate figures in the Gilded Age - the late 19th century.)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "712560", "title": "Corporation (feudal Europe)", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 477, "text": "The term \"corporation\" was used as late as the 18th century in England to refer to such ventures as the East India Company or the Hudson's Bay Company: commercial organizations that operated under royal patent to have exclusive rights to a particular area of trade. In the medieval town, however, corporations were a conglomeration of interests that existed either as a development from, or in competition with, guilds. 
The most notable corporations were in trade and banking.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3520221", "title": "Presidency of Theodore Roosevelt", "section": "Section::::Domestic policy.:Trust busting and regulation.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 812, "text": "In the late-nineteenth century, several large businesses, including Standard Oil, had either bought their rivals or had established business arrangements that effectively stifled competition. Many companies followed the model of Standard Oil, which organized itself as a trust in which several component corporations were controlled by one board of directors. While Congress had passed the 1890 Sherman Antitrust Act to provide some federal regulation of trusts, the Supreme Court had limited the power of the act in the case of \"United States v. E. C. Knight Co.\". By 1902, the 100 largest corporations held control of 40 percent of industrial capital in the United States. Roosevelt did not oppose all trusts, but sought to regulate trusts that he believed harmed the public, which he labeled as \"bad trusts.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "532813", "title": "Technological and industrial history of the United States", "section": "Section::::Effects of industrialization.:Banking, trading, and financial services.\n", "start_paragraph_id": 85, "start_character": 0, "end_paragraph_id": 85, "end_character": 271, "text": "To finance the larger-scale enterprises required during this era, the Stockholder Corporation emerged as the dominant form of business organization. Corporations expanded by combining into trusts, and by creating single firms out of competing firms, known as monopolies.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49261", "title": "Corporate personhood", "section": "Section::::In the United States.:Historical background in the United States.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 747, "text": "During the colonial era, British corporations were chartered by the crown to do business in North America. This practice continued in the early United States. They were often granted monopolies as part of the chartering process. For example, the controversial Bank Bill of 1791 chartered a 20-year corporate monopoly for the First Bank of the United States. Although the Federal government has from time to time chartered corporations, the general chartering of corporations has been left to the states. In the late 18th and early 19th centuries, corporations began to be chartered in greater numbers by the states, under general laws allowing for incorporation at the initiative of citizens, rather than through specific acts of the legislature.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7485", "title": "Corporation", "section": "Section::::History.:Development of modern company law.:Further developments.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 291, "text": "The end of the 19th century saw the emergence of holding companies and corporate mergers creating larger corporations with dispersed shareholders. 
Countries began enacting anti-trust laws to prevent anti-competitive practices and corporations were granted more legal rights and protections.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1061716", "title": "Trusts & Estates (journal)", "section": "Section::::History.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 591, "text": "Christian A. Luhnow founded Trust Companies in March 1904 in response to the rise of the trust banking industry in the United States. Most of the 1,300 United States trust companies then in existence had been formed in the previous 25 years. Yet, according to the magazine back then, \"no other financial institutions of comparatively recent growth have made such giant strides and at the same time are so little understood outside of those immediately interested.\" Trusts in the 1800s were used as business techniques during the era's industrial growth, and these methods were often employed by large companies.\n", "bleu_score": null, "meta": null } ] } ]
null
1t8t8e
Who decided that north was up?
[ { "answer": "Great answer to this question from /u/khosikulu [here](_URL_0_).\n\n > Historian of cartography (among other things) here. The northward orientation has a great deal to do with the importance of northward orientation to compass navigation. Portolans, and later projections aimed at navigation purposes (e.g., Mercator), made note of latitude and direction much more reliably than longitude, so the coastline was easier to fit to an evolving graticule that way (plus it worked better relative to sun- and star-sighting) while the east-west features were still of uncertain size and distance. Smileyman is right that cartographers often didn't put north at the top before the Renaissance and Enlightenment eras and the flowering of European navigation, and that Claudius Ptolemy is probably a big culprit for why it's north-up and not south-up--the power of classical conventions at that moment is hard to deny. It also helps that we're very clearly north of the Equator in the European Atlantic, so that would be the first area depicted to the terminus of navigation.\n > \n > Have a dig in volume 1 of the monumental History of Cartography Project and you may find a bit more. Volume 3 would also discuss some of the specific developments of the Renaissance era but that's still in print only; I'm not even sure Volume 4 is close to release yet.\n\nThere's some more good threads about this topic listed in the [FAQ](_URL_1_).", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "56478", "title": "North", "section": "Section::::Roles of north as prime direction.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 392, "text": "BULLET::::- Up is a metaphor for north. The notion that north should always be up and east at the right was established by the Greek astronomer Ptolemy. The historian Daniel Boorstin suggests that perhaps this was because the better-known places in his world were in the northern hemisphere, and on a flat map these were most convenient for study if they were in the upper right-hand corner.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56478", "title": "North", "section": "Section::::Roles of north as prime direction.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 450, "text": "The visible rotation of the night sky around the visible celestial pole provides a vivid metaphor of that direction corresponding to up. Thus the choice of the north as corresponding to up in the northern hemisphere, or of south in that role in the southern, is, prior to worldwide communication, anything but an arbitrary one. On the contrary, it is of interest that Chinese and Islamic culture even considered south as the proper top end for maps.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15055898", "title": "Sun path", "section": "Section::::Visualization.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 303, "text": "BULLET::::- In the Northern Hemisphere, north is to the left. The Sun rises in the east (far arrow), culminates in the south (to the right) while moving to the right, and sets in the west (near arrow). 
Both rise and set positions are displaced towards the north in midsummer and the south in midwinter.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21208200", "title": "Western world", "section": "Section::::Modern definitions.:Economic definition.\n", "start_paragraph_id": 75, "start_character": 0, "end_paragraph_id": 75, "end_character": 825, "text": "The existence of \"The North\" implies the existence of \"The South\", and the socio-economic divide between North and South. The term \"the North\" has in some contexts replaced earlier usage of the term \"\"the West\"\", particularly in the critical sense, as a more robust demarcation than the terms \"\"West\"\" and \"East\". The North provides some absolute geographical indicators for the location of wealthy countries, most of which are physically situated in the Northern Hemisphere, although, as most countries are located in the northern hemisphere in general, some have considered this distinction equally unhelpful. Modern financial services and technologies are largely developed by Western nations: Bitcoin, most known digital currency is subject to skepticism in the Eastern world whereas Western nations are more open to it.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56478", "title": "North", "section": "Section::::Etymology.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 269, "text": "The word \"north\" is related to the Old High German \"nord\", both descending from the Proto-Indo-European unit *\"ner-\", meaning \"left; below\" as north is to left when facing the rising sun. Similarly, the other cardinal directions are also related to the sun's position.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "219064", "title": "Ojibwe", "section": "Section::::History.:Pre-contact and spiritual beliefs.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 286, "text": "The \"westerly group\" of the \"northern branch\" migrated along the Rainy River, Red River of the North, and across the northern Great Plains until reaching the Pacific Northwest. Along their migration to the west, they came across many \"miigis\", or cowry shells, as told in the prophecy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10100", "title": "Equinox", "section": "Section::::Equinoxes on Earth.:Geocentric view of the astronomical seasons.:Day arcs of the Sun.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 204, "text": "BULLET::::- In the northern hemisphere, north is to the left, the Sun rises in the east (far arrow), culminates in the south (right arrow), while moving to the right and setting in the west (near arrow).\n", "bleu_score": null, "meta": null } ] } ]
null
272pav
what happens when i "zone out" after a few hours of being on the computer?
[ { "answer": "It's sort of like a vegetative state. You're letting your brain run on auto-pilot without paying attention to the world around you. This is why video games warn you to \"take frequent breaks\" these days - they reassert your grasp on reality and keep you from trancing too long.\n\nMy advice? Set something in motion _before_ you get on the computer that will \"interrupt\" you later and snap you out of it. Like setting an alarm in the other room, or telling a friend to come get you for a walk later.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "42674300", "title": "State machine (LabVIEW programming)", "section": "Section::::State machines in LabVIEW.:Simple vending-machine example.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 359, "text": "The \"end\" case is a very simple case that works to simply delay the program to allow the user enough time to check that they have received their change and picked up their item. After 5000 milliseconds (5 seconds) the wait timer is used, up and the program continues back to the start page to wait for another user to come by to begin the process over again.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2183606", "title": "Sleep mode", "section": "Section::::Computers.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 203, "text": "In computers, entering a sleep state is roughly equivalent to \"pausing\" the state of the machine. When restored, the operation continues from the same point, having the same applications and files open.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "905145", "title": "Out-of-box experience", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 374, "text": "Out-of-box experience (OOBE pronounced oo-bee) is the experience a consumer (or user) has when preparing to first use a new product. In relation to computing, this includes the setup process of installing and/or performing initial configuration of a piece of hardware or software on a computer. This generally follows the point-of-sale or the interaction of an expert user.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36157051", "title": "Windows Phone 8", "section": "Section::::Features.:Multitasking.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 367, "text": "A user can switch between \"active\" tasks by pressing and holding the Back button, but any application listed may be suspended or terminated under certain conditions, such as a network connection being established or battery power running low. An app running in the background may also automatically suspend, if the user has not opened it for a long duration of time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5276769", "title": "Whale (computer virus)", "section": "Section::::Description.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 402, "text": "After the file becomes resident in the system memory below the 640k DOS boundary, the operator will experience total system slow down as a result of the virus' polymorphic code. Symptoms include video flicker to the screen writing very slowly. Files may seem to \"hang\" even though they will eventually execute correctly. 
This is just a product of the total system slow down within the system's memory.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "46764212", "title": "Stop Procrastinating", "section": "Section::::Features.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 975, "text": "The software allows users to set a time from one minute to 24 hours and then chose one of three options to block the internet for the time period they have selected. One option allows users to block the internet connection completely but reconnect to the internet by restarting the computer before the time is completed, while a second option prevents users getting back online until the time is up, even if they restart. The software offers a third option called a blacklist, where users can list websites they wish to block, thus still having access to the internet connection, except for the websites they have listed. It was set up by a group of freelance writers and programmers who claim they needed to develop a tool to help cut online distraction. The application is described as helping students, writers, self-employed workers, businesses, office workers, and teenagers who want to block the internet in order to complete their homework, and as a parental control.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22896447", "title": "Live migration", "section": "Section::::VM memory migration.:Pre-copy memory migration.:Stop-and-copy phase.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 528, "text": "After the warm-up phase, the VM will be stopped on the original host, the remaining dirty pages will be copied to the destination, and the VM will be resumed on the destination host. The time between stopping the VM on the original host and resuming it on destination is called \"down-time\", and ranges from a few milliseconds to seconds according to the size of memory and applications running on the VM. There are some techniques to reduce live migration down-time, such as using probability density function of memory change.\n", "bleu_score": null, "meta": null } ] } ]
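The practical suggestion in the answer, setting something in motion before you sit down that will interrupt you later, is easy to automate. A minimal sketch in Python; the 50-minute interval and the terminal bell are arbitrary choices for illustration, not recommendations from the answer.

```python
# Pre-commitment break timer: start it before you sit down, and it
# interrupts you at fixed intervals so you don't trance for hours.
import time

BREAK_INTERVAL_MIN = 50  # arbitrary; pick whatever suits you

def break_reminder(rounds: int = 4) -> None:
    for i in range(1, rounds + 1):
        time.sleep(BREAK_INTERVAL_MIN * 60)
        # "\a" rings the terminal bell; swap in a desktop notification
        # or an alarm in another room for a stronger interruption.
        print(f"\a[{i}] {BREAK_INTERVAL_MIN} minutes up -- stand up and look away")

if __name__ == "__main__":
    break_reminder()
```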
null
4xoqze
how do humans lose 100 hairs per day and still maintain a full head of long locks?
[ { "answer": "Most people have over 100,000 hairs on their head. The number you lose in the shower is easily replaced without notice in that big of a crowd. With all the hairs growing and falling out all the time the total number stays pretty steady.", "provenance": null }, { "answer": "Because if I have 100,000 hairs and lose 100 every day it would take 1000 days for me to run out IF I wasn't making more. For hair this is about 3 years which in that time My hair would have grown an additional ~18 inches or ~6 inches per year or half an inch per month.\n\nAt that rate, you would have to be losing more than 10x that amount before you noticed your hair was thinning and even then your hairs growth rate might cover it up.", "provenance": null }, { "answer": "Actually we lose 300 to 500 hairs a day. We still have hair because our hair has 3 stages of growth. Anagen catagen and telogen and all hair is in different stages at different times so you will lose hair but new hair replaces it. We also have so many hairs that you wont notice when those hairs fall out.", "provenance": null }, { "answer": "Two reasons \n\n1: You've got a lot of hair all over and your head, regardless of the common assumption, is not the source of all hairs lost. You have Pubic hair, body hair, nose hair, eyelashes, facial hair and so forth. \n\n\n2: Your hair regenerates.\n\nThe level of loss is not greater than the level of regeneration therefore you don't lose it all. IF the rate of regeneration was lower than the shed rate you would run out.", "provenance": null }, { "answer": "You should probably be picking up that hair from the drain after every shower. It clogs the pipes.", "provenance": null }, { "answer": "Btw, if you're shedding A LOT, you might want to get your iron levels checked to make sure you're not anemic or something.", "provenance": null }, { "answer": "Think of it like a relay race. For every one hair that has fallen out, you have another that is about halfway grown in, and another starting to grow. It doesn't all grow at the same time.\n\nThis is why you have to go back every six weeks for 3-6 sessions during hair removal. You can zap the shot out of the hair that is present, but in six weeks, a whole set of new hair follicles will have sprouted.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "26060462", "title": "Human hair growth", "section": "Section::::Growth inhibitors and disorders.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 273, "text": "In most people, scalp hair growth will halt due to follicle devitalization after reaching a length of generally two or three feet. Exceptions to this rule can be observed in individuals with hair development abnormalities, which may cause an unusual length of hair growth.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26060462", "title": "Human hair growth", "section": "Section::::Growth inhibitors and disorders.:Radiation therapy to the head.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 460, "text": "Human hair follicles are very sensitive to the effects of radiation therapy administered to the head, most commonly used to treat cancerous growths within the brain. Hair shedding may start as soon as two weeks after the first dose of radiation and will continue for a couple of weeks. Hair follicles typically enter the resting telogen phase and regrowth should commence 2.5 to 3 months after the hair begins to shed. 
Regrowth may be sparser after treatment.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26060462", "title": "Human hair growth", "section": "Section::::Growth cycle.:Telogen phase.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 504, "text": "During the telogen or resting phase (also known as shedding phase) the follicle remains dormant for one to four months. Ten to fifteen percent of the hairs on one's head are in this phase of growth at any given time. In this phase the epidermal cells lining the follicle channel continue to grow as normal and may accumulate around the base of the hair, temporarily anchoring it in place and preserving the hair for its natural purpose without taxing the body's resources needed during the growth phase.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "200129", "title": "Hair loss", "section": "Section::::Signs and symptoms.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 482, "text": "People have between 100,000 and 150,000 hairs on their head. The number of strands normally lost in a day varies but on average is 100. In order to maintain a normal volume, hair must be replaced at the same rate at which it is lost. The first signs of hair thinning that people will often notice are more hairs than usual left in the hairbrush after brushing or in the basin after shampooing. Styling can also reveal areas of thinning, such as a wider parting or a thinning crown.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8416397", "title": "Long hair", "section": "Section::::Hair lengths.:Maximum hair length.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 665, "text": "The maximum terminal hair length depends on the length of the anagen (period of hair growth) for the individual. Waist-length hair or longer is only possible to reach for people with long anagen. The anagen lasts between 2 and 7 years, for some individuals even longer, and is followed by shorter catagen (transition) and telogen (resting) periods. At any given time, about 85% of hair strands are in anagen. The fibroblast growth factor 5 (FGF5) gene affects the hair cycle in mammals including humans; blocking FGF5 in the human scalp (by applying a herbal extract that blocked FGF5) extends the hair cycle, resulting in less hair fall and increased hair growth.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "200129", "title": "Hair loss", "section": "Section::::Pathophysiology.\n", "start_paragraph_id": 61, "start_character": 0, "end_paragraph_id": 61, "end_character": 283, "text": "Normally, about 40 (0–78 in men) hairs reach the end of their resting phase each day and fall out. When more than 100 hairs fall out per day, clinical hair loss (telogen effluvium) may occur. A disruption of the growing phase causes abnormal loss of anagen hairs (anagen effluvium).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4280639", "title": "Hair hang", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 383, "text": "Many people underestimate the tensile strength of hair. A single strand can potentially carry a weight of up to 100 grams; in theory, with proper technique, a full head of human hair could eventually hold between 5,600 kg and 8,400 kg (12,345 to 18,518 lbs) without breaking individual hairs or pulling out any follicles. 
However, the act still hurts, especially for new performers.\n", "bleu_score": null, "meta": null } ] } ]
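The arithmetic in the second answer is easy to verify. A minimal sketch in Python, using only the round numbers quoted in the answers (100,000 hairs, 100 shed per day, half an inch of growth per month):

```python
# Back-of-the-envelope check of the numbers quoted in the answers above.
FOLLICLES = 100_000        # typical scalp hair count
SHED_PER_DAY = 100         # often-quoted daily shed rate
GROWTH_IN_PER_MONTH = 0.5  # inches of growth per month

# Daily loss as a fraction of the whole head:
print(f"daily loss: {SHED_PER_DAY / FOLLICLES:.2%} of all hairs")   # 0.10%

# Days to run out if nothing regrew (in reality each shed hair is
# replaced, so the steady-state count stays roughly constant):
days = FOLLICLES // SHED_PER_DAY
print(f"days to run out with zero regrowth: {days}")                # 1000
print(f"that is about {days / 365:.1f} years")                      # ~2.7

# Growth over the ~3 years the answer rounds to:
print(f"growth at 0.5 in/month over 3 years: {3 * 12 * GROWTH_IN_PER_MONTH:.0f} inches")
```

The 1,000 days works out to roughly 2.7 years, which the answer rounds up to 3; at half an inch per month that gives the ~18 inches of growth it cites.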
null
1pxzpq
How was bidirectional travel handled on the transcontinental railroad?
[ { "answer": "Many, many rail lines have only one track. Passing sidings are installed at regular intervals, and the train orders specify things like \"Train #97 take siding at Danville to await passage of eastbound train #38.\" Unlike earlier railroads, the transcontinental was accompanied the entire length by extension of telegraph lines. Thus revised train orders—using updated info about the location of other trains on the line—could be given the conductor at any staffed station.\n\nIn earlier decades, some lines used \"timetable control:\" during a certain period only the eastbound train had authority to use a certain stretch of track. In later decades, electric-light signaling systems would be installed showing whether the \"track block\" ahead was clear, and usually the status of the block beyond that.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "252507", "title": "American frontier", "section": "Section::::The Antebellum West.:The Pony Express and the telegraph.\n", "start_paragraph_id": 93, "start_character": 0, "end_paragraph_id": 93, "end_character": 540, "text": "In 1861 Congress passed the Land-Grant Telegraph Act which financed the construction of Western Union's transcontinental telegraph lines. Hiram Sibley, Western Union's head, negotiated exclusive agreements with railroads to run telegraph lines along their right-of-way. Eight years before the transcontinental railroad opened, the First Transcontinental Telegraph linked Omaha, Nebraska and San Francisco (and points in-between) on October 24, 1861. The Pony Express ended in just 18 months because it could not compete with the telegraph.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43698907", "title": "Overland Limited (UP train)", "section": "Section::::History.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 1469, "text": "The first contiguous transcontinental rail service on \"The Great American Over-land Route\" between the eastern terminus of the Union Pacific on the Missouri River at Council Bluffs, Iowa/Omaha, Nebraska via Ogden, Utah (CPRR) and Sacramento (WPRR/CPRR) to the San Francisco Bay at the Oakland Wharf was opened over its full length in late 1869. At that time just one daily passenger express train (and one slower mixed train) ran in each direction taking 102 hours to cover that 1,912 miles of the just completed Pacific Railroad route. The first class fare between Council Bluffs/Omaha and Sacramento (the end of the Central Pacific Railroad proper) was $131.50. The additional fares on connecting trains east of Omaha/Council Bluffs on other lines were $20.00 to St. Louis, $22.00 to Chicago, $42.00 to New York, and $45.00 to Boston. Round trip first class 30-day excursion fares between Omaha and San Francisco in 1870 ranged from $170 per person for groups of 20 to 24 to $130 for groups of 50 or more plus $14 for each double sleeping berth. During the decade of the 1870s the schedule was shortened by only 3 hours. In 1881 the scheduled time for the by then 43 mile shorter trip from Council Bluffs to San Francisco was about 98 hours. 
The first class fare had dropped to $100 with the combined charges for sleeping car accommodations on the Pullman's (UP) and Silver (CP) Palace Cars totaling $14 for a double berth and $52 for a Drawing Room that slept four.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3952482", "title": "Port of San Francisco", "section": "Section::::History.:Belt Railroad.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 795, "text": "In 1890 the port commissioners began developing a series of switchyards and warehouses on the reclaimed land for use of the San Francisco Belt Railroad, a line of over fifty miles that connected every berth and every pier with the industrial parts of the city and railways of America with all the trade routes of the Pacific. For a decade or more, railcar ferry transfers on steamers were the means of carrying railcars to the transcontinental systems. Later, in 1912, the belt line was driven across Market Street in front of the Ferry Building to link the entire commercial waterfront with railways both south and north and across the continent. The line was extended north along Jefferson Street through the tunnel to link up with U.S. Transport Docks at Fort Mason and south to China Basin.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3052", "title": "Alameda, California", "section": "Section::::History.:City development.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 375, "text": "On September 6, 1869, the Alameda Terminal made history; it was the site of the arrival of the first train via the First Transcontinental Railroad to reach the shores of San Francisco Bay, thus achieving the first coast to coast transcontinental railroad in North America. The transcontinental terminus was switched to the Oakland Pier two months later, on November 8, 1869.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36541", "title": "First Transcontinental Railroad", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 512, "text": "The first transcontinental rail passengers arrived at the Pacific Railroad's original western terminus at the Alameda Terminal on September 6, 1869, where they transferred to the steamer \"Alameda\" for transport across the Bay to San Francisco. The road's rail terminus was moved two months later to the Oakland Long Wharf, about a mile to the north, when its expansion was completed and opened for passengers on November 8, 1869. Service between San Francisco and Oakland Pier continued to be provided by ferry.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51916", "title": "Transcontinental railroad", "section": "Section::::Northern America.:United States of America.:The Gould System.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 895, "text": "George J. Gould attempted to assemble a truly transcontinental system in the 1900s. The line from San Francisco, California, to Toledo, Ohio, was completed in 1909, consisting of the Western Pacific Railway, Denver and Rio Grande Railroad, Missouri Pacific Railroad, and Wabash Railroad. 
Beyond Toledo, the planned route would have used the Wheeling and Lake Erie Railroad (1900), Wabash Pittsburgh Terminal Railway, Little Kanawha Railroad, West Virginia Central and Pittsburgh Railway, Western Maryland Railroad, and Philadelphia and Western Railway, but the Panic of 1907 strangled the plans before the Little Kanawha section in West Virginia could be finished. The Alphabet Route was completed in 1931, providing the portion of this line east of the Mississippi River. With the merging of the railroads, only the Union Pacific Railroad and the BNSF Railway remain to carry the entire route.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5226320", "title": "Henry W. Corbett", "section": "Section::::Oregon territory.:H. W. Corbett & Co..\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 728, "text": "The Atlantic shipping terminal was in Colon, Panama. The Pacific terminal was in Panama City. The 48-mile double track railway was the first transcontinental railway and an engineering marvel of the era. Until the opening of the Panama Canal in 1914, the Panama Railway Company carried the heaviest volume of freight per unit length of any railroad in the world. H. W. Corbett and others from Portland would then use it to get back and forth to the connecting ships to and from the East, rather than crossing on mule back. When the transcontinental Union Pacific Railroad to San Francisco was completed on May 10, 1869, this more direct route was then used for shipping and travel connecting to Portland by boat or stage coach.\n", "bleu_score": null, "meta": null } ] } ]
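Single-track operation with passing sidings, as the answer describes, is essentially mutual exclusion over stretches of track: a dispatcher grants a train authority to enter a block only if no opposing train holds it. Below is a minimal sketch in Python; the block name and train numbers echo the answer's example, but the dispatcher logic is an invented simplification of real train-order practice.

```python
# Toy single-track dispatcher: each block between sidings holds at most
# one train; an opposing train is ordered to take a siding and wait.
class Dispatcher:
    def __init__(self, blocks):
        self.occupant = {block: None for block in blocks}

    def request(self, train, block):
        """Grant authority to enter `block` only if it is clear."""
        holder = self.occupant[block]
        if holder is None:
            self.occupant[block] = train
            return f"{train}: proceed into {block}"
        return f"{train}: take siding, await passage of {holder}"

    def release(self, train, block):
        """Called when a train clears the far end of the block."""
        if self.occupant[block] == train:
            self.occupant[block] = None

dispatcher = Dispatcher(["Danville block"])
print(dispatcher.request("eastbound #38", "Danville block"))  # proceeds
print(dispatcher.request("westbound #97", "Danville block"))  # must wait
dispatcher.release("eastbound #38", "Danville block")
print(dispatcher.request("westbound #97", "Danville block"))  # now clear
```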
null
7ymx8o
why did adolf hitler consider native americans equal to “aryans”?
[ { "answer": "Germans used to, and to this day often do, have a very favorable view of native americans. Part of the reason for that are the works of writer Karl May, an \"adventure novelist\" who wrote a lot about the \"nobel\" heritage and life of native americans - from a totally racist viewpoint. A good but not ELI5-Explainer is over at _URL_0_ and at _URL_1_\n\nAnother part of the explanation is that Hitler personally had a negative view of America, going as far as saying half of the US was \"jewified\", the other half \"negrified\". \n\nSo really it's a mix of a popular meme and a negative view of the US society.", "provenance": null }, { "answer": "Noble Savage type of thing. He was a member of the Rune Society (sp?). They believed in the restoration of the ancient Germanic way. If memory serves me, he was president of the group at one time. Some of his initial support for becoming Chancellor may have come from this association.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1590630", "title": "New Order (Nazism)", "section": "Section::::Plans for other parts of the world outside Europe.:Hitler's plans for North America.\n", "start_paragraph_id": 73, "start_character": 0, "end_paragraph_id": 73, "end_character": 1171, "text": "U.S. pro-Nazi movements such as the Friends of the New Germany and the German-American Bund played no role in Hitler's plans for the country, and received no financial or verbal support from Germany after 1935. However, certain Native American advocate groups, such as the fascist-leaning American Indian Federation, were to be used to undermine the Roosevelt administration from within by means of propaganda. In addition, in an effort to gain Native American support, the Nazis classified the Sioux, and by extension all Native Americans, to be Aryans, a theory echoed in the sympathetic portrayal of the Natives in German westerns of the 1930s such as \"Der Kaiser von Kalifornien\". Nazi propagandists went as far as declaring that Germany would return expropriated land to the Indians, while Goebbels predicted they possessed little loyalty to the U.S. and would rather rebel than fight against Germany. As a boy, Hitler had been an enthusiastic reader of Karl May westerns and he told Albert Speer that he still turned to them for inspiration as an adult when he was in a tight spot; the Karl May westerns contained highly sympathetic portrayals of American Indians.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2162835", "title": "Racism in the United States", "section": "Section::::Native Americans.:Reservation marginalization.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 652, "text": "The treatment of the Native Americans was admired by the Nazis. Nazi expansion eastward was accompanied with invocation of America's colonial expansion westward under the banner of Manifest Destiny, with the accompanying wars on the Native Americans. In 1928, Hitler praised Americans for having \"gunned down the millions of Redskins to a few hundred thousand, and now kept the modest remnant under observation in a cage\" in the course of founding their continental empire. 
On Nazi Germany's expansion eastward, Hitler stated, \"Our Mississippi [the line beyond which Thomas Jefferson wanted all Indians expelled] must be the Volga, and not the Niger.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43644678", "title": "Native Americans in German popular culture", "section": "Section::::\"Indianthusiasm\", hobbyists and politics.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 959, "text": "There was a widespread cultural passion for Native Americans in Germany throughout the 19th and 20th centuries. \"Indianthusiasm\" contributed to the evolution of German national identity. Imagery of Native Americans was appropriated in Nazi propaganda and used both against the US and to promote a \"holistic understanding of Nature\" among Germans, which gained widespread support from various segments of the political spectrum in Germany. The connection between anti-American sentiment and sympathetic feelings toward the underprivileged but authentic Indians is common in Germany, and it was to be found among both Nazi propagandists such as Goebbels and left-leaning writers such as Nikolaus Lenau as well. During the German Autumn in 1977, an anonymous text by a leftist \"Göttinger Mescalero\" spoke positively of the murder of German attorney general Siegfried Buback and used the positive image of \"Stadtindianer\" (Urban Indians) within the radical left.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2162835", "title": "Racism in the United States", "section": "Section::::Consequences.:Institutional racism.:Use of the American racist model.\n", "start_paragraph_id": 171, "start_character": 0, "end_paragraph_id": 171, "end_character": 1110, "text": "Hitler and other Nazis praised America's system of institutional racism and they also believed that it was the model which should be followed in their Reich. In particular, they believed that it was the model for the expansion of German territory into the territory of other nations and the elimination of their indigenous inhabitants, for the implementation of racist immigration laws which banned some races, and laws which denied full citizenship to blacks, which they also wanted to implement against Jews. Hitler's book \"Mein Kampf\" extolled America as the only contemporary example of a country with racist (\"völkisch\") citizenship statutes in the 1920s, and Nazi lawyers made use of the American models in crafting laws for Nazi Germany. U.S. citizenship laws and anti-miscegenation laws directly inspired the two principal Nuremberg Laws—the Citizenship Law and the Blood Law. Establishing a restrictive entry system for Germany, Hitler admiringly wrote: “The American Union categorically refuses the immigration of physically unhealthy elements, and simply excludes the immigration of certain races.”\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43644678", "title": "Native Americans in German popular culture", "section": "Section::::Background.:Projections of sentiments.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 785, "text": "H. Glenn Penny states a striking sense, for over two centuries, of affinity among Germans for their ideas of what American Indians are like. 
According to him, those affinities stem from German polycentrism, notions of tribalism, longing for freedom, and a melancholy sense of \"shared fate.\" In the 17th and 18th centuries, German intellectuals' image of Native American were based on earlier heroes such as those of the Greeks, the Scythians, or the Polish struggle for independence (as in \"Polenschwärmerei\") as a base for their projections. The then popular recapitulation theory on the evolution of ideas was also involved. Such sentiments underwent ups and downs. Philhellenism, rather strong around 1830, faced a setback when the actual Greeks did not fulfill the classic ideals.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43644678", "title": "Native Americans in German popular culture", "section": "Section::::German-American heritage.\n", "start_paragraph_id": 69, "start_character": 0, "end_paragraph_id": 69, "end_character": 819, "text": "The harsh condemnation by Marta Carlson, a Native American activist, of Germans for getting pleasure from \"something their whiteness has participated in destroying\", is not shared by others. As with Irish or Scottish immigrants, the \"whiteness\" of German immigrants was not a given for WASP Americans. Both Germans and Native Americans had to regain some of their customs, as a direct heritage tradition was no longer in place. It is however still somewhat disturbing for both sides when German hobby Indians meet Native German enthusiasts. There are allegations of plastic shamanism versus mockery about Native Americans excluding non-Indians and banning alcohol at their events. German (and Czech) hobbyists' concept of multiculturalism includes the inaleniable right to keep and drink beer in their tipis or kohtes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "42277", "title": "Aryan race", "section": "Section::::Aryanism.:Neo-Nazism.\n", "start_paragraph_id": 54, "start_character": 0, "end_paragraph_id": 54, "end_character": 295, "text": "Since the military defeat of Nazi Germany by the Allies in 1945, some neo-Nazis have developed a more inclusive definition of \"Aryan\", claiming that the peoples of Western Europe are the closest descendants of the ancient Aryans, with Nordic and Germanic peoples being the most \"racially pure.\"\n", "bleu_score": null, "meta": null } ] } ]
null
6478p7
How are gaseous elements harvested and purified?
[ { "answer": "They aren't harvesting cow burps. They are taking all the livestock dung, putting it in a huge airtight container and letting bacteria digest the organic matter. Methane is a biproduct of the bacterial digestion process. They then collect the methane, compress it, dry it and then burn it.\n\n_URL_0_", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "5879", "title": "Caesium", "section": "Section::::Production.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 384, "text": "The metal can also be isolated by electrolysis of fused caesium cyanide (CsCN). Exceptionally pure and gas-free caesium can be produced by thermal decomposition of caesium azide , which can be produced from aqueous caesium sulfate and barium azide. In vacuum applications, caesium dichromate can be reacted with zirconium to produce pure caesium metal without other gaseous products.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "900", "title": "Americium", "section": "Section::::Synthesis and extraction.:Metal generation.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 1416, "text": "Most synthesis routines yield a mixture of different actinide isotopes in oxide forms, from which isotopes of americium can be separated. In a typical procedure, the spent reactor fuel (e.g. MOX fuel) is dissolved in nitric acid, and the bulk of uranium and plutonium is removed using a PUREX-type extraction (Plutonium–URanium EXtraction) with tributyl phosphate in a hydrocarbon. The lanthanides and remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction, to give, after stripping, a mixture of trivalent actinides and lanthanides. Americium compounds are then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A large amount of work has been done on the solvent extraction of americium. For example, a 2003 EU-funded project codenamed \"EUROPART\" studied triazines and other compounds as potential extraction agents. A \"bis\"-triazinyl bipyridine complex was proposed in 2009 as such a reagent is highly selective to americium (and curium). Separation of americium from the highly similar curium can be achieved by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone, at elevated temperatures. Both Am and Cm are mostly present in solutions in the +3 valence state; whereas curium remains unchanged, americium oxidizes to soluble Am(IV) complexes which can be washed away.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "248159", "title": "Inert gas", "section": "Section::::Production.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 471, "text": "The inert gases are obtained by fractional distillation of air, with the exception of helium which is separated from a few natural gas sources rich in this element, through cryogenic distillation or membrane separation. For specialized applications, purified inert gas shall be produced by specialized generators on-site. They are often used by chemical tankers and product carriers (smaller vessels). 
Benchtop specialized generators are also available for laboratories.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5879", "title": "Caesium", "section": "Section::::Production.\n", "start_paragraph_id": 35, "start_character": 0, "end_paragraph_id": 35, "end_character": 332, "text": "Mining and refining pollucite ore is a selective process and is conducted on a smaller scale than for most other metals. The ore is crushed, hand-sorted, but not usually concentrated, and then ground. Caesium is then extracted from pollucite primarily by three methods: acid digestion, alkaline decomposition, and direct reduction.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23370463", "title": "Kefir", "section": "Section::::Production.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 371, "text": "The resulting fermented liquid, may be drunk, used in recipes, or kept aside in a sealed container for additional time to undergo a secondary fermentation. Because of its acidity the beverage should not be stored in reactive metal containers such as aluminium, copper, or zinc, as these may leach into it over time. The shelf life, unrefrigerated, is up to thirty days. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "248159", "title": "Inert gas", "section": "Section::::Applications.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 525, "text": "Inert gases are often used in the chemical industry. In a chemical manufacturing plant, reactions can be conducted under inert gas to minimize fire hazards or unwanted reactions. In such plants and in oil refineries, transfer lines and vessels can be purged with inert gas as a fire and explosion prevention measure. At the bench scale, chemists perform experiments on air-sensitive compounds using air-free techniques developed to handle them under inert gas. Helium, neon, argon, krypton, xenon, and radon are inert gases.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3433678", "title": "Kipp's apparatus", "section": "Section::::Further gas treatments.\n", "start_paragraph_id": 51, "start_character": 0, "end_paragraph_id": 51, "end_character": 228, "text": "Disposal of the gases can be done by burning the flammable ones (carbon monoxide, hydrogen, hydrocarbons), absorbing them in water (ammonia, hydrogen sulfide, sulfur dioxide, chlorine), or reacting them with a suitable reagent.\n", "bleu_score": null, "meta": null } ] } ]
null
19mthj
reddit, can you explain to me the relationship between megapixels, resolution, and screen size?
[ { "answer": "A screen's resolution tells you how many individual pixels or dots of light are there both horizontally and vertically.\n\nSo 1920x1080 means there's 1920 pixels horizontally on the screen and 1080 vertically. If that's spread out over a screen that's 42 inches across that just means there's more room to put each individual pixel in. But you get the same amount of pixels on a screen that's 42 inches diagonally or 4 inches diagonally.\n\nThe same goes for photos. This means that if you're looking at a very lage image, ( lots of pixels ) on a screen with a smaller resolution your screen won't be displaying all the pixels that there are in the image.\nBut you can zoom in on a particular part of an image to reveal the extra pixels. So the image you're viewing remains sharp.", "provenance": null }, { "answer": "Now an addition with megapixels, this is something introduced by the camera industry when digital camera's where still a new thing. \nWhen you're talking about megapixels you're talking about the total sum of the pixels in an image.\nSo 1920x1080 gives you about 2 million or 2 megapixels.\nA high quality print on a sheet of A4 stencil needs about 2-4 megapixels.\n\nAfter that the image quality really improves when you're using high quality lenses and other technical stuff most people (including me) dont'really understand.\nManufacturers know this and they see something people DO understand. Bigger numbers. So it was an easy step to focus on the size of the image and use that to advertise their products.\n\nA 12 megapixel image is fun, But unless you want to look really closely at a small detail in a picture, there's not much use for it.", "provenance": null }, { "answer": "I'd just like to point out that 720p is actually 1**2**80x720.\n\nOther than that, it was all explained: megapixels and resolution relate to the number of pixels regardless of display size, and screen size relates to the display size regardless of the pixel count. PPI is the relation between the two.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "601399", "title": "Display resolution", "section": "Section::::Considerations.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 537, "text": "The eye's perception of \"display resolution\" can be affected by a number of factors see image resolution and optical resolution. One factor is the display screen's rectangular shape, which is expressed as the ratio of the physical picture width to the physical picture height. This is known as the aspect ratio. A screen's physical aspect ratio and the individual pixels' aspect ratio may not necessarily be the same. An array of 1280 × 720 on a display has square pixels, but an array of 1024 × 768 on a 16:9 display has oblong pixels.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28385304", "title": "Graphics display resolution", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 437, "text": "The graphics display resolution is the width and height dimension of an electronic visual display device, such as a computer monitor, in pixels. Certain combinations of width and height are standardized and typically given a name and an initialism that is descriptive of its dimensions. 
A higher display resolution in a display of the same size means that displayed photo or video content appears sharper, and pixel art appears smaller.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "644662", "title": "Pixel density", "section": "Section::::Computer displays.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 313, "text": "The PPI/PPCM of a computer display is related to the size of the display in inches/centimetres and the total number of pixels in the horizontal and vertical directions. This measurement is often referred to as dots per inch, though that measurement more accurately refers to the resolution of a computer printer.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "433278", "title": "Display size", "section": "Section::::History.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 314, "text": "The size of a screen is usually described by the length of its diagonal, which is the distance between opposite corners, usually in inches. It is also sometimes called the physical image size to distinguish it from the \"logical image size,\" which describes a screen's display resolution and is measured in pixels.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "433278", "title": "Display size", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 488, "text": "On 2D displays, such as computer monitors and TVs, the display size (or viewable image size or VIS) is the physical size of the area where pictures and videos are displayed. The size of a screen is usually described by the length of its diagonal, which is the distance between opposite corners, usually in inches. It is also sometimes called the physical image size to distinguish it from the \"logical image size,\" which describes a screen's display resolution and is measured in pixels.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3616597", "title": "Digital photography", "section": "Section::::The digital camera.:Performance metrics.:Pixel counts.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 681, "text": "The relative increase in detail resulting from an increase in resolution is better compared by looking at the number of pixels across (or down) the picture, rather than the total number of pixels in the picture area. For example, a sensor of 2560 × 1600 sensor elements is described as \"4 megapixels\" (2560 × 1600= 4,096,000). Increasing to 3200 × 2048 increases the pixels in the picture to 6,553,600 (6.5 megapixels), a factor of 1.6, but the pixels per cm in the picture (at the same image size) increases by only 1.25 times. A measure of the comparative increase in linear resolution is the square root of the increase in area resolution, i.e., megapixels in the entire image.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "644662", "title": "Pixel density", "section": "Section::::Computer displays.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 879, "text": "For example, a 15-inch (38 cm) display whose dimensions work out to 12 inches (30.48 cm) wide by 9 inches (22.86 cm) high, capable of a maximum 1024×768 (or XGA) pixel resolution, can display around 85 PPI/33.46PPCM in both the horizontal and vertical directions. 
This figure is determined by dividing the width (or height) of the display area in pixels by the width (or height) of the display area in inches. It is possible for a display to have different horizontal and vertical PPI measurements (e.g., a typical 4:3 ratio CRT monitor showing a 1280×1024 mode computer display at maximum size, which is a 5:4 ratio, not quite the same as 4:3). The apparent PPI of a monitor depends upon the screen resolution (that is, the number of pixels) and the size of the screen in use; a monitor in 800×600 mode has a lower PPI than does the same monitor in a 1024×768 or 1280×960 mode.\n", "bleu_score": null, "meta": null } ] } ]
null
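The PPI and megapixel arithmetic quoted above (record 19mthj) boils down to two one-line formulas. Below is a minimal Python sketch of that math; the function names `megapixels` and `ppi` are illustrative, not from any cited source. It reproduces the ~85 PPI figure given in the Pixel density excerpt.

```python
import math

def megapixels(width_px: int, height_px: int) -> float:
    """Total pixel count in millions, the figure camera makers advertise."""
    return width_px * height_px / 1_000_000

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: pixels along the diagonal divided by diagonal inches."""
    return math.hypot(width_px, height_px) / diagonal_in

print(megapixels(1920, 1080))         # 2.0736, i.e. about 2 MP at any screen size
print(round(ppi(1920, 1080, 42), 1))  # ~52.5 PPI on a 42-inch TV
print(round(ppi(1920, 1080, 4), 1))   # ~550.7 PPI on a 4-inch phone
print(round(1024 / 12, 1))            # 85.3, the 12-inch-wide XGA panel above
```

The same pixel grid gives very different sharpness depending on the physical size it is stretched over, which is exactly the megapixels-versus-screen-size distinction the answers draw.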
52fzsq
why if co2 is only .038% of atmospheric gases, does it have so much impact on global warming?
[ { "answer": "The most abundant gases - O2, N2, argon - don't absorb heat. CO2 and H2O do absorb heat so when you increase them, you are directly increasing the greenhouse gas effect because you are increasing the most abundant heat absorbing molecule (with H2O). \n\nThat's very crude, but it's ELI5. Also as an aside, it's somewhat of a diversion tactic to say, \"it's only a small amount therefore it can't be that important.\" Skeptics/denialists love this tactic but it's pretty flawed. Think of it this way: It won't take but a very small amount of cyanide (less than 1% of body weight) in my body to notice it. ", "provenance": null }, { "answer": "All the other reasons here are basically true. But I think there is one big thing they left out. The reason such a small amount of CO2 or methane can make such a huge difference is because there is SO MUCH energy from the sun striking the planet every day (over 12,000 gigawatt-hours, more than 20,000x what the human race consumes) that even a very small change in how much gets absorbed can lead to a significant temperature change.", "provenance": null }, { "answer": "You know your favourite food fish fingers? Well, there are tiny tiny amounts of something called arsenic in there. You don't notice because it's so small but if you increased the amount of arsenic it would make your tummy very unhappy and you would go to sleep for a long time. It would still be a tiny part of your fish fingers but some things have a powerful effect even when they are small relative to everything else around it. CO2 is like that too. A small increase won't effect the climate very much but doubling it will make the earths tummy very upset and it will get a temperature. ", "provenance": null }, { "answer": "It doesn't. There are many other things in the atmosphere which trap heat better than CO2 like water vapor and methane for instance. However, since removing water from the atmosphere is a sysiphian task and we kind of need it for weather, it's cheaper to curb CO2 emissions. \n\nThe earths atmosphere used to have lots of CO2 in it and little oxygen before plants came along. The plants and algae over billions of years reduced most of this CO2 by using it to build their bodies with the carbon, while freeing the oxygen, and they take the carbon with them when they die. Just as long as they don't decompose or get burned. \n\nSo the coal, wood, and oil we use for fuel is billion year old carbon compounds that used to be our ancient atmosphere, got converted to plants which died, got buried over time, and now re-join the atmosphere again when we dig it up and use it for fuel. \n\n", "provenance": null }, { "answer": "First, a small increase can make a bigger difference than it might seem by numbers alone, here's a good demonstration:\n\n_URL_1_\n\n\nNext remember that our atmosphere is about 300 MILES deep (_URL_0_). Yes it gets thinner the higher you go but even at 30,000 feet (over 5 and a half miles) there's more than enough for a huge plane to easily and efficiently get lift. So even if you are only slightly increasing the chance of a beam of light interacting with some CO2, over the course of the traverse through the atmosphere that still piles up to a lot of opportunities for interaction.\n\nFinally as \\u\\ex_stripper said, theres a LOT of energy coming from the sun. 
So if you increase the amount of energy absorbed by a tiny tiny percentage, it turns into a lot of energy relative to the norm.", "provenance": null }, { "answer": "Congratulation, you've discovered a cascading effect. These are some of the most amazing things to study and there are areas of expertise for studying them in almost every field of science (Bio, Medicine, Computer Science. Cognitive Science, just to name a few)\n\nThere are lots of things like this, where a small move one direction or another can change things massively.\n\nexample: imagine that you stabbed your 3rd grade teacher with a knife. They survived, and only suffered a small scar. Do you think this small change would have a dramatic effect on your life 10, 15, or 20 years later? Of course. We have an intuitive sense about how large connected systems (like the flow of our life, with one act leading to the next) can be greatly impacted by small changes at specific points.\n\nThe same is true for the very connected system of the climate. A small change in how much heat is trapped at the polls causes more ice to melt, causing there to be less snow, which now is no longer reflecting the sun, causing more heat to get trapped. These sort of chains are easy to understand in isolation, but can feel almost magical when viewed as a group.\n\nWhy are do they feel 'magical' as a group? Humans are really bad at understanding compound effects. (This is why people don't intuitively understand interest rates) I've never heard anyone claim they know why this is the case, but it is. So you are in good company. ", "provenance": null }, { "answer": "CO2 (as well as methane and water vapor) is just really good at absorbing energy from the sun that would otherwise just hit the surface and reflect/re-emit back into space. Basically, it keeps more of the sun's energy for a longer period in earth's atmosphere, hence it increases the temperature.\n\nHere is a graph that shows absorption spectra of atmospheric gases: _URL_0_\n\nYou can clearly see that nitrogen(dark green) does not absorb much at all and oxygen(dark blue) also just has 5 relatively discrete absorption lines. CO2(red) absorbs a much wider spectrum in infrared and with 1,000-1,000,000 times the intensity. That way 0.04% CO2 absorbs (more) energy (than) equivalent to 40-40,000% oxygen. \n\nYou can see methane(yellow) and especially water(light green, responsible for 50-75% of greenhouse effect) do absorb a lot of, too, but methane concentration is much lower (about 1/500) and cloud cover actually reflects a lot of energy. Also, water is more of a passive factor/not (directly) caused by humans and levels vary a lot in the atmosphere.\n\nEDIT: Oxygen and nitrogen do absorb most of shorter than visible wavelengths but that accounts only for a very small fraction of the sun's output.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "780457", "title": "Sherwood B. 
Idso", "section": "Section::::Climate science.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 444, "text": "In the 1998 paper, \"CO2-induced global warming: a skeptic's view of potential climate change\" Idso said: \"Several of these cooling forces have individually been estimated to be of equivalent magnitude, but of opposite sign, to the typically predicted greenhouse effect of a doubling of the air’s CO2 content, which suggests to me that little net temperature change will ultimately result from the ongoing buildup of CO2 in Earth's atmosphere.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15030", "title": "Intergovernmental Panel on Climate Change", "section": "Section::::Assessment reports.:First assessment report.\n", "start_paragraph_id": 68, "start_character": 0, "end_paragraph_id": 68, "end_character": 871, "text": "The executive summary of the WG I Summary for Policymakers report says they are certain that emissions resulting from human activities are substantially increasing the atmospheric concentrations of the greenhouse gases, resulting on average in an additional warming of the Earth's surface. They calculate with confidence that CO has been responsible for over half the enhanced greenhouse effect. They predict that under a \"business as usual\" (BAU) scenario, global mean temperature will increase by about 0.3 °C per decade during the [21st] century. They judge that global mean surface air temperature has increased by 0.3 to 0.6 °C over the last 100 years, broadly consistent with prediction of climate models, but also of the same magnitude as natural climate variability. The unequivocal detection of the enhanced greenhouse effect is not likely for a decade or more.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26198824", "title": "Climate change feedback", "section": "Section::::Positive.:Carbon cycle feedbacks.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 1438, "text": "There have been predictions, and some evidence, that global warming might cause loss of carbon from terrestrial ecosystems, leading to an increase of atmospheric levels. Several climate models indicate that global warming through the 21st century could be accelerated by the response of the terrestrial carbon cycle to such warming. All 11 models in the C4MIP study found that a larger fraction of anthropogenic CO will stay airborne if climate change is accounted for. By the end of the twenty-first century, this additional CO varied between 20 and 200 ppm for the two extreme models, the majority of the models lying between 50 and 100 ppm. The higher CO levels led to an additional climate warming ranging between 0.1° and 1.5 °C. However, there was still a large uncertainty on the magnitude of these sensitivities. Eight models attributed most of the changes to the land, while three attributed it to the ocean. The strongest feedbacks in these cases are due to increased respiration of carbon from soils throughout the high latitude boreal forests of the Northern Hemisphere. One model in particular (HadCM3) indicates a secondary carbon cycle feedback due to the loss of much of the Amazon Rainforest in response to significantly reduced precipitation over tropical South America. 
While models disagree on the strength of any terrestrial carbon cycle feedback, they each suggest any such feedback would accelerate global warming.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47512", "title": "Climate change", "section": "Section::::Causes.:External forcing mechanisms.:Human influences.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 558, "text": "Of most concern in these anthropogenic factors is the increase in CO levels. This is due to emissions from fossil fuel combustion, followed by aerosols (particulate matter in the atmosphere), and the CO released by cement manufacture. Other factors, including land use, ozone depletion, animal husbandry (ruminant animals such as cattle produce methane, as do termites), and deforestation, are also of concern in the roles they play—both separately and in conjunction with other factors—in affecting climate, microclimate, and measures of climate variables.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31670958", "title": "Tectonic–climatic interaction", "section": "Section::::Orographic controls on climate.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 762, "text": "It is commonly agreed upon that global climate fluctuations are strongly dictated by the presence or absence of greenhouse gases in the atmosphere and carbon dioxide (CO) is typically considered the most significant greenhouse gas. Observations infer that large uplifts of mountain ranges globally result in higher chemical erosion rates, thus lowering the volume of CO in the atmosphere as well as causing global cooling. This occurs because in regions of higher elevation there are higher rates of mechanical erosion (i.e. gravity, fluvial processes) and there is constant exposure and availability of materials available for chemical weathering. The following is a simplified equation describing the consumption of CO during chemical weathering of silicates:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21185721", "title": "Bio-geoengineering", "section": "Section::::Introduction.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 310, "text": "The quick increment in the centralization of air CO2 proceeded with anthropogenic emanations of this gas is the fundamental factor driving worldwide environmental change. Due to many different causes global temperatures are to increase by 3-5 degrees celsius or 5.4 - 9 degrees fahrenheit within this century.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17964574", "title": "List of ecoregions in North America (CEC)", "section": "Section::::Northwestern Forested Mountains.:Climate change in the Northwestern Forested Mountains.\n", "start_paragraph_id": 198, "start_character": 0, "end_paragraph_id": 198, "end_character": 573, "text": "The effects of fossil fuels emissions, the largest contributor to climate change, cause rising CO2 levels in the earth’s atmosphere. This raises atmospheric temperatures and levels of precipitation in the Northwestern Forested Mountains. Being a very mountainous region, weather patterns contribute higher levels of precipitation. This can cause landslides, channel erosion and floods. The warmer air temperatures also create more rain and less snow, something dangerous for many animal and tree species; with less snow pack comes more vulnerability for trees and insects.\n", "bleu_score": null, "meta": null } ] } ]
null
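A toy calculation, using only the numbers quoted in the absorption-spectra answer above (record 52fzsq), shows why a trace gas can matter: multiply the tiny concentration by the large per-molecule absorption advantage.

```python
# Back-of-the-envelope check of the figures quoted in the answer above:
# if CO2 absorbs roughly 1,000 to 1,000,000 times as intensely as O2,
# then 0.04% CO2 does as much absorbing as 40% to 40,000% O2 would.
co2_fraction = 0.0004                  # about 0.04% of the atmosphere
intensity_ratios = (1_000, 1_000_000)  # CO2 vs O2, as stated in the answer

for ratio in intensity_ratios:
    equivalent_o2 = co2_fraction * ratio
    print(f"{ratio:,}x intensity: acts like {equivalent_o2:.0%} O2")
# 1,000x intensity: acts like 40% O2
# 1,000,000x intensity: acts like 40000% O2
```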
2v4qtx
How much gear would a WWII British Commando carry into the field? Also: beret or helmet?
[ { "answer": "Not sure about the packs, but the steel helmet protects against shrapnel, not direct hits from bullets. Since the commandos were involved in small raids and unconventional warfare, it's not unreasonable that they would have preferred to save on weight when shrapnel would have been unlikely.\n\nSee this youtube video for a steel helmet penetration test:\n_URL_0_", "provenance": null }, { "answer": "Helmet, absolutely a helmet; berets in combat are the realm of video games, 1960s Hollywood, or a particularly poor NCO.\n\nCommandos standard combat load would not differ greatly from that of a British rifleman. \n\n > I recently saw a piece of artwork featuring the Commandos in a combat situation; the bullets were flying, men were running around fighting, and yet every single guy had his backpack firmly on\n\nCommando units raiding at night, perhaps not, but many of the 4 and 40 series units fought during full spectrum operations in 1944, such as in the Sword and Gold sectors; Lord Lovat's being a prominent example. Infantry fight with their packs, especially if they intend to *hold* the seized ground. Entrenching tools, ammo, rations, medical supplies; these are not things the [standard British webbing](_URL_1_) could carry without the aid of a pack. As I said at the start; with the exception of low-light raids, a helmet would always be present, or at least with them on their kit. A wool cap would be evident in matters of stealth, to prevent the rattling of the helmet or the sheen of one that hasn't been properly dulled in moonlight, otherwise, if bullets were expected to fly hard and often, a tin helmet would be worn.\n\nIn addition to the standard combat load (100-120 rounds, per man, [.303 British](_URL_0_), several Mill's bombs, bayonet, knife/gravity knife, entrenching tools, 3 spare BREN magazines, and so on... ) a Commando on a raid may have tools for sabotage, additional radio equipment, ammunition, rations or packages for *liason* with Resistance Forces, demolitions...the list is nearly endless. A pack would be absolutely necessary and a unit that is designed to operate well beyond the scope of logistics would indeed require to take all their equipment with them.\n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "14695226", "title": "Mk III helmet", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 449, "text": "The Mk III Helmet was a steel military combat helmet first developed for the British Army in 1941 by the Medical Research Council. First worn in combat by British and Canadian troops on D-Day, the Mk III and Mk IV were used alongside the Brodie helmet for the remainder of the Second World War. It is sometimes referred to as the \"turtle\" helmet by collectors, because of its vague resemblance to a turtle shell, as well as the 1944 pattern helmet.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8551454", "title": "Korps Speciale Troepen", "section": "Section::::The Green and Red berets.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 398, "text": "The forces wore the green beret, which was the official headdress of the British Commandos of World War II. Under the name No. 2 (Dutch) Troop, the first Dutch commandos were trained in Achnacarry, Scotland, as part of No. 10 (Inter-Allied) Commando'. After the war, members of No. 2 Dutch troop served in RST (1945–1950). 
The paratrooper wing of the KST No 1 parachute company wore the Red beret.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41273100", "title": "M42 Duperite helmet", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 257, "text": "The M42 Duperite helmet was a paratrooper helmet issued to Australian paratroopers during WW2. The helmet got its eponymous name from the shock impact-absorbing material it was composed of. It was similar to the first of the British dispatch rider helmets.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15390889", "title": "M1C helmet", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 541, "text": "The M1C helmet was a variant of the U.S. Army's popular and iconic M1 helmet. Developed in World War II to replace the earlier M2 helmet, it was issued to paratroopers. It was different from the M2 in various ways, most importantly its bails (chinstrap hinges). The M2 had fixed, spot welded \"D\" bales so named for their shape, similar to early M1s. It was found that when sat on or dropped, these bails would snap off. The solution was the implementation of the swivel bail, which could move around and so was less susceptible to breaking.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9108309", "title": "Maroon beret", "section": "Section::::Origins.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 275, "text": "During World War II some British Army units followed the lead of the Armoured Corps and adopted the beret as a practical headgear, for soldiers who needed a hat that could be worn in confined areas, slept in and could be stowed in a small space when they wore steel helmets.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "339746", "title": "Commandos (United Kingdom)", "section": "Section::::Training.:Weapons and equipment.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 613, "text": "Initially the Commandos were indistinguishable from the rest of the British Army and volunteers retained their own regimental head-dress and insignia. No. 2 Commando adopted Scottish head-dress for all ranks and No. 11 (Scottish) Commando wore the Tam O'Shanter with a black hackle. The official head-dress of the Middle East Commandos was a bush hat with their own knuckleduster cap badge. This badge was modelled on their issue fighting knife (the Mark I trench knife) which had a knuckleduster for a handle. In 1942 the green Commando beret and the Combined Operations tactical recognition flash were adopted.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41273144", "title": "Helmet Steel Airborne Troop", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 349, "text": "The Helmet Steel Airborne Troops is a paratrooper helmet of British origin worn by Paratroopers and Airborne forces. It was introduced in Second World War and was issued to Commonwealth countries in the post-1945 era up to the Falklands War. As with the similarly shaped RAC helmet, it was initially manufactured by Briggs Motor Bodies at Dagenham.\n", "bleu_score": null, "meta": null } ] } ]
null
6hqdbo
how do locksmiths verify that you own a key before making a copy of it?
[ { "answer": "Unfortunately there isn't always a way to tell.\n\nThere are mechanical kiosks at big box stores that copy keys for a small price.\n\nIn that case there is no verification, like you said - There's no proof that you aren't a theif required for copying a key. \n\nEdit: That's why most landlords buy keys with \"DO NOT COPY\" engraved.", "provenance": null }, { "answer": "Quite simply, they don't. Unless it is part of thing called a Restricted System, then you have to sign for the key & you have to be on a list of authorised signatories. These are usually the \"Do Not Copy\" keys.\n\nRestricted keys should have the issuing locksmith stamped on the key head, and *only* they may issue further copies with an authorised signature.\n\n", "provenance": null }, { "answer": "They don't it's up to you to protect your keys and restrict access to them. If you ever lose them or someone steals them you need to be changing your locks.", "provenance": null }, { "answer": "They don't. AFAIK some countries have a centralized system where you request the keys from a special certified locksmith that cross-checks your credentials before making a key for your particular house (A french friend told me this as he was impressed on how easy and cheap you could copy a key around here).", "provenance": null }, { "answer": "They don't know you aren't a thief. However some locks and some keys are protected from this with security measures. Locksmiths aren't worried about copying a house key. But if you try to get a key for a high security lock copied, it's not going to happen. These keys will often have writing on them for \"do not copy\" and use multiple rows of pins. \n\nThe only time that a locksmith might want to verify your identity is if you are asking them to get into a locked car, house, or business. They need reasonable assurances that you are authorized to be there and to enter the premises. If it turns out that you are lying and the police get involved the locksmith has an out if they took reasonable precautions to ensure you were authorized. \n\nSource - I'm an amateur lock smith with about 10 years experience keying, re-pinning, picking, repairing, and bypassing locks. \n\nLocks do not keep someone from breaking in to a home or vehicle in any case. They are there to keep honest people honest, and to deter thieves to pick easier targets. If someone wants to steal from you, there is not a lot you can do to stop them short of guarding your property 24x7. However you can take reasonable precautions so you aren't the low hanging fruit when a thief wants to break in. \n\nAnd if someone does want to steal, they are not going to use a locksmith who could be a witness against them. They will simply smash and grab, or con their way on premises. \n\nAnd being a locksmith, if I wanted to break the law and make a key I don't need the original key to copy. I can cut my own key for most locks using a few simple tricks for pin lengths. But I wouldn't bother. Most residential locks can be opened in under 10 seconds by an amateur simply by raking. 
", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1975099", "title": "Key blank", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 206, "text": "The State of California prohibits locksmiths from copying keys marked \"Do Not Duplicate\" or \"Unlawful to Duplicate\", provided the key originator's company name and telephone number are included on the key.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19892614", "title": "Key relevance", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 365, "text": "In master locksmithing, key relevance is the measurable difference between an original key and a copy made of that key, either from a wax impression or directly from the original, and how similar the two keys are in size and shape. It can also refer to the measurable difference between a key and the size required to fit and operate the keyway of its paired lock.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5961174", "title": "Key code", "section": "Section::::Bitting code.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 281, "text": "Experienced locksmiths might be able to figure out a bitting code from looking at a picture of a key. This happened to Diebold voting machines in 2007 after they posted a picture of their master key online, people were able to make their own key to match it and open the machines.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14751215", "title": "Travel Sentry", "section": "Section::::Master key compromise.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 494, "text": "In a 2014 article in the \"Washington Post\" a picture of the special tools was included, and while this picture was later removed it quickly spread. Security researchers have pointed out that it is now possible for anyone to make new master keys and open the locks without any sign of entry, and the locks can now be considered compromised. It is likely that professional thieves have possessed the master keys well before the publication, perhaps by reverse engineering the TSA-approved locks.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18375", "title": "Locksmithing", "section": "Section::::Terminology.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 349, "text": "A lock is a mechanism that secures buildings, rooms, cabinets, objects, or other storage facilities. A \"smith\" of any type is one who shapes metal pieces, often using a forge or mould, into useful objects or to be part of a more complex structure. Locksmithing, as its name implies, is the assembly and designing of locks and their respective keys.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1010669", "title": "Double dispatch", "section": "Section::::Use cases.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 381, "text": "BULLET::::- \"Lock and key\" systems where there are many types of locks and many types of keys and every type of key opens multiple types of locks. 
Not only do you need to know the types of the objects involved, but the subset of \"information about a particular key that are relevant to seeing if a particular key opens a particular lock\" is different between different lock types.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "689470", "title": "Record locking", "section": "Section::::Use of locks.:Exclusive locks.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 346, "text": "Exclusive locks are, as the name implies, exclusively held by a single entity, usually for the purpose of writing to the record. If the locking schema was represented by a list, the holder list would contain only one entry. Since this type of lock effectively blocks any other entity that requires the lock from processing, care must be used to:\n", "bleu_score": null, "meta": null } ] } ]
null
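The "Double dispatch" excerpt in the provenance above describes lock-and-key systems in the programming sense: whether a key opens a lock depends on the concrete types of both objects. A minimal Python sketch of that pattern follows; all class names are illustrative, not from the cited article.

```python
class PinTumblerLock:
    def opened_by(self, key):            # first dispatch: on the lock's type
        return key.opens_pin_tumbler(self)

class WardedLock:
    def opened_by(self, key):
        return key.opens_warded(self)

class HouseKey:
    def opens_pin_tumbler(self, lock):   # second dispatch: on the key's type
        return True
    def opens_warded(self, lock):
        return False

class SkeletonKey:
    def opens_pin_tumbler(self, lock):
        return False
    def opens_warded(self, lock):        # skeleton keys defeat simple warded locks
        return True

print(PinTumblerLock().opened_by(HouseKey()))  # True
print(WardedLock().opened_by(HouseKey()))      # False
print(WardedLock().opened_by(SkeletonKey()))   # True
```

Each call resolves behavior on two runtime types, which is why the cited article says the relevant "does this key fit this lock" logic differs between lock types.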
z9us8
If a woman's on birth control that stops her menstruating once a month, will she remain fertile for longer?
[ { "answer": "That makes sense biologically (and is why nulliparity is thought to contribute to earlier menopause). However, this is not always supported by epidemiological studies. \n\n[This study](_URL_0_) found that history of oral contraceptive use significantly *increased* the risk for *early* menopause (defined here as prior to 49yo), while parity did not.\n > Ever-users of OC in our study had a mean age at menopause of 45.7 years (SD 6.00 years) while never-users' mean age at menopause was 47.2 years (SD 5.50 years).\n\nIt goes on to explain:\n > It is known that OC use and pregnancy disrupt the ovulation cycle. Whether this contributes to a later age at natural menopause is disputed. We found that ever-use of OC was significantly associated with early rather than later natural menopause. We have no obvious explanation for this finding, thus it is important that others investigate this. A Dutch cohort study found that ever-users of OC had a significantly later natural menopause than never-users (mean 51.2 years, SD 3.29 vs 50.1 years, SD 4.16; P < .01). In contrast to these findings, the Massachusetts Women's Health Study did not find an association between ever-use or duration of OC use and age at menopause.", "provenance": null }, { "answer": "A [recent study](_URL_0_) suggests that women may be able to create new eggs after all, challenging that belief that women are born with all the eggs they will ever have.", "provenance": null }, { "answer": "As far as I have seen in the literature, you are correct that women have a set number of eggs, however the way that most hormonal pills work is by preventing the release of the egg in the first place. Even women who get their monthly \"periods\" shouldn't be ovulating. However your body can't keep eggs in stasis forever so it still attempts to cycle them through normally. What happens is that when they reach a critical point in development in the ovary they do not get the hormonal signal needed to become mature eggs and so they die (atresia). So whether or not you menstruate you are still losing eggs over time. There may be a small difference in the number of eggs you lose, I'm not sure, but it shouldn't be enough to affect when you hit menopause. \n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "44198247", "title": "Abortion in Bangladesh", "section": "Section::::Abortion.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 264, "text": "Menstrual regulation allows a woman to terminate within 10 weeks of her last period, but unsafe methods to terminate pregnancy are widespread. In response, a hotline was created for women to get information about fertility control, including menstrual regulation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3636272", "title": "Lactational amenorrhea", "section": "Section::::Return of fertility.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 2366, "text": "Return of menstruation following childbirth varies widely among individuals. This return does not necessarily mean a woman has begun to ovulate again. The first postpartum ovulatory cycle might occur before the first menses following childbirth or during subsequent cycles. 
A strong relationship has been observed between the amount of suckling and the contraceptive effect, such that the combination of feeding on demand rather than on a schedule and feeding only breast milk rather than supplementing the diet with other foods will greatly extend the period of effective contraception. In fact, it was found that among the Hutterites, more frequent bouts of nursing, in addition to maintenance of feeding in the night hours, led to longer lactational amenorrhea. An additional study that references this phenomenon cross-culturally was completed in the United Arab Emirates (UAE) and has similar findings. Mothers who breastfed exclusively longer showed a longer span of lactational amenorrhea, ranging from an average of 5.3 months in mothers who breastfed exclusively for only two months to an average of 9.6 months in mothers who did so for six months. Another factor shown to affect the length of amenorrhea was the mother's age. The older a woman was, the longer period of lactational amenorrhea she demonstrated. The same increase in length was found in multiparous women as opposed to primiparous. With regards to the use of breastfeeding as a form of contraception, most women who choose not to breastfeed will resume regular menstrual cycling within 1.5 to 2 months following parturition. Furthermore, the closer a woman's behavior is to the Seven Standards of ecological breastfeeding, the later (on average) her cycles will return. Overall, there are many factors including frequency of nursing, mother's age, parity, and introduction of supplemental foods into the infant's diet among others which can influence return of fecundity following pregnancy and childbirth and thus the contraceptive benefits of lactational amenorrhea are not always reliable but are evident and variable among women. Couples who desire spacing of 18 to 30 months between children can often achieve this through breastfeeding alone, though this is not a foolproof method as return of menses is unpredictable and conception can occur in the weeks preceding the first menses. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "411879", "title": "Dysmenorrhea", "section": "Section::::Mechanism.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 230, "text": "During a woman's menstrual cycle, the endometrium thickens in preparation for potential pregnancy. After ovulation, if the ovum is not fertilized and there is no pregnancy, the built-up uterine tissue is not needed and thus shed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38203", "title": "Menstruation", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 433, "text": "The menstrual cycle occurs due to the rise and fall of hormones. This cycle results in the thickening of the lining of the uterus, and the growth of an egg, (which is required for pregnancy). The egg is released from an ovary around day fourteen in the cycle; the thickened lining of the uterus provides nutrients to an embryo after implantation. 
If pregnancy does not occur, the lining is released in what is known as menstruation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "181805", "title": "Menarche", "section": "Section::::Physiology.:Relation to fertility.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 521, "text": "On the other hand, not every girl follows the typical pattern, and some girls ovulate before the first menstruation. Although unlikely, it is possible for a girl who has engaged in sexual intercourse shortly before her menarche to conceive and become pregnant, which would delay her menarche until after the end of the pregnancy. This goes against the widely held assumption that a woman cannot become pregnant until after menarche. A young age at menarche is not correlated with a young age at first sexual intercourse.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "88003", "title": "Menstrual cycle", "section": "Section::::Other interventions.\n", "start_paragraph_id": 94, "start_character": 0, "end_paragraph_id": 94, "end_character": 315, "text": "Menstruation can be delayed by the use of progesterone or progestins. For this purpose, oral administration of progesterone or progestin during cycle day 20 has been found to effectively delay menstruation for at least 20 days, with menstruation starting after 2–3 days have passed since discontinuing the regimen.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15738568", "title": "Functional hypothalamic amenorrhea", "section": "Section::::Management.\n", "start_paragraph_id": 62, "start_character": 0, "end_paragraph_id": 62, "end_character": 218, "text": "If menstruation does not resume spontaneously following lifestyle changes, the patient should be monitored for thyroid function, HPO axis function, and concentrations of ACTH, cortisol, and prolactin every 4-5 months.\n", "bleu_score": null, "meta": null } ] } ]
null
476umw
Are there any biographies available about Native North Americans who lived before 1492?
[ { "answer": "Because of the strong oral traditions in many Nations, it is difficult to find records of individuals, and the ones who do get recorded are those who have done something great, and they get wrapped into lessons and tales that it becomes hard to tell if the person existed at all. \n\nWere you looking for a story of the life of someone, or how someone would have lived before European contact? ", "provenance": null }, { "answer": "Well this may not fit your criteria very closely (or even at all, since his life was post-contact and you may or may not include Inuit in Canada under the term \"Native North Americans\"), but [Peter Pitseolak](_URL_0_) wrote a memoir that may be of interest. He opens with stories of his father's life pre-contact, contact, and the impacts of contact throughout his own life. The book is [*People From Our Side: A Life Story with Photographs and Oral Biography*](_URL_2_), co-authored by [Dorothy Harley Eber](_URL_1_).", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "34063832", "title": "History of the Indian Tribes of North America", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 600, "text": "The History of the Indian Tribes of North America is a three-volume collection of Native American biographies and accompanying lithograph portraits originally published in the United States from 1836 to 1844 by Thomas McKenney and James Hall. The majority of the portraits were first painted in oil by Charles Bird King. McKenney was working as the US Superintendent of Indian Trade and would head the Office of Indian Affairs, both then within the War Department. He planned publication of the biographical project to be supported by private subscription, as was typical for publishing of the time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "384160", "title": "Beaver Wars", "section": "Section::::Origins.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 688, "text": "The expeditions of French explorer Jacques Cartier in the 1540s made the first written records of the Native Americans in North America. French explorers and fishermen had traded in the region near the mouth of the St. Lawrence River estuary a decade before then for valuable furs. Cartier wrote of encounters with a people later classified as the St. Lawrence Iroquoians, also known as the \"Stadaconan\" or \"Laurentian\" people, who occupied several fortified villages, including \"Stadacona\" and \"Hochelaga\". Cartier recorded an ongoing war between the Stadaconans and another tribe known as the \"Toudaman\", who had destroyed one of their forts the previous year, resulting in 200 deaths.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4446933", "title": "Jamake Highwater", "section": "Section::::Further reading.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 248, "text": "BULLET::::- Hoxie, Frederick E. \"Encyclopedia of North American Indians: Native American History, Culture, and Life From Paleo-Indians to the Present\", Boston: Houghton Mifflin Harcourt, 2006: 191–2. 
(retrieved through Google Books, July 26, 2009)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9075979", "title": "Frank Speck", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 311, "text": "Frank Gouldsmith Speck (November 8, 1881 – February 6, 1950) was an American anthropologist and professor at the University of Pennsylvania, specializing in the Algonquian and Iroquoian peoples among the Eastern Woodland Native Americans of the United States and First Nations peoples of eastern boreal Canada.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3765538", "title": "Plausawa", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 302, "text": "Plausawa (1700 – February 9, 1754) was a Pennacook Indian who lived in what is now New Hampshire. In 1728 he was the last known Native American living in the town of Suncook. At the start of King George's War in 1740 Plausawa moved to St. Francis in Quebec and fought against the settlers of the British.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18933066", "title": "Florida", "section": "Section::::History.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 353, "text": "By the 16th century, the earliest time for which there is a historical record, major Native American groups included the Apalachee of the Florida Panhandle, the Timucua of northern and central Florida, the Ais of the central Atlantic coast, the Tocobaga of the Tampa Bay area, the Calusa of southwest Florida and the Tequesta of the southeastern coast.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1467707", "title": "Ella Cara Deloria", "section": "Section::::References and further reading.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 259, "text": "- Philip J. Deloria, \"Ella Deloria (\"Anpetu Waste\").\" \"Encyclopedia of North American Indians: Native American History, Culture, and Life from Paleo-Indians to the Present.\" Ed. Frederick E. Hoxie. Boston: Houghton Mifflin Harcourt, 1996. 159-61.\n", "bleu_score": null, "meta": null } ] } ]
null
2e3bxv
if the metric system is designed to make for easy calculations and conversions, why wasn't the 60 minute hour changed to a base 10 unit?
[ { "answer": "[Metric time]( _URL_0_) is in fact part of the metric system. Its just hasn't become used in most people's every day lives.", "provenance": null }, { "answer": "time is always expressed in seconds in the metric system.\nor multiples, like milliseconds, kiloseconds, etc.\n\"Other units of time, the minute, hour, and day, are accepted for use with the modern metric system, but are not part of it.\"\n_URL_0_", "provenance": null }, { "answer": "The French tried, but couldn't get it to stick. When dealing with SI units, you are technically not *supposed* to use minutes, hours, days, or whatnot, but because the conversions to these units of time are so widely known and accepted, it is never that big of a deal.", "provenance": null }, { "answer": "A) 60 (prime factors 2, 3, 5) is more easily divisible than 100 (prime factors 2, 5). You can slice it into fractions more easily, making it more useful in everyday life. \n\nB) Our entire society is based on non-decimal time. A workday is 8 hours. A week is 7 days. If a week is decimalized to be 10 days, how does that affect the workweek? Do workers still get 2 days off per week, but now they have 8 days to slog through before the weekend instead of 5? If a day has only 10 hours instead of 24, how long is a standard workday - 3.33? Or 4? Or 2? All this would have to be figured out. All for the sake of taking a system that works fine and making it more mathematically \"pretty\".", "provenance": null }, { "answer": "science works in seconds. the rest is just useful for day-to-day stuff. \n\na base 10 system for time would actually reduce the number of factors, so you couldnt chop up and hour as cleanly if it were base 10", "provenance": null }, { "answer": "They tried it doesn't work. Why? Because our natural time cycles; day, seasons, year, don't line up nice and square. And our 60 minute system proves quite useful", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "33366803", "title": "Introduction to the metric system", "section": "Section::::Units.:Time.\n", "start_paragraph_id": 49, "start_character": 0, "end_paragraph_id": 49, "end_character": 543, "text": "When the metric system was first introduced in 1795, all metric units could be defined by reference to the standard metre or to the standard kilogram. In 1832 Carl Friedrich Gauss, when making the first absolute measurements of the Earth's magnetic field, needed standard units of time alongside the units of length and mass. He chose the second (rather than the minute or the hour) as his unit of time, thereby implicitly making the second a base unit of the metric system. The hour and minute have however been \"accepted for use within SI\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "208151", "title": "10", "section": "Section::::In science.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 249, "text": "The metric system is based on the number 10, so converting units is done by adding or removing zeros (e.g. 
1 centimeter = 10 millimeters, 1 decimeter = 10 centimeters, 1 meter = 100 centimeters, 1 dekameter = 10 meters, 1 kilometer = 1,000 meters).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9016829", "title": "List of customary units of measurement in South Asia", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 440, "text": "Full metrication with the passage of the Standards of Weights and Measures Act, 1956, now replaced by the Standards of Weights and Measures Act, 1976: these Acts quote the legal conversion factors for Imperial units to SI units. Exact conversions can be made for customary units if they had previously been defined in terms of Imperial units: however, even when legally defined, the value of a unit could vary between different localities.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31743909", "title": "History of the metric system", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 787, "text": "The first practical realisation of the metric system came in 1799, during the French Revolution, when the existing system of measures, which had become impractical for trade, was replaced by a decimal system based on the kilogram and the metre. The basic units were taken from the natural world: the unit of length, the metre, was based on the dimensions of the Earth, and the unit of mass, the kilogram, was based on the mass of water having a volume of one litre or a cubic decimetre. Reference copies for both units were manufactured in platinum and remained the standards of measure for the next 90 years. After a period of reversion to the \"mesures usuelles\" due to unpopularity of the metric system, the metrication of France as well as much of Europe was complete by mid-century.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52744", "title": "Jean-Charles de Borda", "section": "Section::::Tables of logarithms.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 386, "text": "With the advent of the metric system after the French Revolution it was decided that the quarter circle should be divided into 100 degrees instead of 90 degrees, and the degree into 100 seconds instead of 60 seconds. This required the calculation of trigonometric tables and logarithms corresponding to the new size of the degree and instruments for measuring angles in the new system.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20353", "title": "Metrication", "section": "Section::::Conversion process.:Chronology and status of conversion by country.\n", "start_paragraph_id": 63, "start_character": 0, "end_paragraph_id": 63, "end_character": 1541, "text": "The third method is to redefine traditional units in terms of metric values. These redefined \"quasi-metric\" units often stay in use long after metrication is said to have been completed. Resistance to metrication in post-revolutionary France convinced Napoleon to revert to \"mesures usuelles\" (usual measures), and, to some extent, the names remain throughout Europe. In 1814, Portugal adopted the metric system, but with the names of the units substituted by Portuguese traditional ones. In this system, the basic units were the \"mão-travessa\" (hand) = 1 decimetre (10 \"mão-travessas\" = 1 \"vara\" (yard) = 1 metre), the \"canada\" = 1 litre and the \"libra\" (pound) = 1 kilogram. 
In the Netherlands, 500 g is informally referred to as a \"pond\" (pound) and 100 g as an \"ons\" (ounce), and in Germany and France, 500 g is informally referred to respectively as \"ein Pfund\" and \"une livre\" (\"one pound\"). In Denmark, the re-defined \"pund\" (500 g) is occasionally used, particularly among older people and (older) fruit growers, since these were originally paid according to the number of pounds of fruit produced. In Sweden and Norway, a \"mil\" (Scandinavian mile) is informally equal to 10 km, and this has continued to be the predominantly used unit in conversation when referring to geographical distances. In the 19th century, Switzerland had a non-metric system completely based on metric terms (e.g. 1 \"Fuss\" (foot) = 30 cm, 1 \"Zoll\" (inch) = 3 cm, 1 \"Linie\" (line) = 3 mm). In China, the \"jin\" now has a value of 500 g and the liang is 50 g.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23685094", "title": "List of unusual units of measurement", "section": "Section::::Time.:Decimal time systems.\n", "start_paragraph_id": 127, "start_character": 0, "end_paragraph_id": 127, "end_character": 311, "text": "SI allows for the use of larger prefixed units based on the second, a system known as metric time, but this is seldom used, since the number of seconds in a day (86,400 or, in rare cases, 86,401) negates one of the metric system's primary advantages: easy conversion by multiplying or dividing by powers of ten.\n", "bleu_score": null, "meta": null } ] } ]
null
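A quick aside on the divisibility argument made in the answers above: here is a minimal Python sketch (an editorial addition, not part of the original thread) that counts whole-number divisors, showing why 60 and 24 slice into clean fractions more readily than 100 or 10 would.

```python
def divisors(n):
    """Return every whole-number divisor of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

# Compare the traditional time bases with their decimal replacements.
for n in (60, 100, 24, 10):
    ds = divisors(n)
    print(f"{n}: {len(ds)} divisors -> {ds}")

# 60: 12 divisors -> [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
# 100: 9 divisors -> [1, 2, 4, 5, 10, 20, 25, 50, 100]
# 24: 8 divisors -> [1, 2, 3, 4, 6, 8, 12, 24]
# 10: 4 divisors -> [1, 2, 5, 10]
```

This is why an hour splits evenly into halves, thirds, quarters, fifths, sixths, and twelfths, while a 100-minute hour would lose the thirds and sixths.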
3oto53
how exactly is there a connection between binaural beats and lucid dreaming?
[ { "answer": "At least in my experience, first of all not all binaural beats do anything and secondly, they do not really get you to dream lucidly, they rather get you to dream more vividly, which makes it easier for you to write a dream diary (important step for lucid dreaming) and makes entering the lucid status more easily. But if you can't do it, binaural beats won't suddenly make you able to.", "provenance": null }, { "answer": "so the sound doesnt help u dream just remember your dream?\n\nthis is the description from the track i was listening to \n\n\"Using a complex pattern of binaural beat and isochronic tone frequencies dedicated to help you achieve good sleep and have lucid dreams, this 8-hour music track is divided into four unique sections. In the first 2 hours we've used frequencies that range from 3-13Hz (Alpha-Theta range) to help calm your mind and feel deeply relaxed. There is a pleasurable feeling of floating and it will give effects such as stress reduction, relaxed awareness, release of serotonin, and an induction to sleep spindles as your mind and body allows itself into sleep. It also contains triggers for creativity and imagery and access to subconscious images as you doze off.\n\nThe second and third sections contain more of the Theta waves, which are also present in dreaming, sleep, deep meditation and creative inspiration. As you have already fallen asleep, the binaural beats tap into your subconsciousness as your mind prepares itself into a lucid dream state. The music is more steady so as not to interrupt your sleep.\n\nThe fourth and last section returns itself to the Alpha range with a mix of Theta and Delta. This is where deep sleep occurs and more often than not, the dream state. There is a decreased awareness of the physical world. This section also contains the Earth Resonance or Schumann Resonance, which will leave you feeling revitalized upon waking up.\n\nIn order to achieve Lucid Dreaming, please research on different tips found on the web. Lucid Dreaming doesn't happen all at once, so patience is an important factor.\n\nWe also advise you to keep a dream journal near you. We hope you'll enjoy our first-ever 8 hour full audio track. Share with us your experiences in the comments section! We'd love to hear from you.\"\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "18130567", "title": "Primary consciousness", "section": "Section::::Miscellaneous studies.:In lucid dreams.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 1018, "text": "Hobson asserts that the existence of lucid dreaming means that the human brain can simultaneously occupy two states: waking and dreaming. The dreaming portion has experiences and therefore has primary consciousness, while the waking self recognizes the dreaming and can be seen as having a sort of secondary consciousness in the sense that there is an awareness of mental state. Studies have been able to show that lucid dreaming is associated with EEG power and coherence profiles that are significantly different from both non-lucid dreaming and waking. Lucid dreaming situates itself between those two states. Lucid dreaming is characterized by more 40 Hz power than non-lucid dreaming, especially in frontal regions. 
Since it is 40 Hz power that has been correlated with waking consciousness in previous studies, it can be suggested that enough 40 Hz power has been added to the non-lucid dreaming brain to support the increase in subjective awareness that permits lucidity but not enough to cause full awakening.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7743915", "title": "Need for cognition", "section": "Section::::Features.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 379, "text": "A study on lucid dreaming found that frequent and occasional lucid dreamers scored higher on NFC than non-lucid dreamers. This suggests there is continuity between waking and dreaming cognitive styles. Researchers have argued that this is because self-reflectiveness or self-focused attention is heightened in lucid dreams and also is associated with greater need for cognition.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18286", "title": "Lucid dream", "section": "Section::::Scientific research.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 431, "text": "Using electroencephalography (EEG) and other polysomnographical measurements, LaBerge and others have shown that lucid dreams begin in the Rapid Eye Movement (REM) stage of sleep. LaBerge also proposes that there are higher amounts of beta-1 frequency band (13–19 Hz) brain wave activity experienced by lucid dreamers, hence there is an increased amount of activity in the parietal lobes making lucid dreaming a conscious process.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8161085", "title": "Pre-lucid dream", "section": "Section::::Terminology.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 601, "text": "The term \"lucid dreaming\" was first coined by Dutch psychologist Frederik Willems Van Eeden who introduced the concept on the 22nd of April during a meeting held by the Society for Psychical Research in 1913, but this phenomenon has been present all throughout historical periods with some findings even dating back to the writings of Aristotle. Stephen LaBerge, American psychophysiologist, introduced his method for physiological investigation of lucid dreaming through eye signals in the 1980s and ever since, more modern research has been established on the studies of the lucid dreaming process.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29807596", "title": "Secondary consciousness", "section": "Section::::Lucid vs. non lucid dreaming as a model.:Research.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 1038, "text": "In one study, researchers sought physiological correlates of lucid dreaming. They showed that the unusual combination of hallucinatory dream activity and wake-like reflective awareness and agentive control experienced in lucid dreams is paralleled by significant changes in electrophysiology. Participants were recorded using 19-channel Electroencephalography (EEG), and 3 achieved lucidity in the experiment. Differences between REM sleep and lucid dreaming were most prominent in the 40-Hz frequency band. The increase in 40-Hz power was especially strong at frontolateral and frontal sites. Their findings include the indication that 40-Hz activity holds a functional role in the modulation of conscious awareness across different conscious states. 
Furthermore, they termed lucid dreaming as a hybrid state, or that lucidity occurs in a state with features of both REM sleep and waking. In order to move from non-lucid REM sleep dreaming to lucid REM sleep dreaming, there must be a shift in brain activity in the direction of waking.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44785", "title": "Dream", "section": "Section::::Other associated phenomena.:Lucid dreaming.\n", "start_paragraph_id": 135, "start_character": 0, "end_paragraph_id": 135, "end_character": 565, "text": "Lucid dreaming is the conscious perception of one's state while dreaming. In this state the dreamer may often have some degree of control over their own actions within the dream or even the characters and the environment of the dream. Dream control has been reported to improve with practiced deliberate lucid dreaming, but the ability to control aspects of the dream is not necessary for a dream to qualify as \"lucid\" — a lucid dream is any dream during which the dreamer knows they are dreaming. The occurrence of lucid dreaming has been scientifically verified.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27294113", "title": "Oneironautics", "section": "Section::::Within one's dream.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 235, "text": "A lucid dream is one in which the dreamer is aware of dreaming and may be able to exert some degree of control over the dream's characters, narrative or environment. Early references to the phenomenon are found in ancient Greek texts.\n", "bleu_score": null, "meta": null } ] } ]
null
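For readers unfamiliar with the term used throughout this record: a binaural beat is made by playing two pure tones of slightly different frequency, one per ear, so the brain perceives their difference as a slow pulse. Below is a minimal NumPy sketch; the 200 Hz carrier and 6 Hz theta-range offset are illustrative assumptions, not values taken from the track described above.

```python
import numpy as np

rate = 44100        # samples per second (CD-quality audio)
seconds = 5.0
carrier = 200.0     # left-ear tone in Hz (assumed for illustration)
beat = 6.0          # desired beat frequency in Hz (theta range)

# Two sine tones differing by `beat` Hz; played one per ear, the
# listener perceives a 6 Hz pulsation rather than two separate tones.
t = np.arange(int(rate * seconds)) / rate
left = np.sin(2 * np.pi * carrier * t)
right = np.sin(2 * np.pi * (carrier + beat) * t)
stereo = np.column_stack([left, right])  # shape: (n_samples, 2)
```

Note that nothing in this construction implies any effect on dreaming; as the first answer stresses, the claimed link to lucidity is anecdotal at best.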
10g3el
Why do different viruses (HPV/Warts, Herpes) discriminate between different areas of the body?
[ { "answer": "That is called as tropism, specifically tissue or cell tropism. Usually, there is a specific receptor on certain tissue to which the virus attaches (virus attachment protein). A typical example is the human immunodeficiency virus and its affinity to the T lymphocyte cells. \n\nHerpes simplex 1 exhibits tropism towards epithelial and neural cells, papilloma virus to cutaneous tissue and mucosal cells. Oral, plantar/palmar and genital warts are caused by different sub-types of the papilloma virus.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "39256418", "title": "Human virome", "section": "Section::::Diversity of human viruses.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 348, "text": "In addition, the same viruses were prevalent in multiple body habitats within individuals. For instance, the beta- and gamma-papillomaviruses were the viruses most commonly found in the skin and the nose (anterior nares; see Figure 4A,B), which may reflect proximity and similarities in microenvironments that support infection with these viruses.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7374970", "title": "Tissue tropism", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 215, "text": "Some bacteria and viruses have a broad tissue tropism and can infect many types of cells and tissues. Other viruses may infect primarily a single tissue. For example, rabies virus affects primarily neuronal tissue.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6863192", "title": "Phenotype mixing", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 213, "text": "In other words; nongenetic interaction in which virus particles released from a cell that is infected with two different viruses have components from both the infecting agents, but with a genome from one of them.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "42114", "title": "Salmonella", "section": "Section::::Molecular mechanisms of infection.\n", "start_paragraph_id": 37, "start_character": 0, "end_paragraph_id": 37, "end_character": 340, "text": "Mechanisms of infection differ between typhoidal and nontyphoidal serotypes, owing to their different targets in the body and the different symptoms that they cause. Both groups must enter by crossing the barrier created by the intestinal cell wall, but once they have passed this barrier, they use different strategies to cause infection.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "897133", "title": "Feline leukemia virus", "section": "Section::::Comparison with feline immunodeficiency virus.\n", "start_paragraph_id": 48, "start_character": 0, "end_paragraph_id": 48, "end_character": 787, "text": "FeLV and feline immunodeficiency virus (FIV) and are sometimes mistaken for one another though the viruses differ in many ways. Although they are both in the same retroviral subfamily (orthoretrovirinae), they are classified in different genera (FeLV is a gamma-retrovirus and FIV is a lentivirus like HIV-1). Their shapes are quite different: FeLV is more circular while FIV is elongated. The two viruses are also quite different genetically, and their protein coats differ in size and composition. Although many of the diseases caused by FeLV and FIV are similar, the specific ways in which they are caused also differs. 
Also, while the feline leukemia virus may cause symptomatic illness in an infected cat, an FIV infected cat can remain completely asymptomatic its entire lifetime.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8777", "title": "DNA virus", "section": "Section::::Phylogenetic relationships.:ds DNA viruses.:Herpesviruses and caudoviruses.\n", "start_paragraph_id": 228, "start_character": 0, "end_paragraph_id": 228, "end_character": 422, "text": "A common origin for the herpesviruses and the caudoviruses has been suggested on the basis of parallels in their capsid assembly pathways and similarities between their portal complexes, through which DNA enters the capsid. These two groups of viruses share a distinctive 12-fold arrangement of subunits in the portal complex. A second paper has suggested an evolutionary relationship between these two groups of viruses.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41979999", "title": "Little cherry disease", "section": "Section::::Causes.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 215, "text": "Due to considerable genetic variation among strains, isolates from both viruses have previously been designated as belonging to new and separate species before being reassigned to one of the two recognized viruses.\n", "bleu_score": null, "meta": null } ] } ]
null
15o7bu
What spoken language carries the most information per sound or time of speech?
[ { "answer": "[Here's](_URL_0_) a paper on information density vs speed of speech, done by the University of Lyon. I am not sure how accurate their methods are, but they seem to believe that some languages convey more information per syllable and for 5 out of 7 languages, that ones with lower information density are spoken faster. Note that the sample size was only 59 and only compared how fast 20 different texts were read out, all silences that lasted longer than 150 ms were edited out as well.", "provenance": null }, { "answer": "I remember reading a couple of articles about this a while back.\n\nI tried to find the article, and I found it: _URL_0_\n\nHere's a link to a paper, too:\n_URL_1_\n\nBut I'm not a linguist. Maybe you should wait around for a more informed response.", "provenance": null }, { "answer": "When dealing with natural language (as opposed to 'heads vs. tails') it's quite difficult to count the information encoded in an utterance. Words have connotations, not just single simple meanings, and as protagonic mentioned briefly, there's more to a sentence than just the whole of its parts - pragmatics deals with the context of the utterance, the common ground shared by the interlocutors, prior discourse, and a bunch of other things.\n\nThe study linked to by Lurker378, while interesting, is notably restricted to reading a set sample text. It can't really tell us much about information-conveying strategies employed by native speakers under normal conversational conditions. And the one thing it *might* cue us into is that speech rates *might* differ depending on information conveyance rates. Shooting from the hip here, but it's possible that there might be a limit to information encoding/decoding in the brain that impels a cap on information conveyed over time via natural language.\n\nIt's a valid question, but do know that it's not easily answered, and anyone who provides a simple answer (\"Korean does it fastest!\") is oversimplifying or misleading you.", "provenance": null }, { "answer": "This is one of those topics best learned about in via audio IMO.\n\n[Lexicon Valley](_URL_0_):, a terrific podcast from Slate with the excellent Bob Garfield (of [NPRs On The Media](_URL_1_), my favorite news source in any medium) at the helm, did [a great episode on basally exactly this topic.](_URL_2_)", "provenance": null }, { "answer": "I noticed that with Latin it is possible to use a lot less words than we use to, but on the other hand a good writer like Virgil could also use 3 sentences just to say \"the next day\".", "provenance": null }, { "answer": "Whilst I have neither the qualification or resources to give a concrete answer, I found [this article](_URL_1_) on an artificially created language, Ithkuil. It was designed to be as minimal as possible whilst still expressing much information, and is an interesting read on that subject.\n\n[wiki](_URL_0_) and [grammar reference](_URL_2_)\n\n > A sentence like “On the contrary, I think it may turn out that this rugged mountain range trails off at some point” becomes simply “Tram-mļöi hhâsmařpţuktôx.”", "provenance": null }, { "answer": "I wonder how much metaphor plays a role in this e.g. 
Pyrrhic Victory.", "provenance": null }, { "answer": "You might want to re-ask this on r/linguistics although you'll probably get much the same sort of answers.\n\nAs a linguist, I'd say the language I've worked with that has the most staggering amount of information density would be Navajo and related languages, but they're spoken quite slowly as compared to languages that Indo-European speakers are used to. Generally there does seem to be an inverse relationship between semantic density and speed of utterance.\n", "provenance": null }, { "answer": " > Is it possible that some languages allow to convey more information per sound? Per minute of speech? What are these languages?\n\nSign language! No sound, no speech, all of the information.", "provenance": null }, { "answer": "Could words like shit and fuck and other 'curse' words be considered a zip file language, where you wanna say so much but it's just faster to say !@#@$ and it still gets the message across?", "provenance": null }, { "answer": "Conversational English uses a lot of idioms and metaphors compared to other languages, which would suggest less information per syllable. That doesn't mean English sentences couldn't be formed which convey a lot of information per syllable, but in practice that's not the case.", "provenance": null }, { "answer": "I just read a [fascinating article](_URL_0_) about a synthetic language called Ithkuil, which aims to be \"an idealized language whose aim is the highest possible degree of logic, efficiency, detail, and accuracy in cognitive expression via spoken human language.\" Long, but highly relevant and recommended.\n\nFor instance:\n\n > Ideas that could be expressed only as a clunky circumlocution in English can be collapsed into a single word in Ithkuil. A sentence like “On the contrary, I think it may turn out that this rugged mountain range trails off at some point” becomes simply “Tram-mļöi hhâsmařpţuktôx.” ", "provenance": null }, { "answer": "I know you're looking for a succinct answer, but if you'd like to learn about this topic, I'd highly recommend James Gleick's [The Information](_URL_0_). It's not a short read (544 pages), but it answers your question perfectly, and gives a great background in information theory. Very accessible, and very enjoyable.", "provenance": null }, { "answer": "Possibly [Ithkuil](_URL_0_)? Probably not what you're looking for since it's an artificial language, but technically it is spoken by a very small number of fanatics. In any event, the article I linked is pretty interesting.", "provenance": null }, { "answer": "Great podcast on the subject, which I think discusses the same research in other comments: _URL_0_", "provenance": null }, { "answer": "I believe Sanskrit is highly compressed. Words like to, for, by, into, 's, hey, hi, hello do not exist in this ancient language. Also there is a form between singular and plural. I can't exactly explain this. Translating 10 words from Sanskrit into Hindi/Gujarati/Marathi/Bengali (Prakrit-based Indian languages) can create a full paragraph of 30 words.", "provenance": null }, { "answer": "Even flipping a coin and saying heads or tails can carry more than one bit of information. How you say the word can convey things such as enthusiasm or boredom.", "provenance": null }, { "answer": "English haiku poetry is an example of these \"information-per-syllable\" differences between languages. 
The traditional 5-7-5 syllable structure creates, forces, and/or allows for a far more verbose poem in English than in Japanese (ironically running contrary to a main cornerstone of haiku). ", "provenance": null }, { "answer": "One issue we run into with this is what I like to think of as 'auction speech'. If you have ever heard a professional auctioneer doing his thing, you know what I'm talking about. They can string together words at an ungodly speed (I lack data, please provide some if you have any). However the average person is going to really struggle with comprehending what it is that they are saying, as they cannot process that information so quickly. So it all ends up being dependent upon both the ability to interpret information at high speeds and the ability to speak very quickly, unless I misunderstood your question (which is not only very possible, but highly likely.)", "provenance": null }, { "answer": "The New Yorker has a piece about a guy that created his own language with the goal of condensing thought into as little space as possible. \n\n_URL_0_\n\n\"Ideas that could be expressed only as a clunky circumlocution in English can be collapsed into a single word in Ithkuil. A sentence like “On the contrary, I think it may turn out that this rugged mountain range trails off at some point” becomes simply “Tram-mļöi hhâsmařpţuktôx.”\n", "provenance": null }, { "answer": "I'm no scientist, but perhaps using programs like operating systems that have been translated into hundreds of different languages, and judging by how much data is required to provide each translation, would be a good way to judge how efficient languages are (in written form). This could also be applied to Wikipedia articles and such things. \n\nJust a thought.", "provenance": null }, { "answer": "Didn't someone postulate once that the reason Germanic-language speakers had pretty much dominant success over Latin speakers was the information per sound?\n\n", "provenance": null }, { "answer": "The problem is one of definition; if a language relies more on context, it can convey 'more' information in less space. But we usually consider that to be a 'higher entropy' language. This is very important for things like machine translation, because it is very difficult to translate from a higher to lower entropy language (lowering entropy is always hard). Whereas the inverse is not so hard. Here is a specific example:\n\nJapanese (high entropy, context reliant): taberu?\n\nEnglish: Do you wanna eat some? Is he going to eat some? Is the cat going to eat it? \n\nThere is literally no way to tell from the sentence as given and it is a totally natural, everyday Japanese sentence. In contrast, each one of the English sentences could easily be translated into Japanese by a machine. It would sound stiff, but the meaning could be accurately conveyed. \n\nSo, although considered a high entropy language, Japanese is actually communicating *more* with substantially less, as it is simply relying more on inference and context.\n", "provenance": null }, { "answer": "Also take into account that different dialects or accents of the same language are not spoken at the same pace.", "provenance": null }, { "answer": "Bear in mind that when it comes to human language, 'information' is a difficult proposition to pin down. : /\n \n\n \n", "provenance": null }, { "answer": "As someone who has had to modify sites to accommodate the length of French text, I can say for sure that French is not the answer. 
", "provenance": null }, { "answer": "If you look at some of the multi-language instructions that come with many products it seems that English requires less words/space than other languages. ", "provenance": null }, { "answer": "I'm going to add some information from the signal processing / voice compression world. Right now, the upper bound on the amount of information a voice can transfer (without regards to context) is approximately 350-400 bits per second (2.5-3 kilobytes per minute). This is of course beyond context, and can be narrowed down when limited to a certain language. [Lurker378's post](_URL_0_) links to a study which limits it even further, but I am not sure how effectively. \n\nAs for knowledge and ideas? When an ex girlfriend asked me \"remember us at our best?\", swirling through my head where pictures, videos, even conversations memorized; emotions, who I was at the time, who she was. The bedsheets in her grimy student apartment, the way her boobs looked when we were under the sheets. How we smoked pot in bed, what it's like to have sex when so high on hormones, love and pot. Each of these also has a context.\n\nThe amount sent depends on the listener; there are levels of recursion to depth of information, since we work not according to simple definitions like a computer, but rather through learning. Fire for instance; every baby touches something that is too hot, and is hurt. This sends a rush of dopamine into a very impressionable brain, causing further acceleration in the learning process. Next, when a child sees a fire again, he remembers that touching it hurt. But now he adds an added connotation; fear of pain. The learning process is very tiered, and it goes back to very early parts in the childhood and even genetically encoded information (as assumed by Chomsky about languages, for instance). So a single phrase can contain as much information as the brain processes in order to understand it.\n\nQuite frankly, we do not know enough to quantify this. We're laughably too ignorant as to how the brain actually works.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "23247", "title": "Phonology", "section": "Section::::Analysis of phonemes.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 482, "text": "Part of the phonological study of a language therefore involves looking at data (phonetic transcriptions of the speech of native speakers) and trying to deduce what the underlying phonemes are and what the sound inventory of the language is. The presence or absence of minimal pairs, as mentioned above, is a frequently used criterion for deciding whether two sounds should be assigned to the same phoneme. However, other considerations often need to be taken into account as well.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7412607", "title": "Pangloss Collection", "section": "Section::::Principles.:A sound archive with synchronized transcriptions.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 382, "text": "For the science of linguistics, language is first and foremost spoken language. The medium of spoken language is sound. The Pangloss collection gives access to original recordings simultaneously with transcriptions and translations, as a resource for further research. 
After being recorded in its cultural context, texts have been transcribed in collaboration with native speakers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "92200", "title": "Utterance", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 381, "text": "In spoken language analysis, an utterance is the smallest unit of speech. It is a continuous piece of speech beginning and ending with a clear pause. In the case of oral languages, it is generally but not always bounded by silence. Utterances do not exist in written language, only their representations do. They can be represented and delineated in written language in many ways.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "42838322", "title": "Kinship Terms: A Numerical Variation", "section": "Section::::Numerical variation.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 404, "text": "Humans have a set of distinctive features (known as phonetic features), and by this set they can produce any speech sound (phoneme) of any human language. BUT NOTE: a particular language has limited features and phonemes, thus a speaker of language A may not produce phonemes of language B. In this way a particular language is a \"Constraint\" on the ability of humans to produce many more speech sounds.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37259787", "title": "Forensic speechreading", "section": "Section::::The law.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 602, "text": "While lipread speech can carry useful speech information, it is inherently less accurate than (clearly) heard speech because many distinctive features of speech are produced by actions of the tongue within the oral cavity and are not visible. This is a limitation imposed by speech itself, not the expertise of the speechreader. It is the main reason why the accuracy of a speechreader working on a purely visual record cannot be considered wholly reliable, however skilled they may be and irrespective of hearing status. The type of evidence and the utility of such evidence varies from case to case.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38523090", "title": "Statistical learning in language acquisition", "section": "Section::::Lexical Acquisition.:Original Findings.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 1169, "text": "It is a well-established finding that, unlike written language, spoken language does not have any clear boundaries between words; spoken language is a continuous stream of sound rather than individual words with silences between them. This lack of segmentation between linguistic units presents a problem for young children learning language, who must be able to pick out individual units from the continuous speech streams that they hear. One proposed method of how children are able to solve this problem is that they are attentive to the statistical regularities of the world around them. For example, in the phrase \"pretty baby,\" children are more likely to hear the sounds \"pre\" and \"ty\" together during the entirety of the lexical input around them than they are to hear the sounds \"ty\" and \"ba\" together. 
In an artificial grammar learning study with adult participants, Saffran, Newport, and Aslin found that participants were able to locate word boundaries based only on transitional probabilities, suggesting that adults are capable of using statistical regularities in a language-learning task. This is a robust finding that has been widely replicated.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23060403", "title": "Motor theory of speech perception", "section": "Section::::Support.:Nonauditory gesture information.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 216, "text": "If speech is identified in terms of how it is physically made, then nonauditory information should be incorporated into speech percepts even if it is still subjectively heard as \"sounds\". This is, in fact, the case.\n", "bleu_score": null, "meta": null } ] } ]
null
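Several answers above reason about information in bits, so a concrete toy measure may help. This is a minimal Python sketch (an editorial addition, with the big caveat raised in the thread: it counts only character frequencies and ignores the context and pragmatics that carry much of a language's meaning).

```python
from collections import Counter
from math import log2

def entropy_bits_per_char(text: str) -> float:
    """Zeroth-order Shannon entropy: -sum(p * log2(p)) over character frequencies."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

sample = "the quick brown fox jumps over the lazy dog"
print(f"{entropy_bits_per_char(sample):.2f} bits per character")
```

Real cross-language comparisons, like the Lyon study cited in the first answer, instead measure syllable rate against an information-density estimate per syllable.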
17inkq
bohr's theory of the hydrogen atom
[ { "answer": "Basically, the atom was understood like a solar system. Electrons orbiting the nucleus. Bohr suggested that the electrons could only be in very specific orbits and light was emitted when it went from a high to a lower orbit and light was absorbed when it went from a lower to a higher. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2796131", "title": "Introduction to quantum mechanics", "section": "Section::::Development of modern quantum mechanics.\n", "start_paragraph_id": 83, "start_character": 0, "end_paragraph_id": 83, "end_character": 268, "text": "Schrödinger was able to calculate the energy levels of hydrogen by treating a hydrogen atom's [[electron]] as a classical wave, moving in a well of electrical potential created by the proton. This calculation accurately reproduced the energy levels of the Bohr model.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14225", "title": "Hydrogen atom", "section": "Section::::Theoretical analysis.:Schrödinger equation.:Wavefunction.\n", "start_paragraph_id": 64, "start_character": 0, "end_paragraph_id": 64, "end_character": 451, "text": "The solutions to the Schrödinger equation for hydrogen are analytical, giving a simple expression for the hydrogen energy levels and thus the frequencies of the hydrogen spectral lines and fully reproduced the Bohr model and went beyond it. It also yields two other quantum numbers and the shape of the electron's wave function (\"orbital\") for the various possible quantum-mechanical states, thus explaining the anisotropic character of atomic bonds.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "593115", "title": "Hydrogen hypothesis", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 373, "text": "The hydrogen hypothesis is a model proposed by William F. Martin and Miklós Müller in 1998 that describes a possible way in which the mitochondrion arose as an endosymbiont within an archaeon (without doubts classified as prokaryote at then times), giving rise to a symbiotic association of two cells from which the first eukaryotic cell could have arisen (symbiogenesis).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1206", "title": "Atomic orbital", "section": "Section::::History.:Bohr atom.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 1070, "text": "The Bohr model was able to explain the emission and absorption spectra of hydrogen. The energies of electrons in the \"n\" = 1, 2, 3, etc. states in the Bohr model match those of current physics. However, this did not explain similarities between different atoms, as expressed by the periodic table, such as the fact that helium (two electrons), neon (10 electrons), and argon (18 electrons) exhibit similar chemical inertness. Modern quantum mechanics explains this in terms of electron shells and subshells which can each hold a number of electrons determined by the Pauli exclusion principle. Thus the \"n\" = 1 state can hold one or two electrons, while the \"n\" = 2 state can hold up to eight electrons in 2s and 2p subshells. In helium, all \"n\" = 1 states are fully occupied; the same for \"n\" = 1 and \"n\" = 2 in neon. 
In argon the 3s and 3p subshells are similarly fully occupied by eight electrons; quantum mechanics also allows a 3d subshell but this is at higher energy than the 3s and 3p in argon (contrary to the situation in the hydrogen atom) and remains empty.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "199121", "title": "Rydberg constant", "section": "Section::::Occurrence in Bohr model.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 425, "text": "The Bohr model explains the atomic spectrum of hydrogen (see hydrogen spectral series) as well as various other atoms and ions. It is not perfectly accurate, but is a remarkably good approximation in many cases, and historically played an important role in the development of quantum mechanics. The Bohr model posits that electrons revolve around the atomic nucleus in a manner analogous to planets revolving around the sun.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2796131", "title": "Introduction to quantum mechanics", "section": "Section::::Copenhagen interpretation.:Application to the hydrogen atom.\n", "start_paragraph_id": 115, "start_character": 0, "end_paragraph_id": 115, "end_character": 535, "text": "Schrödinger was able to calculate the energy levels of hydrogen by treating a hydrogen atom's electron as a wave, represented by the \"wave function\", in an electric potential well created by the proton. The solutions to Schrödinger's equation are distributions of probabilities for electron positions and locations. Orbitals have a range of different shapes in three dimensions. The energies of the different orbitals can be calculated, and they accurately match the energy levels of the Bohr model.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3071612", "title": "Hydrogen spectral series", "section": "Section::::Physics.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 625, "text": "A hydrogen atom consists of an electron orbiting its nucleus. The electromagnetic force between the electron and the nuclear proton leads to a set of quantum states for the electron, each with its own energy. These states were visualized by the Bohr model of the hydrogen atom as being distinct orbits around the nucleus. Each energy state, or orbit, is designated by an integer, as shown in the figure. The Bohr model was later replaced by quantum mechanics in which the electron occupies an atomic orbital rather than an orbit, but the allowed energy levels of the hydrogen atom remained the same as in the earlier theory.\n", "bleu_score": null, "meta": null } ] } ]
null
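The Bohr-model claims quoted above can be condensed into two textbook formulas (a standard-physics sketch, not text from the cited sources): the quantized energy levels of hydrogen, and the photon energy released when an electron drops between them.

```latex
% Quantized energy levels of the hydrogen atom (n = 1, 2, 3, ...):
E_n = -\frac{13.6\,\mathrm{eV}}{n^2}

% Photon emitted when the electron drops from orbit n_i to a lower orbit n_f:
h\nu = E_{n_i} - E_{n_f} = 13.6\,\mathrm{eV}\left(\frac{1}{n_f^2} - \frac{1}{n_i^2}\right)
```

Plugging in n_i = 3 and n_f = 2 gives about 1.89 eV, the red Balmer line near 656 nm, one of the hydrogen spectral lines mentioned in the provenance texts.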
2wswt4
Did Moses exist and was there an exodus of people from Egypt corresponding to the story?
[ { "answer": "This has previously been addressed. _URL_0_", "provenance": null }, { "answer": "I strongly recommend (edit: [this video lecture](_URL_1_) is better than the one I initially recommended) [this video series](_URL_10_) for a synopsis of what's currently known and believed about the exodus and the hebrews. \n\nAs for further reading, try /r/AcademicBiblical:\n\n* [The Exodus (please help!)](_URL_13_)\n\n* [Did Moses write the Torah and why do atheist argue he didn't exist?](_URL_0_)\n\n* [Is the scholarly view about the authorship of the Pentateuch and Isaiah due to a bias against prophecy? Or are there valid reasons why Moses or Isaiah didn't write](_URL_6_)\n\n* [Scholarly consensus (or majority belief) on the Bible authenticity?](_URL_2_)\n\n* [J, P, E, D, etc. is it still the scholarly consensus of the Pentateuch's composition?](_URL_14_)\n\n* [How do scholars determine the age and origins of Old Testament stories?](_URL_17_)\n\n* [Isaiah was written by multiple authors. How many other Biblical texts have multiple authors or which texts do you suspect have multiple authors?](_URL_8_)\n\n* [Was the Exodus a real historical event or how are we generally meant to understand it?](_URL_16_)\n\nThere's also a lot on /r/AskHistorians:\n\n* [Does the Egyptian history record the ten plagues mentioned in the Bible?](_URL_15_)\n\n* [Historicity of Moses and Abraham](_URL_11_)\n\n* [Is there any reference to Moses, the plagues, or the Exodus in Ancient Egyptian writings?](_URL_5_)\n\n* [It's often said that the Pharaoh in the book of Exodus is Ramses II. How accurate is this, and why is Ramses II the go-to for our conception of the historical Pharaoh in the Exodus?](_URL_18_)\n\n* [My Orthodox Jewish Rabbis, insist that the torah scrolls they read from (five books of moses) are exactly as they were written when given to the Jews by God on Mt. Sinai. Is this possible?](_URL_9_)\n\n* [Do we know who the 13 tribes of Israel were?](_URL_4_)\n\n* [Besides the Bible, are there other historical records of the Jewish being enslaved by the Egyptians?](_URL_7_)\n\n* [What is the oldest Biblical story that is also mentioned by non-Jewish primary sources?](_URL_3_)\n\nWhen you search a topic, use google instead of reddit. Try \"site:_URL_12_ searchterm searchterm\". You'll usually get a few hits. (Obviously substitute \"askhistorians\" with whatever sub you're searching.) And please pop in over at /r/AcademicBiblical if you have further questions! We only bite a little.", "provenance": null }, { "answer": "Ultimately you aren't going to get a satisfying answer because the sole piece of evidence is the biblical book itself, and so your take on the historicity of the sorry is more or less entirely dependent on your take on the historicity of the text, or even ancient texts in general. One person may consider important events like that would be remembered, and this we can take it as broadly accurate. Others think that such tales from very distant times are invented to serve the purpose of the tellers. 
Others think there is a kernel of truth surrounded by embellishment.\n\nPersonally I like Ian Morris' take: there is a memory of the highly multicultural world of the late Bronze Age, but we can't really say anything about the event itself", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "55489052", "title": "Sources and parallels of the Exodus", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 601, "text": "Modern archaeologists believe that the Israelites were indigenous to Canaan and were never in ancient Egypt, and if there is any historical basis to the Exodus it can apply only to a small segment of the population of Israelites at large. Nevertheless, there is also a general understanding that something must lie behind the traditions, even if Moses and the Exodus narrative belong to collective cultural memory rather than to history. According to Avraham Faust \"most scholars agree that the narrative has a historical core, and that some of the highland settlers came, one way or another, from Egypt.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1935334", "title": "History of the Jews in Egypt", "section": "Section::::Ancient times.:Genesis and Exodus.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 1252, "text": "The Book of Genesis and Book of Exodus describe a period of Hebrew servitude in ancient Egypt, during decades of sojourn in Egypt, the escape of well over a million Israelites from the Delta, and the three-month journey through the wilderness to Sinai. This episode is not corroborated by any historical evidence and is regarded by scholars to be fictitious. Israelites first appear in the archeological record on the Merneptah Stele from between 1208–3 BCE at the end of the Bronze Age. A reasonably Bible-friendly interpretation is that they were a federation of Habiru tribes of the hill-country around the Jordan River. Presumably, this federation consolidated into the kingdom of Israel, and Judah split from that, during the dark age that followed the Bronze. The Bronze Age term \"Habiru\" was less specific than the Biblical \"Hebrew\". The term referred simply to Levantine nomads, of any religion or ethnicity. Mesopotamian, Hittite, Canaanite, and Egyptian sources describe them largely as bandits, mercenaries, and slaves. Certainly, there were some Habiru slaves in ancient Egypt, but native Egyptian kingdoms were not heavily slave-based. The Exodus story is considered to be historically inaccurate, although important to various religions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1823869", "title": "The Exodus", "section": "Section::::Historicity.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 699, "text": "According to scholars, the Exodus story is best understood as a myth, and specifically the founding myth of the Jewish faith, which explains its origins and provides an ideological foundation for Jewish culture and institutions. There is no indication that the Israelites lived in ancient Egypt, and the Sinai Peninsula shows no sign of occupation for the 2nd millennium BCE. Israel evolved within Canaan from native Canaanite roots. The modern scholarly consensus is that the figure of Moses is mythical, and while, as William G. 
Dever writes, \"a Moses-like figure may have existed somewhere in the southern Transjordan in the mid-late 13th century B.C.\", archaeology cannot confirm his existence.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "304141", "title": "Jewish history", "section": "Section::::Ancient Jewish history (c. 1500 BCE – 63 BCE).:Ancient Israelites (until c. 586 BCE).\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 1233, "text": "However, archaeology reveals a different story of the origins of the Jewish people: they did not necessarily leave the Levant. The archaeological evidence of the largely indigenous origins of Israel in Canaan, not Egypt, is \"overwhelming\" and leaves \"no room for an Exodus from Egypt or a 40-year pilgrimage through the Sinai wilderness\". Many archaeologists have abandoned the archaeological investigation of Moses and the Exodus as \"a fruitless pursuit\". A century of research by archaeologists and Egyptologists has arguably found no evidence that can be directly related to the Exodus narrative of an Egyptian captivity and the escape and travels through the wilderness, leading to the suggestion that Iron Age Israel—the kingdoms of Judah and Israel—has its origins in Canaan, not in Egypt: The culture of the earliest Israelite settlements is Canaanite, their cult-objects are those of the Canaanite god El, the pottery remains in the local Canaanite tradition, and the alphabet used is early Canaanite. Almost the sole marker distinguishing the \"Israelite\" villages from Canaanite sites is an absence of pig bones, although whether this can be taken as an ethnic marker or is due to other factors remains a matter of dispute.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23059", "title": "Passover", "section": "Section::::The biblical narrative.:In the Book of Exodus.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 316, "text": "In the Book of Exodus, the Israelites are enslaved in ancient Egypt. Yahweh, the god of the Israelites, appears to Moses in a burning bush and commands Moses to confront Pharaoh. To show his power, Yahweh inflicts a series of 10 plagues on the Egyptians, culminating in the 10th plague, the death of the first-born.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "60847", "title": "Land of Goshen", "section": "Section::::Goshen in Egypt.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 265, "text": "Approximately four hundred and thirty years later, Moses was called to lead the Israelites out of Egypt, from Goshen to Succoth, the first waypoint of the Exodus. They pitched at 41 locations crossing the Nile Delta, to the last station being the \"plains of Moab\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23059", "title": "Passover", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 500, "text": "In the Book of Exodus, God helped the Israelites escape from slavery in ancient Egypt by inflicting ten plagues upon the Egyptians before the Pharaoh would release the Israelite slaves. The last of the plagues was the death of the Egyptian first-born. 
The Israelites were instructed to mark the doorposts of their homes with the blood of a slaughtered spring lamb and, upon seeing this, the spirit of the Lord knew to \"pass over\" the first-born in these homes, hence the English name of the holiday.\n", "bleu_score": null, "meta": null } ] } ]
null
9ykt5o
How did chemists explain reactions before the discovery of the atom?
[ { "answer": "To put it bluntly, they didn’t. The first attempts at explaining the states and reactions of matter led to the postulates that theorized the existence of the atom, so they were mutually dependent. Reactions such as fire and the creation of alloys were found empirically, but never studied like they are now. \n\nThere were early theories as to what composed matter, such as the idea that all matter consists of fire, water, earth, etc. But such theories never tried to “explain” reactions other than saying that things were how they were.", "provenance": null }, { "answer": "Here's one example:\n\n_URL_0_\n\n\"Phlogiston theory states that phlogisticated substances are substances that contain phlogiston and dephlogisticate when burned. Dephlogisticating is the process of releasing stored phlogiston, which is absorbed by the air. Growing plants then absorb this phlogiston, which is why air does not spontaneously combust and also why plant matter burns as well as it does.\"\n\nSurprisingly accurate, if you ignore the nonsense.", "provenance": null }, { "answer": "The classical philosophers argued about three forces of life and other reactions: vitalism, purpose, and atomism. \nVitalism where objects and creatures had life forces and heat inside them and when that ran out they died or burned up (instead of the other way around).\nPlato spoke of forms which pointed to things life and objects doing what they did because it was their purpose and made that way.\nOthers thought of a mechanistic world of atoms and void, but with with them being of infinite shapes and sizes.\nSource; Life’s Ratchet be Peter Hoffman", "provenance": null }, { "answer": "Their explanations were very ambiguous. For example, there was knowledge of what an acid was before the discovery of the atom based on physical and chemical properties, but there wasn't an explanation of what an acidic molecule was like. So a reaction would be run and the scientist would say they made an acid based on testing the properties. A lot of the analysis of products was done by comparing melting point, acidity, relative reactivity, and sometimes even taste (!) to known natural chemicals.", "provenance": null }, { "answer": "Like many subjects, chemistry has evolved over time. \n\nInitially things started out as a kind of mythological understanding (i.e. ancient alchemy.) A lack of true understanding lead to people trying to turn various things into gold. We now know that you can't do that, but back in antiquity people had no idea. They knew if you mixed A with B you'd get C. So in theory, you could mix D with E to get gold. You just need to figure out what D and E were. \n\nThe ancient civilizations understood that everything was made up of stuff. Originally this was as simple as fire, earth, water, air. Then people started to understand that stuff was made up of other smaller stuff. The word \"atom\" comes from the latin \"atomos\" meaning indivisible or uncuttable. Atoms therefore became name for the smallest building block of everything. (We now know this isn't exactly correct. Atoms can actually be further divided.) We went from saying things were made of fire and water to understanding that there were other things (i.e. elements.) \n\nThere were varied explanations as to why certain things worked but no real concrete explanation. As time goes on, people start focusing on the *why.* Experiments were designed to test these theories. For example, wind. We can't really see it, but it is there. 
We feel it on our skin and see it move the leaves on trees. Well, why does wind move trees? It's probably not the breath of Zeus or the wrath of Athena. There must be something that makes the leaves move. What are those things? Atoms! What do those look like? There have been many explanations of this: Bohr, Rutherford, Thomson, etc. all had theories. Continued exploration eventually figured out that atoms are positive centers with orbiting negative particles and so on.\n\nThe more we learned about the composition of the parts of compounds, the more we understood how they work. The more we understood, the more we could explain.\n\nSo in short, they just kinda BS'd it. Fake it til you make it irl. If you want a more detailed explanation, grab any gen chem textbook. One of the first chapters in most textbooks will cover the discovery of the atom from the simplest models to the current quantum mechanical theory.\n\nFor example: Lead white is a compound that has been synthetically made and used since antiquity in white paints. The preparation for this compound was described as early as 300 BC by Theophrastus. Theophrastus' description says lead was placed in a vessel with vinegar and left until it formed a crust. The crust was then scraped off and the lead was placed back in the vessel and the process repeated until there was no more lead. The scraped-off crusts were dried and powdered, and bam! You've got paint pigment. Why does this work? They had no idea, but it's how you get white lead. Now we know it's a process called corrosion (aka the formation of a metal oxide.)\n\ntl;dr Fake it til you make it. Explanations were given based on the information of the time and evolved as the science became more understood.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "5180", "title": "Chemistry", "section": "Section::::History.:Of discipline.\n", "start_paragraph_id": 108, "start_character": 0, "end_paragraph_id": 108, "end_character": 778, "text": "At the turn of the twentieth century the theoretical underpinnings of chemistry were finally understood due to a series of remarkable discoveries that succeeded in probing and discovering the very nature of the internal structure of atoms. In 1897, J.J. Thomson of Cambridge University discovered the electron and soon after the French scientist Becquerel as well as the couple Pierre and Marie Curie investigated the phenomenon of radioactivity. In a series of pioneering scattering experiments Ernest Rutherford at the University of Manchester discovered the internal structure of the atom and the existence of the proton, classified and explained the different types of radioactivity and successfully transmuted the first element by bombarding nitrogen with alpha particles.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38890", "title": "Natural science", "section": "Section::::Branches of natural science.:Chemistry.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 310, "text": "Early experiments in chemistry had their roots in the system of Alchemy, a set of beliefs combining mysticism with physical experiments. 
The science of chemistry began to develop with the work of Robert Boyle, the discoverer of gas, and Antoine Lavoisier, who developed the theory of the Conservation of mass.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "200201", "title": "Lactic acid fermentation", "section": "Section::::History.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 1861, "text": "Several chemists discovered during the 19th century some fundamental concepts of the domain of organic chemistry. One of them for example was the French chemist Joseph Louis Gay-Lussac, who was especially interested in fermentation processes, and he passed this fascination to one of his best students, Justus von Liebig. With a difference of some years, each of them described, together with colleague, the chemical structure of the lactic acid molecule as we know it today. They had a purely chemical understanding of the fermentation process, which means that you can’t see it using a microscope, and that it can only be optimized by chemical catalyzers. It was then in 1857 when the French chemist Louis Pasteur first described the lactic acid as the product of a microbial fermentation. During this time, he worked at the university of Lille, where a local distillery asked him for advice concerning some fermentation problems. Per chance and with the badly equipped laboratory he had at that time, he was able to discover that in this distillery, two fermentations were taking place, a lactic acid one and an alcoholic one, both induced by some microorganisms. He then continued the research on these discoveries in Paris, where he also published his theories that presented a stable contradiction to the purely chemical version represented by Liebig and his followers. Even though Pasteur described some concepts that are still accepted nowadays, Liebig refused to accept them until his death in 1873. But even Pasteur himself wrote that he was “driven” to a completely new understanding of this chemical phenomenon. Even if Pasteur didn’t find every detail of this process, he still discovered the main mechanism of how the microbial lactic acid fermentation works. He was for example the first to describe fermentation as a “form of life without air.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17870647", "title": "Name reaction", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 389, "text": "As organic chemistry developed during the 20th century, chemists started associating synthetically useful reactions with the names of the discoverers or developers; in many cases, the name is merely a mnemonic. Some cases of reactions that were not really discovered by their namesakes are known. Examples include the Pummerer rearrangement, the Pinnick oxidation and the Birch reduction.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2027451", "title": "Jeremias Benjamin Richter", "section": "Section::::Law of definite proportions (stoichiometry).\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 843, "text": "Evidence for the existence of atoms was the law of definite proportions proposed by him in 1792. Richter found that the ratio by weight of the compounds consumed in a chemical reaction was always the same. It took 615 parts by weight of magnesia (MgO), for example, to neutralize 1000 parts by weight of sulfuric acid. 
From his data, Ernst Gottfried Fischer calculated in 1802 the first table of chemical equivalents, taking sulphuric acid as the standard with the figure 1000. When Joseph Proust reported his work on the constant composition of chemical compounds, the time was ripe for the reinvention of an atomic theory. The law of definite proportions and constant composition do not prove that atoms exist, but they are difficult to explain without assuming that chemical compounds are formed when atoms combine in constant proportions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38890", "title": "Natural science", "section": "Section::::History.:Newton and the scientific revolution (1600–1800).\n", "start_paragraph_id": 67, "start_character": 0, "end_paragraph_id": 67, "end_character": 777, "text": "Significant advances in chemistry also took place during the scientific revolution. Antoine Lavoisier, a French chemist, refuted the phlogiston theory, which posited that things burned by releasing \"phlogiston\" into the air. Joseph Priestley had discovered oxygen in the 18th century, but Lavoisier discovered that combustion was the result of oxidation. He also constructed a table of 33 elements and invented modern chemical nomenclature. Formal biological science remained in its infancy in the 18th century, when the focus lay upon the classification and categorization of natural life. This growth in natural history was led by Carl Linnaeus, whose 1735 taxonomy of the natural world is still in use. Linnaeus in the 1750s introduced scientific names for all his species.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2844", "title": "Atomic theory", "section": "Section::::History.:John Dalton.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 721, "text": "Near the end of the 18th century, two laws about chemical reactions emerged without referring to the notion of an atomic theory. The first was the law of conservation of mass, closely associated with the work of Antoine Lavoisier, which states that the total mass in a chemical reaction remains constant (that is, the reactants have the same mass as the products). The second was the law of definite proportions. First established by the French chemist Joseph Louis Proust in 1799, this law states that if a compound is broken down into its constituent chemical elements, then the masses of the constituents will always have the same proportions by weight, regardless of the quantity or source of the original substance.\n", "bleu_score": null, "meta": null } ] } ]
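The Richter passage above is worth making concrete: an equivalents table like Fischer's is just ratio arithmetic under the law of definite proportions. Below is a minimal Python sketch of that idea. Only the magnesia figure (615 parts by weight per 1000 parts of sulfuric acid) comes from the quoted text; the other table entries and the function name are invented for illustration.

```python
# Fischer-style equivalents table: every base is listed by the parts
# by weight that neutralize 1000 parts by weight of sulfuric acid.
# The magnesia figure is Richter's, per the passage above; the other
# entries are made-up placeholders.
equivalents_per_1000_sulfuric = {
    "magnesia (MgO)": 615,       # from Richter's data
    "hypothetical base A": 800,  # illustrative only
    "hypothetical base B": 1250, # illustrative only
}

def base_needed(base: str, acid_parts: float) -> float:
    """Parts by weight of `base` consumed by `acid_parts` parts of
    sulfuric acid, assuming the ratio is fixed (the law of
    definite proportions)."""
    return acid_parts * equivalents_per_1000_sulfuric[base] / 1000.0

# 250 parts of acid should always consume 615 * 250 / 1000 = 153.75
# parts of magnesia, regardless of where the samples came from.
print(base_needed("magnesia (MgO)", 250))  # -> 153.75
```

The point of the table is exactly this proportionality: once one neutralization ratio is measured, every other quantity follows by scaling, which is what made the later atomic interpretation so natural.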
null
3curlp
How do they determine the longitude on another planet?
[ { "answer": "For a rocky planet like Venus, the prime meridian is chosen to cross through some arbitrarily chosen reference surface feature, like a crater. The direction of increasing longitude is then measured in a direction opposite to the rotation of the planet about its axis. So, for instance, if you look down at Venus's north pole, then the planet rotates clockwise. So from the prime meridian, longitude increases from 0 to 360 degrees in the anti-clockwise direction (i.e., east).\n\n(Note that the convention of the direction of increasing longitude is for non-Earth planets only. Earth rotates anti-clockwise as seen from above the north pole. But Earth longitude also increases in the anti-clockwise direction.)", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "21854", "title": "Navigation", "section": "Section::::Basic concepts.:Longitude.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 1124, "text": "Similar to latitude, the longitude of a place on Earth is the angular distance east or west of the prime meridian or Greenwich meridian. Longitude is usually expressed in degrees (marked with °) ranging from 0° at the Greenwich meridian to 180° east and west. Sydney, for example, has a longitude of about 151° east. New York City has a longitude of 74° west. For most of history, mariners struggled to determine longitude. Longitude can be calculated if the precise time of a sighting is known. Lacking that, one can use a sextant to take a lunar distance (also called \"the lunar observation\", or \"lunar\" for short) that, with a nautical almanac, can be used to calculate the time at zero longitude (see Greenwich Mean Time). Reliable marine chronometers were unavailable until the late 18th century and not affordable until the 19th century. For about a hundred years, from about 1767 until about 1850, mariners lacking a chronometer used the method of lunar distances to determine Greenwich time to find their longitude. A mariner with a chronometer could check its reading using a lunar determination of Greenwich time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "143335", "title": "Celestial navigation", "section": "Section::::Practical navigation.:Longitude.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 752, "text": "Longitude can be measured in the same way. If the angle to Polaris can be accurately measured, a similar measurement to a star near the eastern or western horizons will provide the longitude. The problem is that the Earth turns 15 degrees per hour, making such measurements dependent on time. A measure a few minutes before or after the same measure the day before creates serious navigation errors. Before good chronometers were available, longitude measurements were based on the transit of the moon, or the positions of the moons of Jupiter. For the most part, these were too difficult to be used by anyone except professional astronomers. 
The invention of the modern chronometer by John Harrison in 1761 vastly simplified longitudinal calculation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2882854", "title": "Longitude by chronometer", "section": "", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 917, "text": "Many nations, such as France, have proposed their own reference longitudes as a standard, although the world’s navigators have generally come to accept the reference longitudes tabulated by the British. The reference longitude adopted by the British became known as the Prime Meridian and is now accepted by most nations as the starting point for all longitude measurements. The Prime Meridian of zero degrees longitude runs along the meridian passing through the Royal Observatory at Greenwich, England. Longitude is measured east and west from the Prime Meridian. To determine \"longitude by chronometer,\" a navigator requires a chronometer set to the local time at the Prime Meridian. Local time at the Prime Meridian has historically been called Greenwich Mean Time (GMT), but now, due to international sensitivities, has been renamed as Coordinated Universal Time (UTC), and is known colloquially as \"zulu time\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "50149", "title": "Longitude rewards", "section": "Section::::The longitude problem.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 457, "text": "The Longitude Act only addressed the determination of longitude at sea. Determining longitude reasonably accurately on land was, from the 17th century onwards, possible using the Galilean moons of Jupiter as an astronomical 'clock'. The moons were easily observable on land, but numerous attempts to reliably observe them from the deck of a ship resulted in failure. For details on other efforts towards determining the longitude, see History of longitude.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17617", "title": "Longitude", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 1051, "text": "Longitude is a geographic coordinate that specifies the east–west position of a point on the Earth's surface, or the surface of a celestial body. It is an angular measurement, usually expressed in degrees and denoted by the Greek letter lambda (λ). Meridians (lines running from pole to pole) connect points with the same longitude. By convention, one of these, the Prime Meridian, which passes through the Royal Observatory, Greenwich, England, was allocated the position of 0° longitude. The longitude of other places is measured as the angle east or west from the Prime Meridian, ranging from 0° at the Prime Meridian to +180° eastward and −180° westward. Specifically, it is the angle between a plane through the Prime Meridian and a plane through both poles and the location in question. 
(This forms a right-handed coordinate system with the z-axis (right hand thumb) pointing from the Earth's center toward the North Pole and the x-axis (right hand index finger) extending from the Earth's center through the Equator at the Prime Meridian.)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8916854", "title": "25th meridian west from Washington", "section": "Section::::Longitude in the United States.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 419, "text": "Latitude and longitude uniquely describe the location of any point on Earth. Latitude may be simply calculated from astronomical or solar observation, either at land or sea, interrupted only by cloudy skies. Longitude, on the other hand, requires both astronomical or solar observation and some form of time reference to a longitude reference point. John Harrison produced the first precise marine chronometer in 1761.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8937100", "title": "32nd meridian west from Washington", "section": "Section::::Longitude in the United States.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 419, "text": "Latitude and longitude uniquely describe the location of any point on Earth. Latitude may be simply calculated from astronomical or solar observation, either at land or sea, interrupted only by cloudy skies. Longitude, on the other hand, requires both astronomical or solar observation and some form of time reference to a longitude reference point. John Harrison produced the first precise marine chronometer in 1761.\n", "bleu_score": null, "meta": null } ] } ]
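The chronometer method described in the records above reduces to one conversion: the Earth turns 15 degrees per hour, so the gap between Greenwich time and local (sun) time is the longitude. Here is a minimal Python sketch of that arithmetic; the 16:56 sample reading is invented for illustration, and real practice also corrects for the equation of time and observational error.

```python
from datetime import timedelta

DEG_PER_HOUR = 360 / 24  # the Earth turns 15 degrees per hour

def longitude_from_chronometer(local_noon_gmt: timedelta) -> float:
    """Longitude in degrees (positive = east, negative = west), given
    the Greenwich time at which the sun crossed the local meridian
    (local apparent noon)."""
    hours_late = (local_noon_gmt - timedelta(hours=12)).total_seconds() / 3600
    # The sun reaches meridians further west later in the day, so a
    # local noon *after* 12:00 GMT means a westerly (negative) longitude.
    return -hours_late * DEG_PER_HOUR

# Local noon observed when the chronometer reads 16:56 GMT:
# 4 h 56 m late -> about 74 degrees west (roughly New York).
print(round(longitude_from_chronometer(timedelta(hours=16, minutes=56)), 1))  # -74.0
```

The same 15-degrees-per-hour figure explains why clock accuracy mattered so much: a chronometer error of just four minutes is a full degree of longitude.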
null
1dh9ps
Why is the Tea Party Republican? Why aren't they their own party?
[ { "answer": "Because they need the establishment GOP votes to succeed. If they created their own party, Democrats would win every election ever since the Tea Party guys would vote for the Tea Party candidate, the establishment GOP guys would vote for their candidates, and Democrats would vote for the Democratic candidate. There are more Democrats than Tea Partyists and more Democrats than GOPists, but not always more Democrats than all GOP members.\n\nSince they're funded by the Kochs, Adelson, and other shady billionaires looking for more money, they have to have an effective strategy.", "provenance": null }, { "answer": "In the American system a third party isn't viable. If the Tea Party were to split off from the Republicans, they would either end up dying out (the more likely scenario) or kill off the Republican party, in which case the moderate Republicans would probably not follow them, and if they did, the party would basically look like it does now. ", "provenance": null }, { "answer": "Aethec and Tic-Tac are right, but as an explanation, the US uses a First Past the Post system (idk if you're an American or how much of our political system you know). That means that whoever gets the most votes is the winner of the election, no matter what. Third parties thrive in other countries because many countries have a proportional vote; if a party gets 10% of the vote, they'll get some representation, whereas in America they'll get nothing. And there are all kinds of other tools in other countries like instant run-off (you rank candidates, so if your first choice is very unpopular your vote will move from your first choice to the second). The US doesn't have any of those.\n\nSo it's pretty hard for a third party to break into the two party system. And a two party system is pretty inevitable, if one party breaks up another will take its place soon. Think of what would happen if all the conservatives were in one party and all the liberals were split between two parties. The conservatives would win every single election. If the liberals then merge (even though their ideology may differ to an extent) they can compromise and actually focus on getting a unified agenda passed. That means that parties in the US are divided into factions. For example, the Democrats have a bunch of moderates, but they also have a bunch of progressives. The Republicans have theocrats and libertarians, among others. So if the Tea Party formed their own party, the _best_ case scenario for them is that they never became popular and never did well in any elections. Because if they did do well, they'd sap most of those votes from the Republicans (who they are ideologically similar to) and the Democrats (who they are ideologically opposed to) would always win.\n\nThe kinds of people who vote or run third party are generally the ones who are fed up with both majors and refuse to compromise.", "provenance": null }, { "answer": "[This](_URL_0_) video explains pretty well, and is entertaining. 
What more could you ask for?", "provenance": null }, { "answer": "You and a bunch of your friends decide to vote on what to do this afternoon.\n\nYou want to play baseball.\n\nJimmy wants to play baseball.\n\nMike wants to play soccer.\n\nTommy wants to play soccer.\n\nJenny wants to play dolls.\n\nMary wants to play dolls.\n\nAnne wants to play dolls.\n\n3 people want to play dolls.\n\n2 people want to play soccer.\n\n2 people want to play baseball.\n\nIf everyone votes what they want to do, dolls wins.\n\nBut if the two people who want to play baseball dislike playing with dolls more than they dislike playing soccer, they can switch their vote to soccer and play soccer.\n\nThe Tea Party is like the boys who want to play baseball. They want to play baseball, but they will accept playing soccer over playing with dolls.", "provenance": null }, { "answer": "Republicans hijacked a libertarian movement.", "provenance": null }, { "answer": "The tea party exists as a reaction to Obama. Their wailing and moaning about the debt and deficit was non-existent when Bush was putting 2 wars on the credit card, pushing through an unpaid trillion dollar expansion of medicare, and cutting taxes on the rich to the tune of 350 billion a year. It's basically racist old white people and some young libertarian idiots.\n\nBut the old guard republicans saw it as an opportunity to rebrand. The tea party got astroturfed and is a de facto arm of the republican party, just a lot more conservative, crazy, racist and extreme on social issues. It is not like a great movement got hijacked. \n\nRepublicans just seized on a radical subset of people who thought that Bush was the second coming and could not stand to see the country being \"taken over\" by a socialist darkie. \"We want our country back\" was their rallying cry. Back from who?", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "22754875", "title": "Tea Party movement", "section": "Section::::History.:U.S. elections.\n", "start_paragraph_id": 45, "start_character": 0, "end_paragraph_id": 45, "end_character": 475, "text": "The Tea Party is generally associated with the Republican Party. Most politicians with the \"Tea Party brand\" have run as Republicans. In recent elections in the 2010s, Republican primaries have been the site of competitions between the more conservative, Tea Party wing of the party and the more moderate, establishment wing of the party. The Tea Party has incorporated various conservative internal factions of the Republican Party to become a major force within the party.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22754875", "title": "Tea Party movement", "section": "Section::::Organization.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 626, "text": "The Tea Party movement is not a national political party; polls show that most Tea Partiers consider themselves to be Republicans and the movement's supporters have tended to endorse Republican candidates. Commentators, including Gallup editor-in-chief Frank Newport, have suggested that the movement is not a new political group but simply a re-branding of traditional Republican candidates and policies. 
An October 2010 \"Washington Post\" canvass of local Tea Party organizers found 87% saying \"dissatisfaction with mainstream Republican Party leaders\" was \"an important factor in the support the group has received so far\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28120319", "title": "Tea Party Caucus", "section": "Section::::Ideology.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 482, "text": "The Tea Party Caucus is often viewed as taking conservative positions, and advocating for both social and fiscal conservatism. Analysis of voting patterns confirm that Caucus members are more conservative than other House Republicans, especially on fiscal matters. Voting trends to the right of the median Republican, and Tea Party Caucus members represent more conservative, southern and affluent districts. Supporters of the Tea Party movement itself are largely economic driven.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22754875", "title": "Tea Party movement", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 807, "text": "The Tea Party movement is an American fiscally conservative political movement within the Republican Party. Members of the movement have called for lower taxes, and for a reduction of the national debt of the United States and federal budget deficit through decreased government spending. The movement supports small-government principles and opposes government-sponsored universal healthcare. The Tea Party movement has been described as a popular constitutional movement composed of a mixture of libertarian, right-wing populist, and conservative activism. It has sponsored multiple protests and supported various political candidates since 2009. According to the American Enterprise Institute, various polls in 2013 estimate that slightly over 10 percent of Americans identified as part of the movement.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44615774", "title": "History of conservatism in the United States", "section": "Section::::Since 1990.:2008–present.:Tea Party.\n", "start_paragraph_id": 193, "start_character": 0, "end_paragraph_id": 193, "end_character": 655, "text": "The Tea Party is a conglomerate of conservatives with diverse viewpoints including libertarians and social conservatives. Most Tea Party supporters self-identify as \"angry at the government\". One survey found that Tea Party supporters in particular distinguish themselves from general Republican attitudes on social issues such as same-sex marriage, abortion and illegal immigration, as well as global warming. However, discussion of abortion and gay rights has also been downplayed by Tea Party leadership. In the lead-up to the 2010 election, most Tea Party candidates have focused on federal spending and deficits, with little focus on foreign policy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22754875", "title": "Tea Party movement", "section": "Section::::History.:Current status.\n", "start_paragraph_id": 66, "start_character": 0, "end_paragraph_id": 66, "end_character": 498, "text": "The Tea Party's involvement in the 2012 GOP presidential primaries was minimal, owing to divisions over whom to endorse as well as lack of enthusiasm for all the candidates. 
However, the 2012 GOP ticket did have an influence on the Tea Party: following the selection of Paul Ryan as Mitt Romney's vice-presidential running mate, \"The New York Times\" declared that the once fringe of the conservative coalition, Tea Party lawmakers are now \"indisputably at the core of the modern Republican Party.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4157940", "title": "History of the United States Republican Party", "section": "Section::::The rise of the Tea Party and challenging the Obama administration: 2009–2016.:2012–2016.\n", "start_paragraph_id": 154, "start_character": 0, "end_paragraph_id": 154, "end_character": 486, "text": "The party mood was glum in 2013 and one conservative analyst concluded: It would be no exaggeration to say that the Republican Party has been in a state of panic since the defeat of Mitt Romney, not least because the election highlighted American demographic shifts and, relatedly, the party's failure to appeal to Hispanics, Asians, single women and young voters. Hence the Republican leadership's new willingness to pursue immigration reform, even if it angers the conservative base.\n", "bleu_score": null, "meta": null } ] } ]
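The first-past-the-post logic in the answers above, and the baseball/soccer/dolls example in particular, can be made concrete in a few lines of Python. The ballots below mirror that example; the ranked second choices are an added assumption for illustration, since the original only says the baseball fans prefer soccer to dolls.

```python
from collections import Counter

# Plurality ("first past the post") versus instant-runoff voting,
# using the baseball/soccer/dolls scenario from the answer above.
ballots = (
    [["baseball", "soccer"]] * 2    # baseball fans prefer soccer to dolls
    + [["soccer", "baseball"]] * 2
    + [["dolls"]] * 3               # dolls voters rank nothing else
)

def plurality(ballots):
    """Everyone votes their first choice; the biggest pile wins."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def instant_runoff(ballots):
    """Repeatedly eliminate the weakest option (ties broken
    arbitrarily here) and transfer votes to each ballot's next
    surviving choice, until someone has a majority."""
    remaining = {c for b in ballots for c in b}
    while True:
        tally = Counter()
        for b in ballots:
            choice = next((c for c in b if c in remaining), None)
            if choice is not None:
                tally[choice] += 1
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()) or len(tally) == 1:
            return leader
        remaining.discard(min(tally, key=tally.get))

print(plurality(ballots))       # dolls  (3 beats the 2-2 split vote)
print(instant_runoff(ballots))  # soccer (the baseball votes transfer)
```

Under plurality the 2-2 split hands the win to dolls, which is exactly why a Tea Party split from the Republicans would hand elections to Democrats; under ranked systems the spoiler effect largely disappears.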
null
1o37wh
Did the Japanese ever repulse an island invasion by the US during WWII?
[ { "answer": "No. After the initial Japanese victories in the Pacific War, the United States won every major campaign and battle it entered. Even those cases where the Japanese scored tactical victories were strategic losses. Japan lacked the steel to make good its naval losses, and its cadres of experienced pilots were consumed in battle, making its carriers steadily less effective. The tensest point might have been the Japanese naval victory in the Battle of Savo Island, two days after the Allied landing on Guadalcanal; however, the Japanese did not press their advantage. They might have pulled off a victory there with luck and determination. Even so: it wouldn't have delayed the war long, because a Japanese victory at Guadalcanal would have simply fed the Allied strategy of attrition. \n\nThroughout the Pacific war, the United States brought to bear significant and growing advantages in resources and technology. Japanese air and naval forces became increasingly unable to contest American mobility and logistics. The Americans had the ability to dictate the day of battle, with combined-arms support that the Japanese simply could not respond to. \n\nThe Americans were also able to bypass or \"leapfrog\" many of the more difficult targets; thousands of Japanese soldiers were simply stranded across the Pacific. For instance, the island of New Britain was held by 100,000 Japanese soldiers. An invasion would have been risky and extremely costly. Allied bombers neutralized the island's ports and airfields, leaving it surrounded by a ring of air bases. Limited offensives continued throughout the war, and the Japanese bases there were bombed in training missions for new Allied aircrews. When Australian troops accepted the island's surrender at the end of the war, there were still almost 70,000 Japanese soldiers there.\n\n*Edit: conflated Rabaul with New Britain*", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "14471379", "title": "Makin (islands)", "section": "Section::::History.:World War II.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 407, "text": "Japanese forces occupied the island in December 1941, days after the attack on Pearl Harbor, in order to protect their south-eastern flank from allied counterattacks, and isolate Australia, under the codename Operation FS. On 17–18 August 1942, in order to divert Japanese attention from the Solomon Islands and New Guinea areas, the United States launched a raid on the island, known as the raid on Makin.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1739695", "title": "South Pacific Mandate", "section": "Section::::Pacific War.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 369, "text": "In order to capture the islands from Japan, the United States military employed a \"leapfrogging\" strategy which involved conducting amphibious assaults on selected Japanese island fortresses, subjecting some to air attack only and entirely skipping over others. 
This strategy caused the Japanese Empire to lose control of its Pacific possessions between 1943 and 1945.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "161853", "title": "Harbor Defenses of Manila and Subic Bays", "section": "Section::::World War II.:Fall of Corregidor.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 663, "text": "The Philippines, Burma, and the Dutch East Indies were the last major territories the Japanese invaded in World War II. As Corregidor surrendered, the Battle of the Coral Sea was in progress, turning back a Japanese attempt to seize Port Moresby, New Guinea by sea. By the final surrender on 9 June, the Battle of Midway was over, blunting Japan's naval strength with the loss of four large aircraft carriers and hundreds of skilled pilots. Both of these victories were costly to the US Navy as well, with two aircraft carriers lost, but the United States could replace their ships and train more pilots, and Japan, for the most part, could not do so adequately.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34957285", "title": "Naval Air Facility Adak", "section": "Section::::History.:Adak Army Airfield.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 911, "text": "On June 6/7, 1942, the Japanese Navy and Army participated in the only invasion of the United States during World War II through the Aleutian Islands of Kiska and Attu as part of the Aleutian Islands Campaign. Despite the loss of U.S. soil to a foreign enemy since the War of 1812, the campaign was not considered a priority by the Joint Chiefs of Staff. British Prime Minister Churchill stated that sending forces to attack the Japanese presence there was a diversion from the North African Campaign and Admiral Chester Nimitz saw it as a diversion from his operations in the Central Pacific. Commanders in Alaska, however, believed the Japanese occupiers would establish airbases in the Aleutians that would put major cities along the United States West Coast within range of their bombers and once the islands were again in United States hands, forward bases could be established to attack Japan from there.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "91515", "title": "Military strategy", "section": "Section::::Development.:World War II.:American.\n", "start_paragraph_id": 128, "start_character": 0, "end_paragraph_id": 128, "end_character": 668, "text": "After the Japanese were forced into the defensive in the second half of 1942, the Americans were confronted with heavily fortified garrisons on small islands. They decided on a strategy of \"island hopping\", leaving the strongest garrisons alone, just cutting off their supply via naval blockades and bombardment, and securing bases of operation on the lightly defended islands instead. 
The most notable of these island battles was the Battle of Iwo Jima, where the American victory paved the way for the aerial bombing of the Japanese mainland, which culminated in the atomic bombings of Hiroshima and Nagasaki and the Bombing of Tokyo that forced Japan to surrender.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "57003", "title": "History of the Marshall Islands", "section": "Section::::World War II.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 368, "text": "In World War II, the United States, during the Gilbert and Marshall Islands campaign, invaded and occupied the islands in 1944, destroying or isolating the Japanese garrisons. In just one month in 1944, Americans captured Kwajalein Atoll, Majuro and Enewetak, and, in the next two months, the rest of the Marshall Islands, except for Wotje, Mili, Maloelap and Jaluit.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "189095", "title": "Battle of Leyte Gulf", "section": "Section::::Background.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 671, "text": "The campaigns of August 1942 to early 1944 had driven Japanese forces from many of their island bases in the south and the central Pacific Ocean, while isolating many of their other bases (most notably in the Solomon Islands, Bismarck Archipelago, Admiralty Islands, New Guinea, Marshall Islands, and Wake Island), and in June 1944, a series of American amphibious landings supported by the Fifth Fleet's Fast Carrier Task Force captured most of the Mariana Islands (bypassing Rota). This offensive breached Japan's strategic inner defense ring and gave the Americans a base from which long-range Boeing B-29 Superfortress bombers could attack the Japanese home islands.\n", "bleu_score": null, "meta": null } ] } ]
null
2ttp4l
Bacteria can only live at certain temperatures, so when I eat cooked meat, am I eating a lot of dead bacteria? If not, where do they go?
[ { "answer": "Yes you are. Or at least the chemical composition or chemical products of cooking that made them up. \n\nCooking kills bacteria by raising their internal temperature to the point where they die. \n\nDepending on the process and the temperature, the cell walls of the bacteria can rupture, they can carbonize and effectively turn to carbon char, they can just sit there as a dead cell, or they could be partially digested by enzyme or other chemical processes that destroy and/or dissociate the chemicals that compose them. \n\nSome of those constituents, such as water, could boil off or be washed out in the cooking water or oil in the frying pan or \"burn\" into carbon dioxide, and others will simply go into your mouth and be digested same as any other food.\n\nFinally, if you're eating leftovers or rare meat products, or eating meat that's been sitting out for a while, you're eating live bacteria too. But your body's digestive systems can easily handle most types of live bacteria without any trouble, it's only certain ones that cause problems, so that's usually nothing to worry about.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "9461357", "title": "Low-temperature cooking", "section": "Section::::Theory.:Bacteria.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 632, "text": "Bacteria are typically killed at temperatures of around . Most harmful bacteria live on the surface of pieces of meat which have not been ground or shredded before cooking. As a result, for unprocessed steaks or chops of red meat it is usually safe merely to bring the surface temperature of the meat to this temperature and hold it there for a few minutes. See food safety. Meat which has been ground needs to be cooked at a temperature and time sufficient to kill bacteria. Poultry such as chicken has a porous texture not visible to the eye, and can harbour pathogens in its interior even if the exterior is heated sufficiently.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "233579", "title": "Dormancy", "section": "Section::::Bacteria.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 415, "text": "Many bacteria can survive adverse conditions such as temperature, desiccation, and antibiotics by endospores, cysts, conidia or states of reduced metabolic activity lacking specialized cellular structures. Up to 80% of the bacteria in samples from the wild appear to be metabolically inactive—many of which can be resuscitated. Such dormancy is responsible for the high diversity levels of most natural ecosystems.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14447311", "title": "Napa cabbage", "section": "Section::::Pests and diseases.:Bacterial diseases.\n", "start_paragraph_id": 68, "start_character": 0, "end_paragraph_id": 68, "end_character": 297, "text": "Bacteria survive mainly on plant residues in the soil. They are spread by insects and by cultural practices, such as irrigation water and farm machinery. 
The disease is tolerant to low temperatures; it can spread in storages close to 0 °C, by direct contact and by dripping onto the plants below.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "42112", "title": "Yersinia", "section": "Section::::Microbial physiology.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 373, "text": "An interesting feature peculiar to some of the \"Yersinia\" bacteria is the ability to not only survive, but also to actively proliferate at temperatures as low as 1–4 °C (e.g., on cut salads and other food products in a refrigerator). \"Yersinia\" bacteria are relatively quickly inactivated by oxidizing agents such as hydrogen peroxide and potassium permanganate solutions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1057083", "title": "Microbial ecology", "section": "Section::::In built environment and human interaction.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 352, "text": "The lifespan of microbes in the home varies similarly. Generally bacteria and viruses require a wet environment with a humidity of over 10 percent. \"E. coli\" can survive for a few hours to a day. Bacteria which form spores can survive longer, with \"Staphylococcus aureus\" surviving potentially for weeks or, in the case of \"Bacillus anthracis\", years.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31055046", "title": "Jundiz recycling plant", "section": "Section::::Biodigestor.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 569, "text": "Different species of bacteria are able to survive at different temperature ranges. Ones living optimally at temperatures between 35–40 °C are called mesophiles or mesophilic bacteria. Some of the bacteria can survive at the hotter and more hostile conditions of 55–60 °C; these are called thermophiles. Methanogens come from the domain of archaea. This family includes species that can grow in the hostile conditions of hydrothermal vents. These species are more resistant to heat and can therefore operate at high temperatures, a property that is unique to thermophiles.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1624954", "title": "Raw feeding", "section": "Section::::Food safety.\n", "start_paragraph_id": 70, "start_character": 0, "end_paragraph_id": 70, "end_character": 1157, "text": "Raw meats may also contain harmful parasites. As with bacteria, these parasites are destroyed during the heat processing of cooking meat or manufacturing pet foods. Some raw diet recipes call for freezing meat before serving it, which greatly reduces (but does not necessarily eliminate) extant parasites. According to a former European Union directive, freezing fish at -20 °C (-4 °F) for 24 hours kills parasites. The U.S. Food and Drug Administration (FDA) recommends freezing at -35 °C (-31 °F) for 15 hours, or at -20 °C (-4 °F) for 7 days. The most common parasites in fish are roundworms from the family Anisakidae and fish tapeworm. While freezing pork at -15 °C (5 °F) for 20 days will kill any \"Trichinella spiralis\" worm, trichinosis is rare in countries with well established meat inspection programs, with cases of trichinosis in humans in the United States mostly coming from consumption of raw or undercooked wild game. Trichinella species in wildlife are resistant to freezing. 
In dogs and cats symptoms of trichinellosis would include mild gastrointestinal upset (vomiting and diarrhea) and in rare cases, muscle pain and muscle stiffness.\n", "bleu_score": null, "meta": null } ] } ]
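The "hold it there for a few minutes" advice in the cooking record above is usually quantified with D-values: the time at a given temperature needed to kill 90% (one log10) of a bacterial population. This first-order model and the numbers below are standard food-microbiology assumptions, not something stated in the quoted sources; the D-value chosen here is purely illustrative.

```python
def surviving_fraction(hold_minutes: float, d_value_minutes: float) -> float:
    """First-order thermal death: each D-value of holding time at a
    fixed temperature cuts the population by a factor of 10."""
    return 10 ** (-hold_minutes / d_value_minutes)

# With an illustrative D-value of 0.5 minutes, holding the meat at
# temperature for 3 minutes gives 3 / 0.5 = 6 log reductions:
# roughly one bacterium in a million survives (the rest are dead,
# not gone - their remains are what you eat).
print(surviving_fraction(3.0, 0.5))  # -> 1e-06
```

This is also why ground meat needs its whole interior at temperature: the surviving fraction only falls in the portions that actually reach and hold the target heat.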
null
50cliw
Why is it that lead in paint is harmful, but the 40% lead in solder material isn't?
[ { "answer": "Lead in solder is harmful. There just aren't many alternatives. Lead free solder does exist, but it has a tendency to \"whisker\" which can create shorts that damage parts. ", "provenance": null }, { "answer": "Large areas are covered with lead paint, often where there's lots of casual contact and rubbing off onto people. As lead paint ages, it crumbles into flakes (sometime very tiny) and even rubs off as powdered lead, which is easily eaten or inhaled.\n\nMost people don't come into direct contact with lead in solder since it's usually enclosed in some sort of box (so people don't get shocked, or electronics don't get shorted out), and it simply doesn't flake off where is gets on and into people like the lead in paint.\n\nPaint = uncontained, lots of casual exposure\nSolder = contained, little to no exposure", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "23035042", "title": "Environmental impact of paint", "section": "Section::::Issues.:Heavy metals.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 637, "text": "Lead paint contains lead as pigment. Lead is also added to paint to speed drying, increase durability, retain a fresh appearance, and resist moisture that causes corrosion. Paint with significant lead content is still used in industry and by the military. For example, leaded paint is sometimes used to paint roadways and parking lot lines. Lead, a poisonous metal, can damage nerve connections (especially in young children) and cause blood and brain disorders. Because of lead's low reactivity and solubility, lead poisoning usually only occurs in cases when it is dispersed, such as when sanding lead-based paint prior to repainting.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "294338", "title": "Lead poisoning", "section": "Section::::Exposure routes.:Paint.\n", "start_paragraph_id": 48, "start_character": 0, "end_paragraph_id": 48, "end_character": 1675, "text": "Some lead compounds are colorful and are used widely in paints, and lead paint is a major route of lead exposure in children. A study conducted in 1998–2000 found that 38 million housing units in the US had lead-based paint, down from a 1990 estimate of 64 million. Deteriorating lead paint can produce dangerous lead levels in household dust and soil. Deteriorating lead paint and lead-containing household dust are the main causes of chronic lead poisoning. The lead breaks down into the dust and since children are more prone to crawling on the floor, it is easily ingested. Many young children display pica, eating things that are not food. Even a small amount of a lead-containing product such as a paint chip or a sip of glaze can contain tens or hundreds of milligrams of lead. Eating chips of lead paint presents a particular hazard to children, generally producing more severe poisoning than occurs from dust. Because removing lead paint from dwellings, e.g. by sanding or torching creates lead-containing dust and fumes, it is generally safer to seal the lead paint under new paint (excepting moveable windows and doors, which create paint dust when operated). Alternatively, special precautions must be taken if the lead paint is to be removed. In oil painting it was once common for colours such as yellow or white to be made with lead carbonate. Lead white oil colour was the main white of oil painters until superseded by compounds containing zinc or titanium in the mid-20th century. 
It is speculated that the painter Caravaggio and possibly Francisco Goya and Vincent Van Gogh had lead poisoning due to overexposure or carelessness when handling this colour.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51449150", "title": "Lead abatement in the United States", "section": "Section::::Causes of lead poisoning.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 268, "text": "Even though lead paint usage has been abolished, there are still houses and buildings that have not had the lead paint removed. The removal of lead paint may also cause symptoms because of the dust created in the process that still contains unhealthy amounts of lead.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24176636", "title": "Lead-based paint in the United Kingdom", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 271, "text": "Most lead-based paint in the United Kingdom was banned from sale to the general public in 1992, apart from for specialist uses. Prior to this lead compounds had been used as the pigment and drying agent in different types of paint, for example brick and some tile paints\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23776", "title": "Paint", "section": "Section::::Components.:Pigment and filler.\n", "start_paragraph_id": 37, "start_character": 0, "end_paragraph_id": 37, "end_character": 527, "text": "Some pigments are toxic, such as the lead pigments that are used in lead paint. Paint manufacturers began replacing white lead pigments with titanium white (titanium dioxide), before lead was banned in paint for residential use in 1978 by the US Consumer Product Safety Commission. The titanium dioxide used in most paints today is often coated with silica/alumina/zirconium for various reasons, such as better exterior durability, or better hiding performance (opacity) promoted by more optimal spacing within the paint film.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1970496", "title": "Lead paint", "section": "Section::::Regulation.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 347, "text": "In South Africa, the Hazardous Substances Act of 2009 classifies lead as a hazardous substance and limits its use in paint to 600 parts per million (ppm). A proposed amendment will modify this to 90 ppm, thereby almost completely eradicating lead from paint. The amendment would also include all industrial paints, which were previously excluded.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51449150", "title": "Lead abatement in the United States", "section": "Section::::History of lead poisoning in the U.S..\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 422, "text": "The reason that lead paint is such a common issue is because of its durability and widespread use. It was constantly endorsed by local and state governments until the 1970s, despite domestic occurrences of lead poisoning and reports from European countries that revealed its toxicity. By 1940, it was commonly associated with negative effects. It was only in the 1970s when the U.S. took action against lead based paints.\n", "bleu_score": null, "meta": null } ] } ]
null
17l4cr
what is the "cursive writing" thing i keep reading and what is the big deal about it?
[ { "answer": "Yes, that's cursive, in contrast to \"print\".\n\n[Here are some examples](_URL_0_)\n\nIn the USA, kids usually learn cursive around ages 7-10, and print before that.", "provenance": null }, { "answer": "Cursive writing doesn't look like that. Sure, the individual letters do, but you can't really call it cursive if it isn't all joined up. You don't get the full picture unless we see an actual example of writing.\n\nI don't know what the deal is with Belgium and cursive writing, or continental Europe in general. All I know is how Americans and the British deal with it. In the UK, cursive writing is synonymous with \"joined-up\" writing. Joined-up writing is just writing with all the letters joined together. They don't lift their pen for every letter. It's very simple.\n\n*American* cursive, on the other hand, is quite different. Schools teach a *very specific* cursive that is quite strict, and they start it from the fourth grade. All work *must* be cursive for it to count. They emphasized that in college, you *have* to write in cursive. However, by the end of middle school, American children have already stopped. \n\nThis isn't because \"Americans are so dumb lol\". It's because the specific way they teach it (or taught it, at least) was so specific that they didn't allow any deviance from the model, or else it wouldn't be \"proper cursive\". We learned it *after* we learn print-writing, so it doesn't come across as natural anyway, and the rest of society doesn't *use* cursive anyway. It's more difficult to read. \n\nThe controversy is whether it's worth teaching cursive so much in a society where no one uses it anymore. Keep in mind that it takes up a *lot* of teaching time, and slows down the class significantly when it comes to doing homework and tests, etc. Time that could be spent teaching math or science.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "514932", "title": "Cursive", "section": "Section::::Descriptions.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 794, "text": "Cursive is a style of penmanship in which the symbols of the language are written in a conjoined and/or \"flowing\" manner, generally for the purpose of making writing faster. This writing style is distinct from \"printscript\" using block letters, in which the letters of a word are unconnected and in Roman/Gothic letterform rather than joined-up script. Not all cursive copybooks join all letters: formal cursive is generally joined, but casual cursive is a combination of joins and pen lifts. In the Arabic, Syriac, Latin, and Cyrillic alphabets, many or all letters in a word are connected (while other must not), sometimes making a word one single complex stroke. In Hebrew cursive and Roman cursive, the letters are not connected. In Maharashtra, there is a version of Cursive called 'Modi'\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2225122", "title": "Pe̍h-ōe-jī", "section": "Section::::Name.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 364, "text": "The name \"pe̍h-ōe-jī\" () means \"vernacular writing\", written characters representing everyday spoken language. 
The name \"vernacular writing\" could be applied to many kinds of writing, romanized and character-based, but the term \"pe̍h-ōe-jī\" is commonly restricted to the Southern Min romanization system developed by Presbyterian missionaries in the 19th century.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "514932", "title": "Cursive", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 422, "text": "Cursive (also known as script or longhand, among other names) is any style of penmanship in which some characters are written joined together in a flowing manner, generally for the purpose of making writing faster, in opposition to block letters. Formal cursive is generally joined, but casual cursive is a combination of joins and pen lifts. The writing style can be further divided as \"looped\", \"italic\" or \"connected\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1103686", "title": "Roman cursive", "section": "Section::::Old Roman cursive.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 625, "text": "Old Roman cursive, also called majuscule cursive and capitalis cursive, was the everyday form of handwriting used for writing letters, by merchants writing business accounts, by schoolchildren learning the Latin alphabet, and even by emperors issuing commands. A more formal style of writing was based on Roman square capitals, but cursive was used for quicker, informal writing. It was most commonly used from about the 1st century BC to the 3rd century AD, but it probably existed earlier than that. In the early 2nd century BC, the comedian Plautus, in \"Pseudolus\", makes reference to the illegibility of cursive letters:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "59125", "title": "Comma", "section": "Section::::Uses in English.:Before quotations.\n", "start_paragraph_id": 68, "start_character": 0, "end_paragraph_id": 68, "end_character": 287, "text": "Some writers precede quoted material that is the grammatical object of an active verb of speaking or writing with a comma, as in \"Mr. Kershner says, \"You should know how to use a comma.\"\" Quotations that follow and support an assertion are often preceded by a colon rather than a comma.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "527604", "title": "Blackletter", "section": "Section::::Forms.:Cursiva.\n", "start_paragraph_id": 37, "start_character": 0, "end_paragraph_id": 37, "end_character": 446, "text": "\"Cursiva\" refers to a very large variety of forms of blackletter; as with modern cursive writing, there is no real standard form. It developed in the 14th century as a simplified form of \"textualis\", with influence from the form of \"textualis\" as used for writing charters. \"Cursiva\" developed partly because of the introduction of paper, which was smoother than parchment. It was therefore, easier to write quickly on paper in a cursive script.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35024575", "title": "Linguistic landscape", "section": "Section::::Development of the field of study.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 589, "text": "Because \"the methodologies employed in the collection and categorisation of written signs is still controversial\", basic research questions are still being discussed, such as: \"do small, hand-made signs count as much as large, commercially made signs?\". 
The original technical scope of \"linguistic landscape\" involved plural languages, and almost all writers use it in that sense, but Papen has applied the term to the way public writing is used in a monolingual way in a German city and Heyd has applied the term to the ways that English is written, and people's reactions to these ways.\n", "bleu_score": null, "meta": null } ] } ]
null
2k42hf
What were the geographical boundaries of the "Old West?" Would we see "cowboy culture" in Canada? Mexico? The Caribbean?
[ { "answer": "I can speak for Canada a bit, having grown up in a cattle town. We have plenty of cowboy culture, especially in my home province of Alberta.\n\nFor most of the our province's history, our main industries were agriculture and ranching, and even today, they are second only to oil and gas.\n\nAlthough a lot of the cowboy culture has faded, many of the values remain. Rodeo still thrives in Canada, centred around the Calgary Stampede and many other rural rodeos. We also have our own rodeo sport, chuck wagon racing, which I actually have a lot of family competing in. It's the main event here, but to my knowledge, has never caught on elsewhere.\n\nFor the most part, Canada's cowboys resemble the American variety. This is because borders were basically meaningless back in our frontier days. One big difference might have been gun laws. In Canada, the North West Mounted Police (the first mounties) enforced strict laws on where guns went. It was illegal to carry a gun in most towns, especially during the Klondike gold rush. Another difference was the lack of Mexican cultural influences, slavery, and the American civil war. The first cowboy in Alberta, the man who brought cattle to the province, was actually a freed slave from the states named John Ware.\n\nThe native population was never as violent as it is depicted in the states. They were relegated to reserves fairly early. When the Calgary stampede first opened, the natives were actually included in the festivities and events. Before that, their admission into the city was very regulated.\n\nThe gold rush in the Yukon and the whiskey trade brought a lot of 'old west' culture to Canada as well. Canadian prohibition was never as strict as it was in the states, so lots of boot leggers out west here smuggled whiskey and rye across the border to serve Americans.\n\nSmaller populations and in turn less government and industry meant that the cowboy era in Canada started and ended a little later than in the states, but it was basically a branch of the same tree. It was all the frontier 'old west', and then somebody drew a line through it.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "30865437", "title": "Ranch", "section": "Section::::History in North America.:United States.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 251, "text": "As settlers from the United States moved west, they brought cattle breeds developed on the east coast and in Europe along with them, and adapted their management to the drier lands of the west by borrowing key elements of the Spanish vaquero culture.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22589753", "title": "Pacific Southwest", "section": "Section::::Culture.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 788, "text": "Cultures combine and collaborate in the Pacific Southwest. Traces of the Old American West can still be seen in some areas, especially in the deserts. Hip-hop is one of the many cultures prevalent here, most noticeable in Los Angeles and the Bay Area. Polynesian culture flourishes in Hawaii, and Hawaiian Pidgin can still be heard in certain areas of the state. Spanish/Mexican culture is the most visible in the region, due to the fact four of the five states were once Spanish/Mexican possessions. Cowboys can be found anywhere in the Pacific Southwest. Hawaii has its own version of the American cowboy, the paniolo. 
Asian culture is demonstrated in the region, especially in California and Hawaii. The area also has a sizeable black population, along with Arabic and Jewish culture.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9372358", "title": "Southwestern archaeology", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 693, "text": "This area, identified with the current states of Colorado, Arizona, New Mexico, Utah, and Nevada in the western United States, and the states of Sonora and Chihuahua in northern Mexico, has seen successive prehistoric cultural traditions for a minimum of 12,000 years. An often-quoted statement from Erik Reed (1964) defined the Greater Southwest culture area as extending north to south from Durango, Mexico, to Durango, Colorado, and east to west from Las Vegas, Nevada, to Las Vegas, New Mexico. Differently areas of this region are also known as the American Southwest, North Mexico, and Oasisamerica, while its southern neighboring cultural region is known as Aridoamerica or Chichimeca.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2483636", "title": "Cuyuteco", "section": "Section::::References.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 375, "text": "BULLET::::- Eric Van Young, \"The Indigenous Peoples of Western Mexico from the Spanish Invasion to the Present: The Center-West as Cultural Region and Natural Environment,\" in Richard E. W. Adams and Murdo J. MacLeod, The Cambridge History of the Native Peoples of the Americas, Volume II: Mesoamerica, Part 2. Cambridge, U.K.: Cambridge University Press, 2000, pp. 136–186.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "60199660", "title": "Indigenous peoples of the North American Southwest", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 693, "text": "Indigenous peoples of the North American Southwest refers to the area identified with the current states of Colorado, Arizona, New Mexico, Utah, and Nevada in the western United States, and the states of Sonora and Chihuahua in northern Mexico. An often quoted statement from Erik Reed (1964) defined the Greater Southwest culture area as extending north to south from Durango, Mexico to Durango, Colorado and east to west from Las Vegas, Nevada to Las Vegas, New Mexico. Other names sometimes used to define the region include \"American Southwest\", \"North Mexico\", \"Chichimeca\", and \"Oasisamerica/Aridoamerica\". This region has long been occupied by hunter-gatherers and agricultural people.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9116357", "title": "National Multicultural Western Heritage Museum", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 742, "text": "The National Multicultural Western Heritage Museum, formerly the National Cowboys of Color Museum and Hall of Fame, is a museum and hall of fame in Fort Worth, Texas. NMWHM takes a look at the people and activities that built the unique culture of the American West, in particular the contributions of Hispanic Americans, Native Americans, European Americans, and African Americans. The work of artists who documented the people and events of the time through journals, photographs and other historical items are part of this new collection. These long overlooked materials tell, perhaps for the first time, the complete story. 
The American West of today still operates on many of the principles and cultural relationships begun so long ago.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3689905", "title": "Southwestern New Mexico", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 660, "text": "Southwestern New Mexico is a region of the U.S. state of New Mexico commonly defined by Hidalgo County, Grant County, Catron County, Luna County, Doña Ana County, Sierra County, and Socorro County. Some important towns there are Lordsburg, Silver City, Deming, Las Cruces, Truth or Consequences, Socorro, Reserve, and Rodeo. Natural attractions there include White Sands National Monument, the Organ Mountains, Bosque del Apache National Wildlife Refuge, and the Gila Wilderness surrounding the Gila Cliff Dwellings National Monument. Southwestern New Mexico is also home to both the Very Large Array and White Sands Missile Range containing the Trinity Site.\n", "bleu_score": null, "meta": null } ] } ]
null
3g8p4m
How did the Native American tribes in the western portion of the U.S. get firearms, and when did these tribes first come into contact with firearms?
[ { "answer": "With the exception of the unwieldy, unreliable early firearms that might have been brought to the Plains by the Coronado *entrada*'s [search for Quivira in 1540-1542](_URL_0_), the Plains nations would have started to regularly see firearms in the mid-seventeenth century. While horses flowed up from Mexico, or through mission communities in New Mexico, the Spanish tended to avoid supplying their Native American neighbors with firearms. When Iroquois raids caused the westward Algonkian-Huron diaspora, the refugees and their French allies brought firearms to the Upper Mississippi watershed. The French provided firearms to the Algonkian and Huron, who then used them to carve out some territory among the Quapaws, Poncas, Omahas, and Eastern Sioux. While the Sioux naturally fought against the Fox, Potawatomi, Ottawa, Kickapoo, and Miami immigrants, the Hurons tried to stem the flow of firearms to the Plains in an effort to maintain their advantage. They nurtured hostilities between the Plains nations and the French to maintain their favored trading status, though gradually the weapons made inroads into the interior of the continent. \n\nFurther south the Osages, pushed west by the reverberations of contact, remade themselves as the trade middlemen on the doorstep of the Plains. They used an alliance with the French to secure firearms, then used those weapons against Wichitas, Pawnees, and Caddos further west to raid for horses and captives. They blocked the westward expansion of firearms, in 1719 a visitor to the Wichita saw only half a dozen firearms even though the residents were eagerly trying to purchase more. The Osage were so powerful they even started attacking French traders when they encroached on their lands. When the Comanches-Wichita-Pawnee peace was struck in the late 1740s they were able to purchase weapons and go on the offensive against the Osage. The overall access to firearms on the frontier decreased during the Seven Year's War, but many of the shocks of contact, including the introduction of the horse, the gun, and the displacement of nations to the east, were already transforming the Eastern Plains. \n\nObviously, this is just a brief introduction to the politics of trade on the doorstep of the Plains. Calloway's *One Vast Winter Count: The Native American West Before Lewis and Clark* is a great resource if you would like to learn more.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "91404", "title": "Williamson County, Texas", "section": "Section::::History.:Prehistoric.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 592, "text": "The earliest known historical Native American occupants, the Tonkawa, were a flint-working, hunting people who followed the buffalo on foot and periodically set fire to the prairie to aid them in their hunts. During the 18th century, they made the transition to a horse culture and used firearms to a limited extent. After they were crowded out by white settlement, the Comanches continued to raid settlements in the county until the 1860s. 
Also, small numbers of Kiowa, Yojuane, Tawakoni, and Mayeye Indians apparently wereliving in the county at the time of the earliest Anglo settlements.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "50429851", "title": "Indian commerce with early English colonists and the early United States", "section": "Section::::Trade with the early United States.:Trade with the West.\n", "start_paragraph_id": 45, "start_character": 0, "end_paragraph_id": 45, "end_character": 714, "text": "In the early 1800s trade with Native Americans was already commonplace. Trade with the Native Americans in the mid-west states mostly consisted of beaver fur, and in return natives would receive horses, guns, and other commodities that they themselves could not produce. During the Indian Removal in this time, many tribes were pushed by the government into western states such as Kansas, Kentucky, and Missouri. The Shawnee was one such tribe that was pushed from Pennsylvania and into Ohio, Alabama, and Illinois. The Shawnee traded fur in exchange for rum or brandy. Alcohol abuse became a serious problem within the tribe. Another tribe that heavily populated the southern plains region was the Comanche tribe\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25090908", "title": "Gunstock war club", "section": "Section::::History.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 345, "text": "Although well known as an indigenous weapon encountered in several North American First Nations tribes across the northern United States and Canada, details of its early development continue to elude historians. They were first used in the late 17th century but were in use by Northern Plains tribes, such as the Lakota by the mid-19th century.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28801622", "title": "Protohistory of West Virginia", "section": "Section::::Other historic groups.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 617, "text": "Many native groups other than the Algonquian, Iroquois, and Sioux also inhabited this area. For example, the Occhenechees, also known as the Akenatzy, were the middle men in the regional trade network. Another group known as the Ocanahonon dressed like Europeans and carried curved swords at a village ten days west beyond the mountains by 1607. Ocanahonon archaeological sites excavated between the Great Lakes and the Gulf of Mexico in the Ohio Valley have been found with both gun and knife parts. Trade goods found here and in the greater \"Riviere de la Ronceverte\" were likely the result of intertribal contact.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "885293", "title": "Opotiki", "section": "Section::::Human history.:Late eighteenth to early nineteenth century.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 242, "text": "The 1820s saw numerous well-armed invasions by Ngapuhi war parties from Northland. Although the Opotiki tribes had begun to acquire firearms by that time, they were outgunned and had to retreat from the coast to the rugged forested interior.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "91664", "title": "Armstrong County, Texas", "section": "Section::::History.:Native Americans.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 257, "text": "Paleo-Indians first inhabitants as far back as 10,000 BC. 
Apachean cultures roamed the county until Comanche dominated around 1700. The Comanches were defeated by the United States Army in the Red River War of 1874. Later tribes include Kiowa and Cheyenne.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "93434", "title": "Morris County, New Jersey", "section": "Section::::History.:Paleo Indians and Native Americans.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 294, "text": "The Native Americans traded furs and food with the Dutch for various goods. In return the Dutch gave the Native Americans metal pots, knives, guns, axes, and blankets. Trading with the Native Americans occurred until 1643 when a series of wars broke out between the Dutch and Native Americans.\n", "bleu_score": null, "meta": null } ] } ]
null
3m79y0
why can we use controllers with pcs but not keyboard and mouse with consoles?
[ { "answer": "Consoles can use a mouse and keyboard. Almost any USB mouse/kb will plug in and function with modern consoles. You can type messages, browse the web, etc.\n\nSome games do support kb/m on console: Counterstrike and War Thunder for example.\n\nMany games don't just because it takes extra effort to program for, and the kb/m has a distinct advantage in many game types that makes it unfair.\n\nTL;DR: It's extra work and an unfair advantage, but kb/m can be used on consoles and on certain console games.", "provenance": null }, { "answer": "It all depends on whether support is there. On PS2 I am able to play Dirge of Cerberus: FF7 with a keyboard and mouse. On PS3 I am able to do the same with Unreal Tournament 3. If the game supports it, you're good. ", "provenance": null }, { "answer": "For the gaming factor at least, KB+M isn't allowed on most console titles because of the precision they offer over controllers. It wouldn't really be an even playing field. On PC, pretty much everyone uses KB+M, but you can choose to use a controller if you wish, though it will be a bit less accurate.", "provenance": null }, { "answer": "Microsoft did a study back in 2010, because they were considering adding keyboard and mouse support to the xbox. \n\n_URL_0_\n\nThe controller people lost so bad that it would be totally unfair if they were matched up together. So Microsoft abandoned the idea. \n\n > the console players got destroyed every time. So much so that it would be embarrassing to the XBOX team in general had Microsoft launched this initiative.", "provenance": null }, { "answer": "Because it will be too obvious that is a lesser PC after all. They want you to think that consoles are specialized hardware black boxes to run games. \n\nAlso KB/Mouse doesn't fit in a living room.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1336512", "title": "PC game", "section": "Section::::PC gaming technology.:Hardware.\n", "start_paragraph_id": 71, "start_character": 0, "end_paragraph_id": 71, "end_character": 267, "text": "Virtually all personal computers use a keyboard and mouse for user input. Other common gaming peripherals are a headset for faster communication in online games, joysticks for flight simulators, steering wheels for driving games and gamepads for console-style games.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15822958", "title": "PlayStation 2", "section": "Section::::Accessories.:Mouse and Keyboard.\n", "start_paragraph_id": 49, "start_character": 0, "end_paragraph_id": 49, "end_character": 402, "text": "Unlike the PlayStation, which requires the use of an official Sony PlayStation Mouse to play mouse-compatible games, the few PS2 games with mouse support work with a standard USB mouse as well as a USB trackball. In addition, some of these games also support the usage of a USB keyboard for text input, game control (in lieu of a DualShock or DualShock 2 gamepad, in tandem with a USB mouse), or both.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "99482", "title": "Deathmatch", "section": "Section::::Description.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 878, "text": "Players are able to control their characters and interact with the virtual world by using various controller systems. When using a PC, a typical example of a games control system would be the use of a mouse and keyboard combined. 
For example, the movement of the mouse could provide control of the players viewpoint from the character and the mouse buttons may be used for weapon trigger control. Certain keys on the keyboard would control movement around the virtual scenery and also often add possible additional functions. Games consoles however, use hand held 'control pads' which normally have a number of buttons and joysticks (or 'thumbsticks') which provide the same functions as the mouse and keyboard. Players often have the option to communicate with each other during the game by using microphones and speakers, headsets or by 'instant chat' messages if using a PC.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "249402", "title": "Computer terminal", "section": "Section::::History.:Contemporary.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 319, "text": "Since the advent and subsequent popularization of the personal computer, few genuine hardware terminals are used to interface with computers today. Using the monitor and keyboard, modern operating systems like Linux and the BSD derivatives feature virtual consoles, which are mostly independent from the hardware used.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17431092", "title": "Space flight simulation game", "section": "Section::::Control systems.:Video games.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 1010, "text": "Most modern space flight games on the personal computer allow a player to utilise a combination of the WASD keys of the keyboard and mouse as a means of controlling the game (games such as Microsoft's \"Freelancer\" use this control system exclusively). By far the most popular control system among genre enthusiasts, however, is the joystick. Most fans prefer to use this input method whenever possible, but expense and practicality mean that many are forced to use the keyboard and mouse combination (or gamepad if such is the case). The lack of uptake among the majority of modern gamers has also made joysticks a sort of an anachronism, though some new controller designs and simplification of controls offer the promise that space sims may be playable in their full capacity on gaming consoles at some time in the future. In fact, \"\", sometimes considered one of the more cumbersome and difficult series to master within the trading and combat genre, was initially planned for the Xbox but later cancelled.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7293946", "title": "Sixaxis", "section": "Section::::Features and design.:Wireless technology.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 368, "text": "PlayStation 3 controllers are also supported in Linux; simply connect the controller to the computer using a USB cable and press the PS button. One application to map controller buttons and joysticks to the keyboard keys used by a particular game is Qjoypad. 
The documentation is extensive and the application requires you to configure Profiles for each game you use.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1069443", "title": "Lighting control console", "section": "Section::::Types of control consoles.:Personal computer-based controllers.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 367, "text": "Personal Computer (PC) based controllers are relatively new, and use software with a feature set similar to that found in a hardware-based console. As dimmers, automated fixtures and other standard lighting devices do not generally have current standard computer interfaces, options such as DMX-512 ports and fader/submaster panels connected via USB are commonplace.\n", "bleu_score": null, "meta": null } ] } ]
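(Editor's illustration, not part of the retrieved record: the thread's core point is that a PC treats a keyboard and a gamepad as two event sources behind one input API, so supporting both is a software decision, not a hardware one. A minimal sketch of that idea, assuming the third-party pygame library; this is not the API of any console SDK or of Qjoypad.)

```python
# Minimal sketch: one event queue, two kinds of input device.
# Assumes pygame is installed (pip install pygame).
import pygame

pygame.init()

# Open any connected gamepads; keyboards need no explicit setup.
pads = [pygame.joystick.Joystick(i) for i in range(pygame.joystick.get_count())]
for pad in pads:
    pad.init()

screen = pygame.display.set_mode((320, 240))  # a window is needed to receive events
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            # Keyboard input arrives on the same queue...
            print("keyboard:", pygame.key.name(event.key))
        elif event.type == pygame.JOYBUTTONDOWN:
            # ...as gamepad input; the game decides which it honors.
            print("gamepad button:", event.button)
pygame.quit()
```

The design point, if this sketch is right: the game loop, not the hardware, decides which events count, which is why keyboard/mouse support on consoles is something each title must opt into.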
null
5z4hza
Why does there seem to be such a lack of emphasis on the Pacific Theater of WWII in American pop culture and History?
[ { "answer": "I would wager it has partially to do with the different racial components of the two theaters, and the subsequent disparity in the \"goodness\" of the war in each.\n\nThe fight against the Nazis has been continuously held up since the 1940s as the epitome of a \"good war.\" American soldiers fought and died to liberate Western Europe from a Fascist anti-democratic foe, which has been consistently depicted in propaganda and pop culture as evil incarnate (an example of the latter might be the frequency with which Nazi soldiers are the bad guys in FPS video games, whose deaths in the games are never controversial). The eugenicist and genocidal practices of the Nazis lend greater support to this idea of the European theater being a fight between the forces of good and evil (this is helped by the fact that American eugenicist and anti-Semitic policies in the 1930s and 1940s are largely unknown or under-known by Americans today). \n\nMeanwhile, the American war in the Pacific is not nearly so easily depicted in stark moral terms. Although the war was initiated by a sneak attack carried out by Japan on the US, the American response to that attack was to corral the West coast's Japanese American population, citizens and noncitizens alike, into concentration camps. Furthermore, American propaganda throughout the war depicted the Japanese in explicitly racist terms; while the war in Europe was depicted as a fight between freedom and fascism, the war in the Pacific was depicted as a fight between white democracy and Oriental despotism. Finally, anti-Japanese sentiment lingered for decades after the war ended, while anti-German sentiment almost immediately disappeared at the beginning of the Cold War.\n\nIn short, it has been relatively easy to depict WWII in Europe in terms of stark moral and political contrast (good vs evil, democracy vs fascism, liberty vs tyranny), while America's war with Japan was much more controversial, both in terms of its conduct (concentration camps, racist propaganda) and its aftermath (lingering anti-Asian sentiments and violence). Given this state of affairs, pop culture more readily focuses on the European Theater while paying much less attention to the Pacific. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "167964", "title": "South Pacific (musical)", "section": "Section::::Themes and cultural effect.:Race.\n", "start_paragraph_id": 109, "start_character": 0, "end_paragraph_id": 109, "end_character": 1043, "text": "Part of the reason why \"South Pacific\" is considered a classic is its confrontation of racism. According to professor Philip Beidler, \"Rodgers and Hammerstein's attempt to use the Broadway theater to make a courageous statement against racial bigotry in general and institutional racism in the postwar United States in particular\" forms part of \"South Pacific\" 's legend. Although \"Tales of the South Pacific\" treats the question of racism, it does not give it the central place that it takes in the musical. Andrea Most, writing on the \"politics of race\" in \"South Pacific\", suggests that in the late 1940s, American liberals, such as Rodgers and Hammerstein, turned to the fight for racial equality as a practical means of advancing their progressive views without risking being deemed communists. 
Trevor Nunn, director of the 2001 West End production, notes the importance of the fact that Nellie, a southerner, ends the play about to be the mother in an interracial family: \"It's being performed in America in 1949. That's the resonance.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1762526", "title": "Two-front war", "section": "Section::::World War II.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 388, "text": "In the case of the United States, the Pacific Theatre was primarily a naval and air effort despite losing ships during the 1941 Pearl Harbor Attack while ground forces were used in Europe. Like in Japan, most ground troops were fighting China, and the Pacific Theatre was also primarily a naval and aerial battle. It was also the first time the United States ever fought a two-front war.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "342641", "title": "Pacific War", "section": "Section::::Overview.:Names for the war.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 515, "text": "In Allied countries during the war, the \"Pacific War\" was not usually distinguished from World War II in general, or was known simply as the \"War against Japan\". In the United States, the term \"Pacific Theater\" was widely used, although this was a misnomer in relation to the Allied campaign in Burma, the war in China and other activities within the Southeast Asian Theater. However, the US Armed Forces considered the China-Burma-India Theater to be distinct from the Asiatic-Pacific Theater during the conflict.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "342641", "title": "Pacific War", "section": "Section::::Overview.:Theaters.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 317, "text": "Between 1942 and 1945, there were four main areas of conflict in the Pacific War: China, the Central Pacific, South-East Asia and the South West Pacific. US sources refer to two theaters within the Pacific War: the Pacific theater and the China Burma India Theater (CBI). However these were not operational commands.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1234481", "title": "Settling Accounts: Drive to the East", "section": "Section::::Plot summary.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 729, "text": "In this history, the Pacific War against Japan is treated as essentially a sideshow, getting only a trickle of resources - since the US are facing a dangerous invasion of their industrial heartland. Strategic aims in the Pacific are confined to recapturing Midway to remove the threat to the Sandwich Islands, and characters consider the idea of conducting an island-hopping war all the way to the Japanese home islands (as the US did in World War II) as an unrealistic fantasy. Also, in this history, the Philippines and Guam are long-standing and recognized possessions of the Japanese, which they had wrested from Spain during the Hispano-Japanese War between the late 1800s and early 1900s and to which the US laid no claim.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34202871", "title": "Entertainment industry during World War II", "section": "Section::::Film.:Hollywood (United States).\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 1292, "text": "From December 8, 1941 is when the USA entered the WWII. 
It was a big year for the country because they had to arrange for the results of the war. The main focus that the US wanted to make on films was there own historical phenomena and a spread of US culture. The war films made focused mostly on the \"desperate affirmation\" and the \"societal tensions\". Many films main focus was about the war; they wanted to make sure that they explain the objectives. The US war films were good and bad, many of them showed the different lives of the people during the war. The importance of these films and as studies have mentioned, is the influence behind these films. Furthermore, war films showed a lot of information about the war and the life of their families just like the film Since You Went Away. When the US government noticed the content of the feature films they became more interested in the political and social significance messages in the film. This shows how Hollywood wanted to raise two important production of films together with war films. With the growth of the film industry came the growth of the influence of Hollywood celebrities. Hollywood stars appeared in advertisements and toured the country to encourage citizens to purchase war bonds to support their country in the war.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44041240", "title": "Pacification theory", "section": "Section::::Politics.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 1140, "text": "During the social uprisings in the 1960s in North America and Europe against the Vietnam War, pacification came to connote bombing people into submission and waging an ideological war against the opposition. However, after the Vietnam War, pacification was dropped from the official discourse as well as from the discourse of opposition. Although approach towards the term and practices of pacification both in the concept's sixteenth-century and twentieth-century colonial meanings were somehow related to the concepts of war, security and police power, the real connection between pacification and these concepts has never been revealed in the literature on international relations, conflict studies, criminology or political science. Neocleous has argued that the connection between pacification and the ideological discourse on security is related to the terms use in broader Western social and political thought in general, and liberal theory in particular. In short, that liberalism’s key concept is less liberty and more security and that liberal doctrine is inherently less committed to peace and far more to legitimizing violence.\n", "bleu_score": null, "meta": null } ] } ]
null
812nw0
Can someone with a weakened immune system receive a vaccine?
[ { "answer": "It depends on the vaccine and illness.\nLive vaccinations tend not to be given to persons with compromised immune systems (e.g. yellow fever vaccination), whereas some inactivated viral vaccinations may be given to those with weakened immune systems (depending on their clinical condition).\n\nFor example, it might be preferable for someone on long term immune modulating drugs to receive a flu vaccine to prevent them developing full on flu.\n\nIn the UK we use \"the green book\" for vaccine requirements and contraindications as well as taking into account an individual's clinical picture.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "30540048", "title": "Cocooning (immunization)", "section": "Section::::Rationale.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 313, "text": "Some people cannot be fully protected from vaccine-preventable diseases by direct vaccination. These are often people with weak immune systems, who are more likely to get seriously ill. Their risk of infection can be significantly reduced if those who are most likely to infect them get the appropriate vaccines.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36130807", "title": "Pre-conception counseling in the United States", "section": "Section::::Screening and monitoring in the United States.:Varicella screening.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 684, "text": "Immunity status of varicella should be performed at the pre-conception counseling session, in order to prevent the occurrence of congenital varicella syndrome and other adverse effects of varicella in pregnancy. Generally, a person with a positive medical history of varicella infection can be considered immune. Among adults in the United States having a negative or uncertain history of varicella, approximately 85%-90% will be immune. Therefore, an effective method is that people with a negative or uncertain history of varicella infection have a serology to check antibody production before receiving the vaccine. The CDC recommends that all adults be immunized if seronegative.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "87175", "title": "Herd immunity", "section": "Section::::Effects.:Protection of those without immunity.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 1091, "text": "Some individuals either cannot develop immunity after vaccination or for medical reasons cannot be vaccinated. Newborn infants are too young to receive many vaccines, either for safety reasons or because passive immunity renders the vaccine ineffective. Individuals who are immunodeficient due to HIV/AIDS, lymphoma, leukemia, bone marrow cancer, an impaired spleen, chemotherapy, or radiotherapy may have lost any immunity that they previously had and vaccines may not be of any use for them because of their immunodeficiency. Vaccines are typically imperfect as some individuals' immune systems may not generate an adequate immune response to vaccines to confer long-term immunity, so a portion of those who are vaccinated may lack immunity. Lastly, vaccine contraindications may prevent certain individuals from becoming immune. 
In addition to not being immune, individuals in one of these groups may be at a greater risk of developing complications from infection because of their medical status, but they may still be protected if a large enough percentage of the population is immune.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1281756", "title": "Conjugate vaccine", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 643, "text": "However, the antigen of some pathogenic bacteria does not elicit a strong response from the immune system, so a vaccination against this weak antigen would not protect the person later in life. In this case, a conjugate vaccine is used in order to invoke an immune system response against the weak antigen. In a conjugate vaccine, the weak antigen is covalently attached to a strong antigen, thereby eliciting a stronger immunological response to the weak antigen. Most commonly, the weak antigen is a polysaccharide that is attached to strong protein antigen. However, peptide/protein and protein/protein conjugates have also been developed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14851478", "title": "Rotavirus vaccine", "section": "Section::::Medical uses.:Effectiveness.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 414, "text": "Additionally, the vaccines may also prevent illness in non-vaccinated children by limiting exposure through the number of circulating infections. A 2014 review of available clinical trial data from countries routinely using rotavirus vaccines in their national immunization programs found that rotavirus vaccines have reduced rotavirus hospitalizations by 49–92% and all-cause diarrhea hospitalizations by 17–55%.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1058672", "title": "Ataxia–telangiectasia", "section": "Section::::Management.:Immune problems.\n", "start_paragraph_id": 132, "start_character": 0, "end_paragraph_id": 132, "end_character": 1397, "text": "If the tests show significant abnormalities of the immune system, a specialist in immunodeficiency or infectious diseases will be able to discuss various treatment options. Absence of immunoglobulin or antibody responses to vaccine can be treated with replacement gamma globulin infusions, or can be managed with prophylactic antibiotics and minimized exposure to infection. If antibody function is normal, all routine childhood immunizations including live viral vaccines (measles, mumps, rubella and varicella) should be given. In addition, several “special” vaccines (that is, licensed but not routine for otherwise healthy children and young adults) should be given to decrease the risk that an A–T patient will develop lung infections. The patient and all household members should receive the influenza (flu) vaccine every fall. People with A–T who are less than two years old should receive three (3) doses of a pneumococcal conjugate vaccine (Prevnar) given at two month intervals. People older than two years who have not previously been immunized with Prevnar should receive two (2) doses of Prevnar. At least 6 months after the last Prevnar has been given and after the child is at least two years old, the 23-valent pneumococcal vaccine should be administered. 
Immunization with the 23-valent pneumococcal vaccine should be repeated approximately every five years after the first dose.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30540048", "title": "Cocooning (immunization)", "section": "Section::::Rationale.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 360, "text": "Some vaccines require multiple doses, spaced over time, to be effective. Those who have not yet received all the doses (including all young babies) are not yet fully immune, and rely on the immunity of those around them. Some vaccines also require booster doses in later life; those who have not received their booster doses can be infected and infect others.\n", "bleu_score": null, "meta": null } ] } ]
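(Editor's aside, not part of the retrieved record: the herd-immunity passage above hinges on "a large enough percentage of the population is immune". The standard threshold formula is not quoted in the record, so here is the usual back-of-envelope version from the simple SIR model, with commonly cited numbers as the worked example.)

```latex
% Herd-immunity threshold: the share p_c of a population that must be immune
% so that, on average, each case infects fewer than one other person.
% R_0 is the pathogen's basic reproduction number.
\[
  p_c \;=\; 1 - \frac{1}{R_0}
\]
% Worked example: measles is commonly cited with R_0 \approx 12\text{--}18, giving
% p_c = 1 - \tfrac{1}{12} \approx 0.92 \quad\text{to}\quad 1 - \tfrac{1}{18} \approx 0.94,
% i.e. roughly 92--94\% of the population must be immune, which is why the
% non-immunizable groups listed above depend so heavily on everyone else.
```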
null
krqch
Could we in theory create Saturn style rings around the Earth, for shits and giggles.
[ { "answer": "The only thing I know about this is [The Roche Limit](_URL_0_). \n\n > is the distance within which a celestial body, held together only by its own gravity, will disintegrate due to a second celestial body's tidal forces exceeding the first body's gravitational self-attraction. Inside the Roche limit, orbiting material will tend to disperse and form rings, while outside the limit, material will tend to coalesce. \n\nAll you have to do is figure out Earth's Roche Limit, as a first step. Then figure out a mechanism to bring the Moon there. Then its' a long waiting game.", "provenance": null }, { "answer": "Some people theorize that the Earth may have had a ring or two in it's past, and that they caused massive climate changes. Because of the Earth's significant tilt, a ring would cause winters to be far colder due to the increased shade. It is also thought that the Earth could not hold on to a ring for more than a million years or so due to solar wind and interference from the moon.\n\nSource: [Sandia National Laboratories](_URL_1_)\n\n[Astroscience](_URL_0_)", "provenance": null }, { "answer": "If you send up enough space junk up there on the right orbits we will have rings, using reflective objects would mean we would need less material to see it with the naked eye. So it is possible, if we had enough money I think we could do it (1000 tons of metal filings would probably be visible and we could do that).\n\nWith that said, i think it would mess with satellites, its a lot of space junk.", "provenance": null }, { "answer": "Don't we have a ring now made of little tiny bits of something? (someone smart please come and say what those little bits of something are, I just read reddit)\n\n*Edit*\n[found it](_URL_0_)", "provenance": null }, { "answer": "There's a good image here showing the location of satellites and debris in Earth orbit: _URL_0_\n\nNow, the important thing to bear in mind here is that to maintain a stable orbit any object is falling towards the earth, so must be travelling sideways fast enough to 'miss'. If you want an idea of how this works I can thoroughly recommend going and having a play with Kerbal Space Programme - a free game which lets you play with launching rockets and reaching orbits. The physics is slightly different to earth, but the principals are identical.\n\nThe basics of the problem are that objects in low orbit have to be travelling very fast, and objects in higher orbit need to be travelling slower. That ring in the image of space objects represents the geosynchronous orbit - i.e. the altitude at which an object has to travel at a velocity such that it orbits once every 24 hours (meaning it sits exactly above the same point above the equator as the earth rotates). For all the other points, bear in mind that most of those points are in elliptical orbits, so occupy a lot more space over time than a single dot can represent.\n\nGenerating a disk is pretty straight forward - dump enough material up there in the right orbital plane and a disk will self generate. Objects moving too fast for their current altitude will prograde out, objects travelling too slowly for their current altitude will fall back in, over time generating a disc. There's a demonstration here. 
_URL_1_\n\nNot sure how you're envisaging a gap between the rings and the satellite range - as with anythign in orbit, it either progrades out and eventually we lose it to space, or it retrogrades in eventually entering our atmosphere.", "provenance": null }, { "answer": "On that note, would it be practical or beneficial if we built a ring space station around Earth? Would there be negative effects on the Earth? Or maybe something like Halo? I guess that's getting a little too carried away...", "provenance": null }, { "answer": "Until someone gets this done for you, watch [this](_URL_0_).", "provenance": null }, { "answer": "I'm not sure this qualifies as \"saturn style\" but an artificial ring around the earth was created in the early sixties.\n\n_URL_0_", "provenance": null }, { "answer": "[We already are.](_URL_0_)", "provenance": null }, { "answer": "Rings are pretty and all but in terms of practicality, that's like putting a minefield in orbit for anything else you want up there. The larger particles in these rings are still small and pretty well spaced out, so you could plan around it, but it would be a serious hindrance to earth-orbit spaceflight.", "provenance": null }, { "answer": "The only thing I know about this is [The Roche Limit](_URL_0_). \n\n > is the distance within which a celestial body, held together only by its own gravity, will disintegrate due to a second celestial body's tidal forces exceeding the first body's gravitational self-attraction. Inside the Roche limit, orbiting material will tend to disperse and form rings, while outside the limit, material will tend to coalesce. \n\nAll you have to do is figure out Earth's Roche Limit, as a first step. Then figure out a mechanism to bring the Moon there. Then its' a long waiting game.", "provenance": null }, { "answer": "Some people theorize that the Earth may have had a ring or two in it's past, and that they caused massive climate changes. Because of the Earth's significant tilt, a ring would cause winters to be far colder due to the increased shade. It is also thought that the Earth could not hold on to a ring for more than a million years or so due to solar wind and interference from the moon.\n\nSource: [Sandia National Laboratories](_URL_1_)\n\n[Astroscience](_URL_0_)", "provenance": null }, { "answer": "If you send up enough space junk up there on the right orbits we will have rings, using reflective objects would mean we would need less material to see it with the naked eye. So it is possible, if we had enough money I think we could do it (1000 tons of metal filings would probably be visible and we could do that).\n\nWith that said, i think it would mess with satellites, its a lot of space junk.", "provenance": null }, { "answer": "Don't we have a ring now made of little tiny bits of something? (someone smart please come and say what those little bits of something are, I just read reddit)\n\n*Edit*\n[found it](_URL_0_)", "provenance": null }, { "answer": "There's a good image here showing the location of satellites and debris in Earth orbit: _URL_0_\n\nNow, the important thing to bear in mind here is that to maintain a stable orbit any object is falling towards the earth, so must be travelling sideways fast enough to 'miss'. If you want an idea of how this works I can thoroughly recommend going and having a play with Kerbal Space Programme - a free game which lets you play with launching rockets and reaching orbits. 
The physics is slightly different to earth, but the principals are identical.\n\nThe basics of the problem are that objects in low orbit have to be travelling very fast, and objects in higher orbit need to be travelling slower. That ring in the image of space objects represents the geosynchronous orbit - i.e. the altitude at which an object has to travel at a velocity such that it orbits once every 24 hours (meaning it sits exactly above the same point above the equator as the earth rotates). For all the other points, bear in mind that most of those points are in elliptical orbits, so occupy a lot more space over time than a single dot can represent.\n\nGenerating a disk is pretty straight forward - dump enough material up there in the right orbital plane and a disk will self generate. Objects moving too fast for their current altitude will prograde out, objects travelling too slowly for their current altitude will fall back in, over time generating a disc. There's a demonstration here. _URL_1_\n\nNot sure how you're envisaging a gap between the rings and the satellite range - as with anythign in orbit, it either progrades out and eventually we lose it to space, or it retrogrades in eventually entering our atmosphere.", "provenance": null }, { "answer": "On that note, would it be practical or beneficial if we built a ring space station around Earth? Would there be negative effects on the Earth? Or maybe something like Halo? I guess that's getting a little too carried away...", "provenance": null }, { "answer": "Until someone gets this done for you, watch [this](_URL_0_).", "provenance": null }, { "answer": "I'm not sure this qualifies as \"saturn style\" but an artificial ring around the earth was created in the early sixties.\n\n_URL_0_", "provenance": null }, { "answer": "[We already are.](_URL_0_)", "provenance": null }, { "answer": "Rings are pretty and all but in terms of practicality, that's like putting a minefield in orbit for anything else you want up there. The larger particles in these rings are still small and pretty well spaced out, so you could plan around it, but it would be a serious hindrance to earth-orbit spaceflight.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "40772439", "title": "Rings of Saturn (band)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 705, "text": "Rings of Saturn is an American deathcore band from the Bay Area, California. The band was formed in 2009 and was originally just a studio project. However, after gaining a wide popularity and signing to Unique Leader Records, the band formed a full line-up and became a full-time touring band. Rings of Saturn's music features a highly technical style, heavily influenced by themes of alien life and outer space. They have released four full-length albums, with their third, \"Lugal Ki En\", released in 2014 and peaking at 126 on the American \"Billboard\" 200 chart while their fourth, \"Ultu Ulla\" was released in 2017 and peaked at 76 on the Billboard 200 chart, making it the band's highest peak to date.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24718", "title": "Ring system", "section": "Section::::Ring systems of planets.:Saturn.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 644, "text": "Saturn's rings are the most extensive ring system of any planet in the Solar System, and thus have been known to exist for quite some time. 
Galileo Galilei first observed them in 1610, but they were not accurately described as a disk around Saturn until Christiaan Huygens did so in 1655. With help from the NASA/ESA/ASI Cassini mission, a further understanding of the ring formation and active movement was understood. The rings are not a series of tiny ringlets as many think, but are more of a disk with varying density. They consist mostly of water ice and trace amounts of rock, and the particles range in size from micrometers to meters.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11896200", "title": "(Drawing) Rings Around the World", "section": "Section::::Themes and recording.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 649, "text": "According to lead vocalist Gruff Rhys, \"(Drawing) Rings Around the World\" is about \"all the rings of communication around the world. All the rings of pollution, and all the radioactivity that goes around. If you could visualize all the things we don't see, Earth could look like some kind of fucked-up Saturn. And that's the idea I have in my head – surrounded by communication lines and traffic and debris thrown out of spaceships.\" Rhys has claimed that the theory was initially his girlfriend's father's. The track was recorded in 2000 at Monnow Valley Studio, Rockfield, Monmouthshire and was produced by the Super Furry Animals and Chris Shaw.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "40772439", "title": "Rings of Saturn (band)", "section": "Section::::History.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 982, "text": "Rings of Saturn was formed in 2009 in high school only as a studio recording project with Lucas Mann on guitars, bass, and keyboards, Peter Pawlak on vocals, and Brent Silletto on drums. The band posted a track titled \"Abducted\" online and quickly gained listeners. The band recorded their debut album, \"Embryonic Anomaly\", with Bob Swanson at Mayhemnness Studios in Sacramento, CA. The album was self-released by the band on May 25, 2010. Four months after releasing \"Embryonic Anomaly\", the band signed to Unique Leader Records. In the months following the band's signing, Joel Omans was added as a second guitarist and the band graduated high school which led to their embarking on tours. \"Embryonic Anomaly\" was re-released through Unique Leader on March 1, 2011, and their two following albums would later also be released through the label. 
In December 2011, Brent Silletto and Peter Pawlak both left the band of their own accord, mainly to seek out a different lifestyle.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "40772439", "title": "Rings of Saturn (band)", "section": "Section::::Musical style.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 335, "text": "AllMusic described Rings of Saturn as a \"progressive, technical deathcore outfit\", writing that they have \"humorously deemed their brand of technical death metal 'aliencore.'\" The band employs fast riffing with an added harmony effect, fast tempos, ambient elements, and lyrics that deal with space invasion and extraterrestrial life.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14336886", "title": "Inge King", "section": "Section::::Major works.:\"Rings of Saturn\".\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 216, "text": "\"Rings of Saturn\" is located in the Sir Rupert Hamer Garden, in the grounds of the Heide Museum of Modern Art in Bulleen, a suburb of Melbourne. Shortly after the dedication of this work, in August 2006, King said:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "977592", "title": "Rings of Saturn", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 544, "text": "The rings of Saturn are the most extensive ring system of any planet in the Solar System. They consist of countless small particles, ranging in size from micrometers to meters, that orbit about Saturn. The ring particles are made almost entirely of water ice, with a trace component of rocky material. There is still no consensus as to their mechanism of formation. Although theoretical models indicated that the rings were likely to have formed early in the Solar System's history, new data from \"Cassini\" suggest they formed relatively late.\n", "bleu_score": null, "meta": null } ] } ]
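The "fast low, slow high" relationship and the geosynchronous ring described in the first answer can be sanity-checked with Kepler's third law. Below is a minimal Python sketch; the constants are standard textbook values, not taken from the thread:

```python
import math

MU = 3.986004418e14        # Earth's gravitational parameter G*M, m^3/s^2
R_EARTH = 6.378e6          # equatorial radius, m
SIDEREAL_DAY = 86164.1     # s; a geosynchronous orbit matches Earth's rotation period

def circular_speed(r):
    """Speed of a circular orbit of radius r: v = sqrt(mu / r)."""
    return math.sqrt(MU / r)

# Kepler's third law, solved for the orbital radius of a given period T:
# r = (mu * T^2 / (4 * pi^2))^(1/3)
r_geo = (MU * SIDEREAL_DAY ** 2 / (4 * math.pi ** 2)) ** (1 / 3)

print(f"GEO altitude: {(r_geo - R_EARTH) / 1e3:,.0f} km")                          # ~35,786 km
print(f"speed at GEO: {circular_speed(r_geo):,.0f} m/s")                           # ~3,075 m/s
print(f"speed at 400 km (ISS-like): {circular_speed(R_EARTH + 400e3):,.0f} m/s")   # ~7,670 m/s
```

The roughly 3 km/s at the geosynchronous ring versus roughly 7.7 km/s in low orbit is exactly the velocity gradient the answer describes: material too fast or too slow for its altitude gets sorted, over time, into a disc.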
null
1fkazw
In the American Civil War, was the Union victory at Vicksburg of equal, lesser, or greater significance than Antietam and/or Gettysburg in ending the war?
[ { "answer": "This is a good question and difficult to asses still nowadays.\nFirst Antietam, I believe it was never considered a big victory like Vicksburg or Gettysburg. It was still considered a victory, and good enough for Lincoln to issue his Emancipation declaration, but for the public in general it was obscured by the fact it was the bloodiest day of the war up to then, and Lee left the field by extricating his troops over night which made a Union victory for the standards of the time. But it was an inconclusive victory anyway.\nReaction to Vicksburg and Gettysburg was quite different. On one hand Vicksburg was well documented and expected, Grant had put the city under siege for months and everyone expected the outcome, the campaign was well covered by the newspapers. \nGettysburg on the other hand just happened, and engagement was foreseen sooner or later as soon as both armies were set in motion, they should clash at one point.\nReaction to both victories varied however. Grant's victory was much praised, Meade's victory in Gettysburg however seems to have received some criticism specially from President Lincoln himself. Meade was criticized for not counterattacking and pursuing Lee's army to destroy it. Pemberton's army at Vicksburg collapsed and surrendered, the place was lost and the Mississippi river closed to the Confederacy. Gettysburg on the other hand represented no territorial gains, Lee's army retreated (with heavy losses and never to regain the initiative) but kept its cohesion to fight again for 2 more years, the Union army of the Potomac was heavily battered too after 3 days of fighting, and overall the South did not seem to have perceived it as a major defeat. Yes, Lee was repulsed and it was a setback but he was not bowed. \nAt the end after all these battles nobody could see a clear end to the war, and they were right as the war went on for 2 more years. Professor Gary Gallagher argues around it extensively.\nAnswering your question, Vicksburg seem to have had a greater impact on the American public in terms of victory perception. Vicksburg would also precipitate the rise of Grant as military commander in chief of all Union armies bringing a much needed change in the chain of command and a badly needed change in the Eastern theater and the overall Union strategy.\n", "provenance": null }, { "answer": "The Vicksburg campaign was absolutely huge for the Union victory. What it essentially did was cut off vital sections of the Confederacy from the rest of it. Furthermore, the fall of Vicksburg gave the Union a much easier route into the South in the western theater of the war. Think of the Mississippi River as a huge road into the South with Vicksburg as the largest defense of it. Once the city fell, the Union now had free reign to use the river as it pleased to get South. The victory was also a significant blow to the Confederate fighting force as 30,000+ soldiers surrendered and were no longer able to fight. In addition to this, Vicksburg was a huge morale blow to the Confederacy. The city of Vicksburg was one of the most heavily fortified cities in the Confederacy with several natural barriers that gave it the reputation of being impregnable, and when did fell, a huge blow morale-wise was felt in the CSA.\n\nThe importance of Gettysburg truly lies in the fact that it was the last time that Lee was able to take the war to the North. 
While Meade was not able to completely destroy Lee, the CSA would not be able to invade the North anymore, and ultimately the war became defensive for them. Furthermore, the idea that Robert E. Lee was an invincible general no longer remained intact, and the Union became encouraged to launch offensives on Southern soil in the east.\n\nIMO, both battles act in conjunction as one giant turning point, as they take place only a day apart. I will say that Vicksburg holds a bit more significance militarily and strategically, in that the victory split the Confederacy in two and cut off vital supply lines from Texas and Arkansas. Aside from this, the victory and disabling of the Confederate army gave Grant a huge amount of prominence, which eventually led to him becoming chief commander of Union forces. Ultimately, Vicksburg split the CSA, cut CSA supply routes, and gave the Union an easy road south, while Gettysburg changed the complexion of the war from the North being on the defensive to quickly taking the offensive.", "provenance": null }, { "answer": "In recent decades there has been a torrent of criticism regarding the emphasis of ACW military studies. Shelby Foote, for instance, has repeatedly criticized historians for focusing far too much on the Army of Northern Virginia. Foote has also repeatedly said that the Confederacy itself focused far too much on Lee's army and not enough on the west, where the Confederacy repeatedly met with defeat. Foote famously said that Lee \"was marching the wrong way\" when he set out for Gettysburg, noting that Lee should have moved to relieve Vicksburg, or redeployed forces to Johnston's army. It is notable that the major Confederate victory in the West, Chickamauga, came after Longstreet's Corps had been redeployed there.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "10890860", "title": "Presidency of Abraham Lincoln", "section": "Section::::American Civil War.:Eastern Theater to 1864.:Gettysburg Campaign.\n", "start_paragraph_id": 55, "start_character": 0, "end_paragraph_id": 55, "end_character": 649, "text": "The Confederate and Union armies met at the Battle of Gettysburg on July 1. The battle, fought over three days, resulted in the highest number of casualties in the war. Along with the Union victory in the Siege of Vicksburg, the Battle of Gettysburg is often referred to as a turning point in the war. Though the battle ended with a Confederate retreat, Lincoln was dismayed that Meade had failed to destroy Lee's army. Feeling that Meade was a competent commander despite his failure to pursue Lee, Lincoln allowed Meade to remain in command of the Army of the Potomac. The Eastern Theater would be locked in a stalemate for the remainder of 1863.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "122459", "title": "Vicksburg, Mississippi", "section": "Section::::History.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 602, "text": "During the American Civil War, the city finally surrendered during the Siege of Vicksburg, after which the Union Army gained control of the entire Mississippi River. The 47-day siege was intended to starve the city into submission. Its location atop a high bluff overlooking the Mississippi River proved otherwise impregnable to assault by federal troops. The surrender of Vicksburg by Confederate General John C. Pemberton on July 4, 1863, together with the defeat of General Robert E. 
Lee at Gettysburg the day before, has historically marked the turning point of the Civil War in the Union's favor.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "360126", "title": "Union Army", "section": "Section::::Union victory.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 579, "text": "The decisive victories by Grant and Sherman resulted in the surrender of the major Confederate armies. The first and most significant was on April 9, 1865, when Robert E. Lee surrendered the Army of Northern Virginia to Grant at Appomattox Court House. Although there were other Confederate armies that surrendered in the following weeks, such as Joseph E. Johnston's in North Carolina, this date was nevertheless symbolic of the end of the bloodiest war in American history, the end of the Confederate States of America, and the beginning of the slow process of Reconstruction.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10545741", "title": "Religious views of Abraham Lincoln", "section": "Section::::Later years.:1863: Gettysburg.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 202, "text": "1863 was to be the year, however, in which the tide turned in favor of the Union. The Battle of Gettysburg in July 1863 was the first time that Lee was soundly defeated. Prompted by Sarah Josepha Hale,\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4849", "title": "Battle of Gettysburg", "section": "Section::::Historical assessment.:Decisive victory controversies.\n", "start_paragraph_id": 113, "start_character": 0, "end_paragraph_id": 113, "end_character": 913, "text": "It is currently a widely held view that Gettysburg was a decisive victory for the Union, but the term is considered imprecise. It is inarguable that Lee's offensive on July 3 was turned back decisively and his campaign in Pennsylvania was terminated prematurely (although the Confederates at the time argued that this was a temporary setback and that the goals of the campaign were largely met). However, when the more common definition of \"decisive victory\" is intended—an indisputable military victory of a battle that determines or significantly influences the ultimate result of a conflict—historians are divided. For example, David J. Eicher called Gettysburg a \"strategic loss for the Confederacy\" and James M. McPherson wrote that \"Lee and his men would go on to earn further laurels. But they never again possessed the power and reputation they carried into Pennsylvania those palmy summer days of 1863.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "469583", "title": "Pickett's Charge", "section": "Section::::Aftermath.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 523, "text": "The Union counteroffensive never came; the Army of the Potomac was exhausted and nearly as damaged at the end of the three days as the Army of Northern Virginia. Meade was content to hold the field. On July 4, the armies observed an informal truce and collected their dead and wounded. Meanwhile, Maj. Gen. Ulysses S. Grant accepted the surrender of the Vicksburg garrison along the Mississippi River, splitting the Confederacy in two. 
These two Union victories are generally considered the turning point of the Civil War.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14807802", "title": "Battle of Marion", "section": "Section::::Background.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 848, "text": "By 1864, the American Civil War was slowly drawing to a close. With Abraham Lincoln re-elected as President of the Union, and Gen. Ulysses Grant made commander of the Union Army, the possibility of a Confederate victory was steadily lessened. Along the Eastern Seaboard, Union forces pushed the Confederate forces of Gen. Robert E. Lee steadily back in successive Union victories at Wilderness and Spotsylvania. In the Appalachian mountains, Philip Sheridan had defeated Confederate armies in the Shenandoah valley. As Union forces pushed southward, they destroyed significant portions of the Confederate agriculture base. As Union forces defeated Confederate armies in the northern reaches of the CSA, Gen. William T. Sherman began his march to the sea, which would eventually succeed in destroying 20% of the agricultural production in Georgia.\n", "bleu_score": null, "meta": null } ] } ]
null
16s2in
please explain utilitarianism to me like i'm 5.
[ { "answer": "There's this Cookie Monster, and he's obsessed with getting cookies. Whatever gets him the most cookies is what makes him the happiest. So if by taking a cookie from someone, the Cookie Monster can get two cookies, even though the person you took the cookie from is losing a cookie, there's still a net gain of one cookie. \n\nUtilitarianism is that idea on a grander scale. Whatever causes the greatest worldwide happiness is the best thing to.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "3406017", "title": "Secular ethics", "section": "Section::::Key philosophers and philosophical texts.:Utilitarianism.\n", "start_paragraph_id": 64, "start_character": 0, "end_paragraph_id": 64, "end_character": 558, "text": "Utilitarianism (from the Latin utilis, useful) is a theory of ethics that prescribes the quantitative maximization of good consequences for a population. It is a form of consequentialism. This good to be maximized is usually happiness, pleasure, or preference satisfaction. Though some utilitarian theories might seek to maximize other consequences, these consequences generally have something to do with the welfare of people (or of people and nonhuman animals). For this reason, utilitarianism is often associated with the term welfarist consequentialism.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31792", "title": "Utilitarianism", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 462, "text": "Utilitarianism is a family of consequentialist ethical theories that promotes actions that maximize happiness and well-being for the majority of a population. Although different varieties of utilitarianism admit different characterizations, the basic idea behind all of them is to in some sense maximize utility, which is often defined in terms of well-being or related concepts. For instance, Jeremy Bentham, the founder of utilitarianism, described utility as\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31792", "title": "Utilitarianism", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 271, "text": "Utilitarianism is a version of consequentialism, which states that the consequences of any action are the only standard of right and wrong. Unlike other forms of consequentialism, such as egoism and altruism, utilitarianism considers the interests of all beings equally.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10847522", "title": "Two-level utilitarianism", "section": "Section::::Utilitarianism.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 695, "text": "Utilitarianism is a type of consequentialist ethical theory. According to such theories, only the outcome of an action is morally relevant (this contrasts with deontology, according to which moral actions flow from duties or motives). Utilitarianism is \"a combination of consequentialism and the\" philosophical position hedonism, which states that pleasure, or happiness, is the only good worth pursuing. Therefore, since only the consequences of an action matter, and only happiness matters, \"only happiness that is the consequence of an action is morally relevant\". 
There are similarities with preference utilitarianism, where utility is defined as individual preference rather than pleasure.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "216411", "title": "Land ethic", "section": "Section::::Utilitarian-based land ethic.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 1366, "text": "Utilitarianism was most prominently defended by British philosophers Jeremy Bentham and John Stuart Mill. Though there are many varieties of utilitarianism, generally it is the view that a morally right action is an action that produces the maximum good for people. Utilitarianism has often been used when deciding how to use land and it is closely connected with an economic-based ethic. For example, it forms the foundation for industrial farming; an increase in yield, which would increase the number of people able to receive goods from farmed land, is judged from this view to be a good action or approach. In fact, a common argument in favor of industrial agriculture is that it is a good practice because it increases the benefits for humans; benefits such as food abundance and a drop in food prices. However, a utilitarian-based land ethic is different from a purely economic one as it could be used to justify the limiting of a person's rights to make profit. For example, in the case of the farmer planting crops on a slope, if the runoff of soil into the community creek led to the damage of several neighbor's properties, then the good of the individual farmer would be overridden by the damage caused to his neighbors. Thus, while a utilitarian-based land ethic can be used to support economic activity, it can also be used to challenge this activity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "60810780", "title": "Legal norms", "section": "Section::::Normative Legal Theory.:Utilitarianism.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 265, "text": "Utilitarianism is a form of consequentialism whereby decisions are made by predicting the outcome that determines the moral worth of an action. It assumes that the system of legal rules, as opposed to individual moral rules, provides the relevant scope of a decision.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13470", "title": "Hedonism", "section": "Section::::History of development.:Utilitarianism.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 886, "text": "Utilitarianism addresses problems with moral motivation neglected by Kantianism by giving a central role to happiness. It is an ethical theory holding that the proper course of action is the one that maximizes the overall good of the society. It is thus one form of consequentialism, meaning that the moral worth of an action is determined by its resulting outcome. The most influential contributors to this theory are considered to be the 18th and 19th-century British philosophers Jeremy Bentham and John Stuart Mill. Conjoining hedonism—as a view as to what is good for people—to utilitarianism has the result that all action should be directed toward achieving the greatest total amount of happiness (see Hedonic calculus). Though consistent in their pursuit of happiness, Bentham and Mill's versions of hedonism differ. There are two somewhat basic schools of thought on hedonism:\n", "bleu_score": null, "meta": null } ] } ]
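The Cookie Monster example is, at bottom, a utility calculation: sum everyone's change in happiness and pick the action with the highest total. A toy Python sketch (the actors and numbers are invented for illustration, not part of any formal theory):

```python
def total_utility(changes):
    """Classic-utilitarian scoring: sum the happiness change of everyone affected."""
    return sum(changes.values())

# Taking one cookie lets the Cookie Monster end up with two:
# he gains 2 cookies of happiness, the victim loses 1.
actions = {
    "take a cookie, gain two": {"cookie_monster": +2, "victim": -1},  # net +1
    "do nothing":              {"cookie_monster": 0,  "victim": 0},   # net 0
}

best = max(actions, key=lambda name: total_utility(actions[name]))
print(best)  # "take a cookie, gain two" -- the greatest total happiness wins
```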
null
5e70wo
what do ionizers in air purifiers do?
[ { "answer": "It introduces a mild charge to the small particles in the air, which makes them stick to things in the room rather than float around forever. \n\nBut on a practical level with consumer devices...i cant actually tell when they're on or off so i don't think they do much. I leave mine off for the most part. I wouldn't factor the ionizer into your decision at all. \n\nI have cats, but I'm allergic to cats, so my allergy symptoms are a decent indicator of whether something works or not. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "551777", "title": "Air ioniser", "section": "Section::::Electrostatic neutraliser in electronics.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 446, "text": "Air ionisers are often used in places where work is done involving static-electricity-sensitive electronic components, to eliminate the build-up of static charges on non-conductors. As those elements are very sensitive to electricity, they cannot be grounded because the discharge will destroy them as well. Usually, the work is done over a special dissipative table mat, which allows a very slow discharge, and under the air gush of an ioniser.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "551777", "title": "Air ioniser", "section": "Section::::Ionic air purifiers.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 848, "text": "Air ionisers are used in air purifiers to remove particles from air. Airborne particles become charged as they attract charged ions from the ioniser by electrostatic attraction. The particles in turn are then attracted to any nearby earthed (grounded) conductors, either deliberate plates within an air cleaner, or simply the nearest walls and ceilings. The frequency of nosocomial infections in British hospitals prompted the National Health Service (NHS) to research the effectiveness of anions for air purification, finding that repeated airborne acinetobacter infections in a ward were eliminated by the installation of a negative air ioniser—the infection rate fell to zero, an unexpected result. Positive and negative ions produced by air conditioning systems have also been found by a manufacturer to inactivate viruses including influenza.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "551777", "title": "Air ioniser", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 718, "text": "An air ioniser (or negative ion generator or Chizhevsky's chandelier) is a device that uses high voltage to ionise (electrically charge) air molecules. Negative ions, or anions, are particles with one or more extra electron, conferring a net negative charge to the particle. Cations are positive ions missing one or more electrons, resulting in a net positive charge. Some commercial air purifiers are designed to generate negative ions. Another type of air ioniser is the electrostatic discharge (ESD) ioniser (balanced ion generator) used to neutralise static charge. 
In 2002, in an obituary in \"The Independent\" newspaper, Cecil Alfred 'Coppy' Laws was credited with being the inventor of the domestic air ioniser.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1197569", "title": "Air purifier", "section": "Section::::Purifying techniques.:Other methods.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 729, "text": "BULLET::::- Ionizer purifiers use charged electrical surfaces or needles to generate electrically charged air or gas ions. These ions attach to airborne particles which are then electrostatically attracted to a charged collector plate. This mechanism produces trace amounts of ozone and other oxidants as by-products. Most ionizers produce less than 0.05 ppm of ozone, an industrial safety standard. There are two major subdivisions: the fanless ionizer and fan-based ionizer. Fanless ionizers are noiseless and use little power, but are less efficient at air purification. Fan-based ionizers clean and distribute air much faster. Permanently mounted home and industrial ionizer purifiers are called electrostatic precipitators.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "75049", "title": "Electrostatic discharge", "section": "Section::::Damage prevention in electronics.:Protection during manufacturing.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 471, "text": "Ionizers are used especially when insulative materials cannot be grounded. Ionization systems help to neutralize charged surface regions on insulative or dielectric materials. Insulating materials prone to triboelectric charging of more than 2,000 V should be kept away at least 12 inches from sensitive devices to prevent accidental charging of devices through field induction. On aircraft, static dischargers are used on the trailing edges of wings and other surfaces.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "551777", "title": "Air ioniser", "section": "Section::::Ions versus ozone.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 526, "text": "Ionisers are distinct from ozone generators, although both devices operate in a similar way. Ionisers use electrostatically charged plates to produce positively or negatively charged gas ions (for instance N2 or O2 ions) that particulate matter sticks to in an effect similar to static electricity. Even the best ionisers will also produce a small amount of ozone—triatomic oxygen, O3—which is unwanted. Ozone generators are optimised to attract an extra oxygen ion to an O2 molecule, using either a corona discharge tube or UV light.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "182734", "title": "Magnetohydrodynamic drive", "section": "Section::::Typology.:Aircraft propulsion.:Active flow control.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 404, "text": "Air ionization is achieved at high altitude (electrical conductivity of air increases as atmospheric pressure reduces according to Paschen's law) using various techniques: high voltage electric arc discharge, RF (microwaves) electromagnetic glow discharge, laser, e-beam or betatron, radioactive source… with or without seeding of low ionization potential alkali substances (like caesium) into the flow.\n", "bleu_score": null, "meta": null } ] } ]
null
72q0jx
how exactly are compounds named?
[ { "answer": "-ide is the suffix of any negatively charged anion, eg. the anion of chlorine (Cl) is called chloride (Cl^(-))\n\n---\n\n-ate and -ite are the suffixes of some polyatomic ions. That's a really messy topic to get in to and some of the naming isn't always logical.\n\nNitr**ate** is NO*_3_*^(-), nitr**ite** is NO*_2_*^-\n\nChlor**ate** is ClO*_3_*^(-), chlor**ite** is ClO*_2_*^- (also there is **per**chlor**ate** which is ClO*_4_*^(-) and **hypo**chlor**ite** which is ClO^(-)...)\n\nSulf**ate** is SO*_4_*^(2-), sulf**ite** is SO*_3_*^(2-)\n\nPhosph**ate** is PO*_4_*^(3-), phosph**ite** is HPO*_3_*^(2-)\n\nWhich is which sort of just has to be memorised, sorry...\n\n---\n\n-ous and -ic have been depreciated but some syllabuses haven't been updated\n\n-ous is the lower of two oxidation states, -ic is the higher, and this gets applied to the latin name eg. ferrous is iron(II) and ferric is iron(III), cuprous is copper(I) and cupric is copper(II). Like I said, this system has been depreciated, IUPAC recommends everyone uses names like iron(III) chloride instead of ferric chloride.\n\n---\n\nperoxide denotes an oxygen-oxygen single bond, hydrogen peroxide looks like this: H-O-O-H\n\npermanganate is another polyatomic ion like the other -ate ones above...it is MnO*_4_*^- not to be confused with just regular manganate which is MnO*_4_*^(2-)...\n\n\n---\n\nAs above the latin names have been depreciated, but you'll still have to learn them...\n\nThe modern way is to write the oxidation state in brackets immediately after the metal.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "37855063", "title": "Drug nomenclature", "section": "Section::::Types.:Chemical names.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 813, "text": "The chemical names are the scientific names, based on the molecular structure of the drug. There are various systems of chemical nomenclature and thus various chemical names for any one substance. The most important is the IUPAC name. Chemical names are typically very long and too complex to be commonly used in referring to a drug in speech or in prose documents. For example, \"1-(isopropylamino)-3-(1-naphthyloxy) propan-2-ol\" is a chemical name for propranolol. Sometimes, a company that is developing a drug might give the drug a company code, which is used to identify the drug while it is in development. For example, CDP870 was UCB’s company code for certolizumab pegol; UCB later chose \"Cimzia\" as its trade name. Many of these codes, although not all, have prefixes that correspond to the company name.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5180", "title": "Chemistry", "section": "Section::::Modern principles.:Matter.:Compound.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 771, "text": "A \"compound\" is a pure chemical substance composed of more than one element. The properties of a compound bear little similarity to those of its elements. The standard nomenclature of compounds is set by the International Union of Pure and Applied Chemistry (IUPAC). Organic compounds are named according to the organic nomenclature system. The names for inorganic compounds are created according to the inorganic nomenclature system. When a compound has more than one component, then they are divided into two classes, the electropositive and the electronegative components. 
In addition, the Chemical Abstracts Service has devised a method to index chemical substances. In this scheme, each chemical substance is identifiable by a number known as its CAS registry number.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "555747", "title": "List of chemical compounds with unusual names", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 629, "text": "Chemical nomenclature, replete as it is with compounds with complex names, is a repository for some very peculiar and sometimes startling names. A browse through the \"Physical Constants of Organic Compounds\" in the \"CRC Handbook of Chemistry and Physics\" (a fundamental resource) will reveal not just the whimsical work of chemists, but the sometimes peculiar compound names that occur as the consequence of simple juxtaposition. Some names derive legitimately from their chemical makeup, from the geographic region where they may be found, the plant or animal species from which they are isolated or the name of the discoverer.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "522192", "title": "Schiff base", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 251, "text": "A number of special naming systems exist for these compounds. For instance a Schiff base derived from an aniline, where R is a phenyl or a substituted phenyl, can be called an \"anil\", while bis-compounds are often referred to as salen-type compounds.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3156262", "title": "Oxyacid", "section": "Section::::Names of inorganic oxyacids.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 384, "text": "This practice is well established, and IUPAC has accepted such names. In light of the current chemical nomenclature, this practice is, however, very exceptional, because systematic names of all other compounds are formed only according to what elements they contain and what is their molecular structure, not according to what other properties (for example, acidity) they have.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9722260", "title": "Chemical substance", "section": "Section::::Naming and indexing.\n", "start_paragraph_id": 35, "start_character": 0, "end_paragraph_id": 35, "end_character": 427, "text": "Many compounds are also known by their more common, simpler names, many of which predate the systematic name. For example, the long-known sugar glucose is now systematically named 6-(hydroxymethyl)oxane-2,3,4,5-tetrol. Natural products and pharmaceuticals are also given simpler names, for example the mild pain-killer Naproxen is the more common name for the chemical compound (S)-6-methoxy-α-methyl-2-naphthaleneacetic acid.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49583513", "title": "IUPAC nomenclature of chemistry", "section": "Section::::Use.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 241, "text": "IUPAC nomenclature is used for the naming of chemical compounds, based on their chemical composition and their structure. For example, one can deduce that 1-chloropropane has a chlorine atom on the first carbon in the 3-carbon propane chain.\n", "bleu_score": null, "meta": null } ] } ]
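The suffix rules in the answer above are mechanical enough to put in a lookup table. Here is a toy naming helper in Python, limited to the ions the answer lists; it illustrates the -ide/-ate/-ite pattern and the modern oxidation-state style, and is not a general IUPAC nomenclature engine:

```python
ANION_NAMES = {
    "Cl^-": "chloride",        # -ide: simple anion
    "NO3^-": "nitrate",        # -ate / -ite pairs just have to be memorised
    "NO2^-": "nitrite",
    "ClO4^-": "perchlorate",   # per-...-ate: one more oxygen than -ate
    "ClO3^-": "chlorate",
    "ClO2^-": "chlorite",
    "ClO^-": "hypochlorite",   # hypo-...-ite: one less oxygen than -ite
    "SO4^2-": "sulfate",
    "SO3^2-": "sulfite",
}

def name_ionic(metal, oxidation_state, anion):
    """Modern style: metal name, oxidation state in brackets, then the anion name."""
    return f"{metal}({oxidation_state}) {ANION_NAMES[anion]}"

print(name_ionic("iron", "III", "Cl^-"))   # iron(III) chloride (deprecated: ferric chloride)
print(name_ionic("copper", "I", "NO3^-"))  # copper(I) nitrate  (deprecated: cuprous nitrate)
```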
null
1a1xzj
I've read a little bit about the effects of THC on cancer. Is any of this research substantial, or is it just not well understood yet?
[ { "answer": "The research done in this paper was done to cell lines so it was not quite *in vivo*.\n\n > One possible drawback could be that use of select CB2 agonists to kill tumor cells may also cause immunosuppression. Thus, further studies are necessary to address the relative sensitivity of normal and transformed immune cells to CB2 agonists in vivo.\n\nThe pathway they are testing here is very specific, so the researchers need to test it in a living specimen to see if they will get the same results. A lot can change from *in vitro* to *in vivo*. But it still is a cool study on THC. It was published in 2006 so there may be more modern articles", "provenance": null }, { "answer": "one thing to notice is that Jurkat cells do not form solid tumors. And that kind of rules out majority of the cancers as potentially targetable using THC for treatment. \n\nTHC research has been mainly focused on palliative care for patients undergoing chemotherapy or other treatments. A brief overview of the research done on THC's anti-cancer effects can be found on [NCI's website](_URL_0_)\n\nAlso, you can check out the research by [Donald Abrams from UCSF](_URL_1_). he is a big proponent of using marijuana for treating cancer and AIDS. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "23789332", "title": "2,3,7,8-Tetrachlorodibenzodioxin", "section": "Section::::Toxicology.:Mechanism of action.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 699, "text": "While the mutagenic and genotoxic effects of TCDD are sometimes disputed and sometimes confirmed it does foster the development of cancer. Its main action in causing cancer is cancer promotion; it promotes the carcinogenicity initiated by other compounds. Very high doses may, in addition, cause cancer indirectly; one of the proposed mechanisms is oxidative stress and the subsequent oxygen damage to DNA. There are other explanations such as endocrine disruption or altered signal transduction. The endocrine disrupting activities seem to be dependent on life stage, being anti-estrogenic when estrogen is present (or in high concentration) in the body, and estrogenic in the absence of estrogen.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8573406", "title": "Phenothrin", "section": "Section::::Effects.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 272, "text": "The EPA has not assessed its effect on cancer in humans. However, one study performed by the Mt. Sinai School of Medicine linked Sumithrin with breast cancer; the link made by its effect on increasing the expression of a gene responsible for mammary tissue proliferation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "153316", "title": "Alpha-Linolenic acid", "section": "Section::::Potential role in nutrition and health.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 512, "text": "Multiple studies have shown a relationship between α-linolenic acid and an increased risk of prostate cancer. This risk was found to be irrespective of source of origin (e.g., meat, vegetable oil). 
However, a large 2006 study found no association between total α-linolenic acid intake and overall risk of prostate cancer; and a 2009 meta-analysis found evidence of publication bias in earlier studies, and concluded that if ALA contributes to increased prostate cancer risk, the increase in risk is quite small.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4498159", "title": "Estramustine phosphate", "section": "Section::::Medical uses.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 452, "text": "Due to its relatively severe side effects and toxicity, EMP has rarely been used in the treatment of prostate cancer. This is especially true in Western countries today. As a result, and also due to the scarce side effects of gonadotropin-releasing hormone modulators (GnRH modulators) like leuprorelin, EMP was almost abandoned. However, encouraging clinical research findings resulted in renewed interest of EMP for the treatment of prostate cancer.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33726612", "title": "Cytostasis", "section": "Section::::Medical uses.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 295, "text": "Breast cancer – One study indicates nitric oxide (NO) is able to have a cytostatic effect on the human breast cancer cell line MDA-MB-231. Not only does nitric oxide stop cell growth, the study shows that it can also induce apoptosis after the cancer cells have been exposed to NO over 48 hours\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "574326", "title": "Low molecular weight heparin", "section": "Section::::Medical uses.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 482, "text": "Patients with cancer are at higher risk of venous thromboembolism and LMWHs are used to reduce this risk. The CLOT study, published in 2003, showed that, in patients with malignancy and acute venous thromboembolism, dalteparin was more effective than warfarin in reducing the risk of recurrent embolic events. Use of LMWH in cancer patients for at least the first 3 to 6 months of long-term treatment is recommended in numerous guidelines and is now regarded as a standard of care.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "286282", "title": "Seveso disaster", "section": "Section::::Aftermath.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 447, "text": "Several studies have been completed on the health of the population of surrounding communities. While it has been established that people from Seveso exposed to TCDD are more susceptible to certain rare cancers, when all types of cancers are grouped into one category, no statistically significant excess has yet been observed. This indicates that more research is needed to determine the true long-term health effects on the affected population.\n", "bleu_score": null, "meta": null } ] } ]
null
1dvem0
why can dishwashers both wash and dry dishes, but clothes washers cannot wash and dry clothes?
[ { "answer": "[They can. But they aren't particularly efficient at it.](_URL_0_)", "provenance": null }, { "answer": "Because clothes can't be tried as easily by just making them super hot like a dishwasher does. Since dishes don't absorb water, and they also don't burn.\n\nDual machines for clothes can and do exist, but they're more expensive, and more prone to failure. Since the two jobs are really quite different (and plenty of clothes can be machine washed but not machine tried) it just makes more sense to buy them separate.", "provenance": null }, { "answer": "They can. There are units that can do both. They're popular with small apartments. From what I've heard from my friends that have them, they take a long time and don't do a very good job.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "651576", "title": "Dishwashing", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 462, "text": "Dishwashing or dish washing, also known as washing up, is the process of cleaning cooking utensils, dishes, cutlery and other items to prevent foodborne illness. This is either achieved by hand in a sink using dishwashing detergent or by using a dishwasher and may take place in a kitchen, utility room, scullery or elsewhere. In Britain to do the washing up also includes to dry and put away. There are cultural divisions over rinsing and drying after washing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8433568", "title": "Washer-dryer", "section": "Section::::Description.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 504, "text": "Combination washer dryers are popular among those living in smaller urban properties as they only need half the amount of space usually required for a separate washing machine and clothes dryer, and may not require an external air vent. Additionally, combination washer dryers allow clothes to be washed and dried \"in one go\", saving time and effort from the user. Many washer dryer combo units are also designed to be portable so it can be attached to a sink instead of requiring a separate water line.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "651576", "title": "Dishwashing", "section": "Section::::Implements.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 916, "text": "Dish washing is usually done using an implement for the washer to wield, unless done using an automated dishwasher. Commonly used implements include cloths, sponges, brushes or even steel wool. As fingernails are often more effective than soft implements like cloths at dislodging hard particles, washing simply with the hands is also done and can be effective as well. Dishwashing detergent is also generally used, but bar soap can be used acceptably, as well. Rubber gloves are often worn when washing dishes by people who are sensitive to hot water or dish-washing liquids. According to dermatologists, the use of protective gloves is highly recommended whenever working with water and cleaning products, since some chemicals may damage the skin, or allergies may develop in some individuals. Dish gloves are also worn by those who simply don't want to touch the old food particles. 
Many people also wear aprons.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "81036", "title": "Dishwasher", "section": "Section::::Adoption.:Commercial use.\n", "start_paragraph_id": 61, "start_character": 0, "end_paragraph_id": 61, "end_character": 486, "text": "Commercial dishwashers often have significantly different plumbing and operations than a home unit, in that there are often separate spray arms for washing and rinsing/sanitizing. The wash water is heated with an in-tank electric heat element and mixed with a cleaning solution, and is used repeatedly from one load to the next. The wash tank usually has a large strainer basket to collect food debris, and the strainer may not be emptied until the end of the day's kitchen operations.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "81036", "title": "Dishwasher", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 304, "text": "A dishwasher is a machine for cleaning dishware and cutlery automatically. Unlike manual dishwashing, which relies largely on physical scrubbing to remove soiling, the mechanical dishwasher cleans by spraying hot water at the dishes, with lower temperatures used for delicate items.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3263286", "title": "Dishwashing liquid", "section": "Section::::Primary uses.:Hand dishwashing.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 405, "text": "Hand dishwashing detergents utilize surfactants to play the primary role in cleaning. The reduced surface tension of dishwashing water, and increasing solubility of modern surfactant mixtures, allows the water to run off the dishes in a dish rack very quickly. However, most people also rinse the dishes with pure water to make sure to get rid of any soap residue that could affect the taste of the food.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3263286", "title": "Dishwashing liquid", "section": "Section::::Primary uses.:Hand dishwashing.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 500, "text": "Hand dishwashing is generally performed in the absence of a dishwashing machine, when large \"hard-to-clean\" items are present, or through preference. Some dishwashing liquids can harm household silver, fine glassware, anything with gold leaf, disposable plastics, and any objects made of brass, bronze, cast iron, pewter, tin, or wood, especially when combined with hot water and the action of a dishwasher. When dishwashing liquid is used on such objects it is intended that they be washed by hand.\n", "bleu_score": null, "meta": null } ] } ]
null
3gnu1e
Are there waves of air on top of our atmosphere like waves of water on the surface of the ocean?
[ { "answer": "Well, sort of.\n\nThere's no real top to our atmosphere the way there is a surface of the ocean - it just sort of gradually thins out.\n\nWith that said, though, both experience the same kind of [gravity waves](_URL_0_). Note these are not at all the same as *gravitational* waves you'd see around a black hole - similar name, very different phenomena. Gravity waves are essentially waves driven by a buoyancy force. In the ocean, you see them manifest as surface waves; in the atmosphere, they can sometimes be seen as undulations in clouds.\n\nGravity waves in the ocean break when they hit the beach. In the atmosphere, they tend to propagate upwards, breaking when the air gets so thin that it can't really carry them any more. There's good evidence to show that quite a few upper atmospheres are warmer than expected due to gravity waves breaking and depositing their energy at those locations.", "provenance": null }, { "answer": "Kelvin-Helmholtz wave clouds are formed when there are two parallel layers of air that are usually moving at different speeds and in opposite directions. The upper layer of air usually moves faster than the lower layer because there is less friction. In order for us to see this shear layer, there must be enough water vapor in the air for a cloud to form. Even if clouds are not present to reveal the shear layer, pilots need to be aware of invisible atmospheric phenomenon.\n\n_URL_0_", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "900160", "title": "Internal wave", "section": "Section::::Internal waves in the ocean.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 509, "text": "Most people think of waves as a surface phenomenon, which acts between water (as in lakes or oceans) and the air. Where low density water overlies high density water in the ocean, internal waves propagate along the boundary. They are especially common over the continental shelf regions of the world oceans and where brackish water overlies salt water at the outlet of large rivers. There is typically little surface expression of the waves, aside from slick bands that can form over the trough of the waves.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19112738", "title": "Glossary of fishery terms", "section": "Section::::O.\n", "start_paragraph_id": 186, "start_character": 0, "end_paragraph_id": 186, "end_character": 231, "text": "BULLET::::- Ocean surface waves - are surface waves that occur on the free surface of the ocean. They usually result from wind, and are also referred to as wind waves. Some waves can travel thousands of miles before reaching land.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "387440", "title": "Physical oceanography", "section": "Section::::Rapid variations.:Tsunamis.\n", "start_paragraph_id": 92, "start_character": 0, "end_paragraph_id": 92, "end_character": 219, "text": "A series of surface waves can be generated due to large-scale displacement of the ocean water. 
These can be caused by submarine landslides, seafloor deformations due to earthquakes, or the impact of a large meteorite.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "164313", "title": "Gravity wave", "section": "Section::::The generation of ocean waves by wind.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 284, "text": "Wind waves, as their name suggests, are generated by wind transferring energy from the atmosphere to the ocean's surface, and capillary-gravity waves play an essential role in this effect. There are two distinct mechanisms involved, named after their proponents, Phillips and Miles.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "164313", "title": "Gravity wave", "section": "Section::::The generation of ocean waves by wind.\n", "start_paragraph_id": 45, "start_character": 0, "end_paragraph_id": 45, "end_character": 858, "text": "In the work of Phillips, the ocean surface is imagined to be initially flat (\"glassy\"), and a turbulent wind blows over the surface. When a flow is turbulent, one observes a randomly fluctuating velocity field superimposed on a mean flow (contrast with a laminar flow, in which the fluid motion is ordered and smooth). The fluctuating velocity field gives rise to fluctuating stresses (both tangential and normal) that act on the air-water interface. The normal stress, or fluctuating pressure, acts as a forcing term (much like pushing a swing introduces a forcing term). If the frequency and wavenumber (ω, k) of this forcing term match a mode of vibration of the capillary-gravity wave (as derived above), then there is a resonance, and the wave grows in amplitude. As with other resonance effects, the amplitude of this wave grows linearly with time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4412802", "title": "Hydroacoustics", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 468, "text": "One of the main causes of hydroacoustic noise from fully submerged lifting surfaces is the unsteady separated turbulent flow near the surface's trailing edge that produces pressure fluctuations on the surface and unsteady oscillatory flow in the near wake. The relative motion between the surface and the ocean creates a turbulent boundary layer (TBL) that surrounds the surface. The noise is generated by the fluctuating velocity and pressure fields within this TBL.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "710251", "title": "Wind wave", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 401, "text": "In fluid dynamics, wind waves, or wind-generated waves, are water surface waves that occur on the free surface of the oceans and other bodies (like lakes, rivers, canals, puddles or ponds). They result from the wind blowing over an area of fluid surface. Waves in the oceans can travel thousands of miles before reaching land. Wind waves on Earth range in size from small ripples to very large waves.\n", "bleu_score": null, "meta": null } ] } ]
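Both kinds of gravity wave in the first answer obey simple dispersion formulas, which makes the "driven by buoyancy" claim easy to quantify. A short Python sketch; the 3 K/km potential-temperature gradient is a typical tropospheric value assumed here for illustration:

```python
import math

g = 9.81  # m/s^2

# Ocean surface gravity waves (deep water): omega^2 = g*k, so the phase
# speed is c = sqrt(g/k) -- longer waves travel faster.
for wavelength in (10.0, 100.0, 1000.0):  # metres
    k = 2 * math.pi / wavelength
    print(f"lambda = {wavelength:6.0f} m -> c = {math.sqrt(g / k):5.1f} m/s")

# Atmospheric (internal) gravity waves oscillate at or below the buoyancy
# (Brunt-Vaisala) frequency: N = sqrt((g / theta) * dtheta/dz).
theta = 290.0      # K, representative potential temperature
dtheta_dz = 3e-3   # K/m, assumed stable stratification
N = math.sqrt(g / theta * dtheta_dz)
print(f"buoyancy period ~ {2 * math.pi / N / 60:.0f} minutes")  # ~10 minutes
```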
null
48o3y3
If computers/electronics short circuit due to water damage, and if pure water does not carry current, could an electronic device technically run under pure water?
[ { "answer": "Yes it's technically possible, but the hazard that water poses extends beyond simply shorting circuits. Water can be corrosive to a lot of the different metals and chemical on a circuit board and it can especially react when exposed to metal containing flowing current. However, a circuit would most certainly be able to survive much longer in distilled water than it would tap water. \n\nOne interesting fact about distilled water is that it is still slightly conductive! Even with 100% pure water, it will still be slightly conductive due to a thing called hydronium, however I'm not sure if this would be conductive enough to short any circuits. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "39218481", "title": "Water capacitor", "section": "Section::::Water as a dielectric.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 591, "text": "Water has been shown not to be a very reliable substance to store electric charge long term, so more reliable materials are used for capacitors in industrial applications. However water has the advantage of being self healing after a breakdown, and if the water is steadily circulated through a de-ionizing resin and filters, then the loss resistance and dielectric behavior can be stabilized. Thus, in certain unusual situations, such as the generation of extremely high voltage but very short pulses, a water capacitor may be a practical solution – such as in an experimental Xray pulser.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1268562", "title": "Purified water", "section": "Section::::Uses.:Other uses.\n", "start_paragraph_id": 61, "start_character": 0, "end_paragraph_id": 61, "end_character": 265, "text": "Because of its high relative dielectric constant (~80), deionized water is also used (for short durations, when the resistive losses are acceptable) as a high voltage dielectric in many pulsed power applications, such as the Sandia National Laboratories Z Machine.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39218481", "title": "Water capacitor", "section": "Section::::Water as a dielectric.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 490, "text": "The drawback to using water is the short length of time it can hold off the voltage, typically in the microsecond to ten microsecond (μs) range. Deionized water is relatively inexpensive and is environmentally safe. These characteristics, along with the high dielectric constant, make water an excellent choice for building large capacitors. If a way can be found to reliably increase the hold off time for a given field strength, then there will be more applications for water capacitors.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1342988", "title": "Capacitor plague", "section": "Section::::Non-solid aluminum electrolytic capacitors.:Electrolyte composition.\n", "start_paragraph_id": 58, "start_character": 0, "end_paragraph_id": 58, "end_character": 359, "text": "It was known that water is a very good solvent for low ohmic electrolytes. 
However, the corrosion problems linked to water had, up to that time, hindered its use in amounts larger than 20% of the electrolyte, the water-driven corrosion using the above-mentioned electrolytes being kept under control with chemical inhibitors that stabilize the oxide layer.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1268562", "title": "Purified water", "section": "Section::::Uses.:Other uses.\n", "start_paragraph_id": 62, "start_character": 0, "end_paragraph_id": 62, "end_character": 370, "text": "Distilled water can be used in PC watercooling systems and laser marking systems. The lack of impurity in the water means that the system stays clean and prevents a buildup of bacteria and algae. Also, the low conductance reduces the risk of electrical damage in the event of a leak. However, deionized water has been known to cause cracks in brass and copper fittings.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1909939", "title": "Hot tub", "section": "Section::::Safety.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 274, "text": "It is also recommended to install residual-current devices for protection against electrocution. The greater danger associated with electrical shock in the water is that the person may be rendered immobile and unable to rescue themselves or to call for help and then drown.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1246810", "title": "Humidifier", "section": "Section::::Problems.\n", "start_paragraph_id": 45, "start_character": 0, "end_paragraph_id": 45, "end_character": 283, "text": "In addition, a stuck or malfunctioning water supply valve can deliver large amounts of water, causing extensive water damage if undetected for any period of time. A water alarm, possibly with an automatic water shutoff, can help prevent this malfunction from causing major problems.\n", "bleu_score": null, "meta": null } ] } ]
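The "pure water is still slightly conductive" point can be checked on the back of an envelope: self-ionization leaves about 1e-7 mol/L each of hydronium and hydroxide at 25 °C, and their limiting molar conductivities are textbook values. A minimal Python sketch:

```python
LAMBDA_H3O = 349.8    # limiting molar conductivity of H3O+, S*cm^2/mol (25 C)
LAMBDA_OH = 198.0     # limiting molar conductivity of OH-,  S*cm^2/mol (25 C)
C_IONS = 1e-7 / 1000  # self-ionization concentration, mol/cm^3 (1e-7 mol/L)

kappa = C_IONS * (LAMBDA_H3O + LAMBDA_OH)   # conductivity in S/cm
print(f"{kappa * 1e6:.3f} uS/cm")           # ~0.055 uS/cm

# For scale: tap water is typically tens to hundreds of uS/cm, so pure water
# conducts roughly three to four orders of magnitude less -- consistent with
# the answer's caveat that corrosion, not conduction, is the bigger hazard.
```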
null
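A quick numerical sketch of the hydronium point in the first answer above: water's self-ionization alone gives even perfectly pure water a small but nonzero conductivity, roughly 0.055 µS/cm at 25 °C. This is a back-of-the-envelope check, not part of the original answer; the molar conductivities are standard textbook values, and the tap-water figure is just an assumed typical value, since real tap water varies widely.

```python
# Rough conductivity of perfectly pure water from self-ionization alone.
# At 25 C, [H3O+] = [OH-] = 1e-7 mol/L.
LAMBDA_H3O = 349.8   # molar conductivity of H3O+, S*cm^2/mol (textbook value)
LAMBDA_OH = 198.6    # molar conductivity of OH-, S*cm^2/mol (textbook value)
c_ions = 1e-7 / 1000.0  # ion concentration converted to mol/cm^3

kappa_pure = c_ions * (LAMBDA_H3O + LAMBDA_OH)  # conductivity, S/cm
print(f"pure water:  {kappa_pure * 1e6:.3f} uS/cm")        # ~0.055 uS/cm
print(f"resistivity: {1 / kappa_pure / 1e6:.1f} Mohm*cm")  # ~18.2 Mohm*cm

kappa_tap = 500e-6  # S/cm; an assumed, fairly typical tap-water value
print(f"tap water is roughly {kappa_tap / kappa_pure:,.0f}x more conductive")
```

The ~18.2 MΩ·cm result matches the resistivity quoted for ultrapure lab water, which supports the answer's intuition: the self-ionization current is real but tiny compared to what the dissolved salts in tap water can carry.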
3ipsld
Why do wireless electronics only use 2.4 and 5ghz bands?
[ { "answer": "The relevant regulation can be found [in this Wikipedia page for ISM Band](_URL_0_). Your short range consumer electronics are designed to operate in the ISM band because it does not require a license.\n\nThe Wikipedia page for [frequency allocation](_URL_1_) will also give you an idea of what the other bands are used for.", "provenance": null }, { "answer": "2.4 GHz and 5GHz are more-or-less globally allocated for non-licensed use, so electronic devices can be sold and interoperate with devices from other countries.\n\nConsider that many other RF spectrums are not the same in the US, Europe, and Asia. First example that comes to mind is FM radio, which uses a lower band in Japan (starting at 76 MHz), channels on odd frequencies (106.1 MHz) in the US, and channels on even frequencies in Europe (94.2 MHz).", "provenance": null }, { "answer": "It's part of the radio frequencies the government has deemed unlicensed spectrum. If you were to make a device on other bands of spectrum the license owner would have legal recourse against you. Your cell phone uses licensed band for LTE and they vary depending on the carrier to prevent overlap. ", "provenance": null }, { "answer": "Licensing. \n\nYou can't just broadcast on any frequency you choose. FTC and other organizations sell, monitor, release consumer frequency, and enforce certain rules and regulations.\n\n2.4 ghz, 5.8 ghz (not 5) and some things like 900 mhz are unlicensed. Moreover that's not the whole story.\n\nYou are also limited to certain channals and also a certain power output, in some cases how you manipulate the frequency and specifications. For 2.4 ghz often we use 802.11.\n\nWhy do we do this? To make sure people aren't interfering with signals, preserve some frequencies for special purpose and companies and many other reasons.\n\nNew mesh network I'm working on uses 900 mhz, 2.4 and 5.8 and forms a network automatically.\n\nSource:rf tech.", "provenance": null }, { "answer": "The 2.4GHz band is overcrowded and noisey as many more WiFi devices operate in this spectrum (802.11 b, g and n). As more devices are capable of using \"5GHz\" spectrum this will become more crowded. Some older smartphones only have a 2.4GHz radio and cannot connect to 5GHz hotspots. ", "provenance": null }, { "answer": "There is a crucial bit here:\n2.4 Ghz (which seems to be the most crowded) is common because it's unregulated - you don't need an FCC license, which makes consumer sales possible/easier.\n\nBut it's unregulated because when regulations were, it was already noisy. A microwave oven runs at ~ 2.4 Ghz. So they didn't bother regulating it further, since it was already the realm of uncontrolled noise. If you've never tried it, connect to an A/B/G hotspot with your phone, stand near your microwave and cook something, and watch your signal.", "provenance": null }, { "answer": "Because literally every other frequency is jam-packed with other things. Around 3ghz is probably satellites and WiMax already. The reason we even have ISM bands is that RF heating is _a thing_ in the form of induction welders, microwave ovens, etc. and there has to be a spot to put those in. 
So WiFi piggybacked on a slice of \"junk\" spectrum that's earmarked for heaters and the like and was never intended to become communications spectrum.", "provenance": null }, { "answer": "In plain English - most of the other frequencies are being used or reserved by law for things like cell phones, TV, radio, commercial and government communications, scientific research, etc. 2.4 and 5 GHz just happen to be a couple of the few \"free to use\" unlicensed frequencies (in the US at least, although there is a lot of commonality with international bodies and standards). \n\nHere is a graphic that shows how all the frequencies are allocated, and how much radio crap is actually filling the air around us. This is why spectrum allocations are so valuable, and why companies like Verizon or AT&T pay billions when a piece comes up for auction, so they can add bandwidth to their mobile device networks: \n\n_URL_0_", "provenance": null }, { "answer": "Governments carve up the frequency spectrum for different purposes. \n\nTechnically you can transmit at any frequency, up until someone from the government sends you a BIG FINE and/or takes your illegal hardware!\n", "provenance": null }, { "answer": "All the (correct) regulatory answers aside, there's also physics - those frequencies are high enough that small, easily-concealed antennas can still transmit efficiently, but low enough that they don't suffer from extreme attenuation due to ordinary building materials. So they're a good compromise in terms of wavelength. ", "provenance": null }, { "answer": "Lots of folks have commented about the FCC and US guidelines; however, it's the ITU that sets the World Radio Regulations, found here:\n\n_URL_0_\n\nEvery 3-4 years members of every country in the world descend upon ITU HQ in Geneva and discuss changes to the Radio Regs. It is a consensus organization. Once the changes have been agreed upon, it is then up to specific administrations to implement those regs nationally. \n\nIn the US the FCC is responsible for commercial spectrum and the NTIA is responsible for government spectrum. In most countries they only have one agency for this, like OFCOM in the UK or ACMA in Australia. But this is 'merica so we need two.....\n\nAnyway, frequency bands below 6 GHz are HIGHLY sought after, as they have the best propagation characteristics in the atmosphere. The unlicensed spectrum at 2.4 and 5 GHz was a compromise; it was enough to placate (then) the unlicensed people. \n\n\nTL;DR: The Radio Regs are complicated and full of horse trades and compromises.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "15460472", "title": "Headset (audio)", "section": "Section::::Wireless.:2.4 GHz.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 600, "text": "Because DECT specifications are different between countries, developers who use the same product across different countries have launched wireless headsets which use 2.4GHz RF as opposed to the 1.89 or 1.9 GHz in DECT. Almost all countries in the world have the 2.4 GHz band open for wireless communications, so headsets using this RF band are sellable in most markets. However, the 2.4 GHz frequency is also the base frequency for many kinds of wireless data transmission, i.e. 
Wireless LAN, Wi-Fi, Bluetooth..., the bandwidth may be quite crowded, so using this technology may be more prone to interference.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "300602", "title": "Internet access", "section": "Section::::Technologies.:Wireless broadband access.:Wireless ISP.\n", "start_paragraph_id": 92, "start_character": 0, "end_paragraph_id": 92, "end_character": 403, "text": "With the increasing popularity of unrelated consumer devices operating on the same 2.4 GHz band, many providers have migrated to the 5GHz ISM band. If the service provider holds the necessary spectrum license, it could also reconfigure various brands of off the shelf Wi-Fi hardware to operate on its own band instead of the crowded unlicensed ones. Using higher frequencies carries various advantages:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33143", "title": "Wireless LAN", "section": "Section::::History.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 524, "text": "In 2009 802.11n was added to 802.11. It operates in both the 2.4 GHz and 5 GHz bands at a maximum data transfer rate of 600 Mbit/s. Most newer routers are able to utilise both wireless bands, known as dualband. This allows data communications to avoid the crowded 2.4 GHz band, which is also shared with Bluetooth devices and microwave ovens. The 5 GHz band is also wider than the 2.4 GHz band, with more channels, which permits a greater number of devices to share the space. Not all channels are available in all regions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9228804", "title": "Wi-Fi Protected Setup", "section": "Section::::Band or radio selection.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 896, "text": "Some devices with dual-band wireless network connectivity do not allow the user to select the 2.4 GHz or 5 GHz band (or even a particular radio or SSID) when using Wi-Fi Protected Setup, unless the wireless access point has separate WPS button for each band or radio; however, a number of later wireless routers with multiple frequency bands and/or radios allow the establishment of a WPS session for a specific band and/or radio for connection with clients which cannot have the SSID or band (e.g., 2.4/5 GHz) explicitly selected by the user on the client for connection with WPS (e.g. pushing the 5 GHz, where supported, WPS button on the wireless router will force a client device to connect via WPS on only the 5 GHz band after a WPS session has been established by the client device which cannot explicitly allow the selection of wireless network and/or band for the WPS connection method).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8763689", "title": "Long-range Wi-Fi", "section": "Section::::Obstacles to long-range Wi-Fi.:2.4 GHz interference.\n", "start_paragraph_id": 58, "start_character": 0, "end_paragraph_id": 58, "end_character": 562, "text": "Due to the intended nature of the 2.4 GHz band, there are many users of this band, with potentially dozens of devices per household. By its very nature, \"long range\" connotes an antenna system which can see many of these devices, which when added together produce a very high noise floor, whereby no single signal is usable, but nonetheless are still received. 
The aim of a long-range system is to produce a system which over-powers these signals and/or uses directional antennas to prevent the receiver \"seeing\" these devices, thereby reducing the noise floor.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10680827", "title": "ANT (network)", "section": "Section::::Interference immunity.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 223, "text": "ANT, ZigBee, Bluetooth, Wi-Fi, and some cordless phones all use the 2.4 GHz band (as well as 868- and 915 MHz for regional variants in the latter's case), along with proprietary forms of wireless Ethernet and wireless USB.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12656607", "title": "Chemlink", "section": "Section::::Background.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 537, "text": "Wireless solutions make use of the 5.8 GHz range to avoid interference from the increasingly crowded 2.4GHz radio band, which is widely used by WLAN 802.11b/g, Bluetooth devices, Cordless phones and Microwave ovens. Therefore, 5.8 GHz solutions are getting more and more public to use in home video transmission, especially in North America and Australia. In the security and surveillance markets, especially for long range video transmissions, more people are starting to use 5.8 GHz frequency for cleaner bandwidth for better outcome.\n", "bleu_score": null, "meta": null } ] } ]
null
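To put numbers on the physics answer above (small antennas vs. attenuation), here is a minimal sketch. It computes the length of a resonant half-wave dipole and the free-space path loss over 30 m at a few frequencies; the chosen frequencies and the 30 m distance are just assumed illustrative values, not anything from the original answers.

```python
import math

C = 299_792_458  # speed of light in vacuum, m/s

def half_wave_dipole_m(freq_hz: float) -> float:
    """Length of a resonant half-wave dipole antenna: lambda/2 = c/(2f)."""
    return C / (2 * freq_hz)

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

for label, f in [("100 MHz (FM radio)", 100e6), ("915 MHz ISM", 915e6),
                 ("2.45 GHz ISM", 2.45e9), ("5.8 GHz ISM", 5.8e9)]:
    print(f"{label:18s} dipole ~{half_wave_dipole_m(f) * 100:6.1f} cm, "
          f"loss over 30 m ~{fspl_db(30, f):5.1f} dB")
```

The trend is exactly the tradeoff that answer describes: going up in frequency shrinks the antenna (about 150 cm at FM frequencies vs. ~6 cm at 2.45 GHz) but raises the path loss, and real-world attenuation through walls grows on top of the free-space figure. The 2.4 and 5 GHz bands sit in the usable middle.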
40sdzi
When was the last time a president was elected who was "filled in" during local ballots?
[ { "answer": "Just to clarify, as I think I understand what you're asking about, but I want to be sure, you're talking about a 'straight ticket' ballot [such as this one](_URL_0_)?", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "55944120", "title": "1852 United States presidential election in Arkansas", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 263, "text": "The 1852 United States presidential election in Arkansas took place on November 2, 1852, as part of the 1852 United States presidential election. Voters chose four representatives, or electors to the Electoral College, who voted for president and vice president.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53951874", "title": "1856 United States presidential election in Rhode Island", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 267, "text": "The 1856 United States presidential election in Rhode Island took place on November 4, 1856, as part of the 1856 United States presidential election. Voters chose four representatives, or electors to the Electoral College, who voted for president and vice president.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "54461857", "title": "1852 United States presidential election in New York", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 261, "text": "The 1852 United States presidential election in New York took place on November 2, 1852, as part of the 1852 United States presidential election. Voters chose 35 representatives, or electors to the Electoral College, who voted for President and Vice President.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53951635", "title": "1856 United States presidential election in Connecticut", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 265, "text": "The 1856 United States presidential election in Connecticut took place on November 4, 1856, as part of the 1856 United States presidential election. Voters chose six representatives, or electors to the Electoral College, who voted for president and vice president.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1803490", "title": "1914 United States Senate elections", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 310, "text": "The United States Senate elections of 1914, with the ratification of the 17th Amendment in 1913, were the first time that all seats up for election were popularly elected instead of chosen by their state legislatures. These elections occurred in the middle of Democratic President Woodrow Wilson's first term.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53939129", "title": "1852 United States presidential election in Rhode Island", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 267, "text": "The 1852 United States presidential election in Rhode Island took place on November 2, 1852, as part of the 1852 United States presidential election. 
Voters chose four representatives, or electors to the Electoral College, who voted for President and Vice President.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22612619", "title": "1946 California's 12th congressional district election", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 554, "text": "An election for a seat in the United States House of Representatives took place in California's 12th congressional district on November 5, 1946, the date set by law for the elections for the 80th United States Congress. In the 12th district election, the candidates were five-term incumbent Democrat Jerry Voorhis, Republican challenger Richard Nixon, and former congressman and Prohibition Party candidate John Hoeppel. Nixon was elected with 56% of the vote, starting him on the road that would, almost a quarter century later, lead to the presidency.\n", "bleu_score": null, "meta": null } ] } ]
null
cotmzh
does the music you listen to in your childhood affect your future personality?
[ { "answer": "I truly think it has a great affect. I grew up listening to a lot of 60s and 70s rock, I still listen to it and it's shaped a lot of who I am and how I see the world. I love the lyrics, sound, expressionism and aesthetic. And I don't think I couldve gotten through the dark periods of my life without the wisdom those songs installed in me throughout my whole childhood. I sometimes feel as though I'm from that eras but reincarnated lol. Long live the hippie movement ✌️", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "33106889", "title": "Culture in music cognition", "section": "Section::::Memory.:Effect of culture.:Development.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 365, "text": "Enculturation affects music memory in early childhood before a child's cognitive schemata for music is fully formed, perhaps beginning at as early as one year of age. Like adults, children are also better able to remember novel music from their native culture than from unfamiliar ones, although they are less capable than adults at remembering more complex music.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36161211", "title": "Exploitation of women in mass media", "section": "Section::::Criticisms of the media.:Music.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 942, "text": "Music is a key factor in the socialization of children. Children and adolescents often turn to music lyrics as an outlet away from loneliness or as a source of advice and information. The results of a study through \"A Kaiser Family Foundation Study\" in 2005 showed that 85% of youth ages 8–18 listen to music each day. While music is commonly thought of as only a means of entertainment, studies have found that music is often chosen by youth because it mirrors their own feelings and the content of the lyrics is important to them. Numerous studies have been conducted to research how music influences listeners behaviors and beliefs. For example, a study featured in the \"Journal of Youth and Adolescence\" found that when compared to adolescent males who did not like heavy metal music, those who liked heavy metal had a higher occurrence of deviant behaviors. These behaviors included sexual misconduct, substance abuse and family issues.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33107185", "title": "Music and emotion", "section": "Section::::Conveying emotion through music.:Specific listener features.:Development.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 588, "text": "Studies indicate that the ability to understand emotional messages in music starts early, and improves throughout child development. Studies investigating music and emotion in children primarily play a musical excerpt for children and have them look at pictorial expressions of faces. These facial expressions display different emotions and children are asked to select the face that best matches the music's emotional tone. 
Studies have shown that children are able to assign specific emotions to pieces of music; however, there is debate regarding the age at which this ability begins.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5405045", "title": "Music education for young children", "section": "Section::::World music.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 323, "text": "Studies done on children who have had a musical background, have shown that it increases brain function as well as brain stimulation. When children are exposed to music from other countries and cultures, they are able to learn about the instrument while at the same time being educated about a different part of the world.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35587000", "title": "Developmentally Appropriate Musical Practice", "section": "Section::::Types of Developmentally Appropriate Musical Practice (DAMP).:Identifying sounds.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 704, "text": "Children’s musical interest may vary from exploring a specific instrument to listening to a type of musical literature that the child finds interesting because of his or her cultural background. In other words, early childhood musical interest lies with the involvement that the child is actively engaged in the learning milieu. Morin’s article suggests that in order for students to develop a personal interest during the exploration of music, they need opportunities and experiences that have been aligned by the educator as developmentally appropriate. In essence, Morin communicates about the importance of giving young children ample opportunities to explore, manipulate, and play in the classroom.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37614966", "title": "Music therapy for Alzheimer's disease", "section": "Section::::Power of music.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 596, "text": "Music influences many regions of the brain including those associated with emotional and creative areas has the power to evoke emotion and memories from deep in the past, so it is understandable that Alzheimer's patients can recall musical memories from many decades prior given the richness and vividness of these memories. Music memory can be preserved for those living with Alzheimer's Disease and brought forth through various techniques of music therapy. Areas of the brain influenced by music are one of the final parts of the brain to degenerate in the progression of Alzheimer's Disease.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33107185", "title": "Music and emotion", "section": "Section::::Conveying emotion through music.:Structural features.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 706, "text": "Music also affects socially-relevant memories, specifically memories produced by nostalgic musical excerpts (e.g., music from a significant time period in one’s life, like music listened to on road trips). Musical structures are more strongly interpreted in certain areas of the brain when the music evokes nostalgia. The interior frontal gyrus, substantia nigra, cerebellum, and insula were all identified to have a stronger correlation with nostalgic music than not. 
Brain activity is a very individualized concept with many of the musical excerpts having certain effects based on individuals’ past life experiences, thus this caveat should be kept in mind when generalizing findings across individuals.\n", "bleu_score": null, "meta": null } ] } ]
null
5r1uc6
Did the people actually believe that World War I would be over by Christmas when it started or is that a common myth originated after the war?
[ { "answer": "Not my area of particular expertise, but the answer to this is, perhaps annoyingly: \"yes and no.\"\n\nTo my reading the \"no\" camp-- that is, those who thought that a great power conflict in Europe would be a long, protracted conflict-- was relatively small but included some incredibly influential people involved in war strategy and planning on all sides including Kitchener, Haig, Falkenhayn and Joffre.\n\nThose who thought the war would be over quickly, I think, have actually been widely misunderstood. \"Over by Christmas\" was not some pie-in-the-sky, chauvinistic belief in victory for one's own side; it was the necessary outcome given the strategies and assumptions employed on all sides about the consequences and nature of the coming conflict.\n\nThe German strategic necessity for a short war is perhaps the best illustration of this. The logic of the German Schlieffen plan was that the war *had* to be over quickly. That Germany *had* to knock out France and then pivot to knock out Russia because a protracted two-front conflict implied a German defeat once its much larger neighbors were able to reach full mobilization. Adding Britain to the side of the entente made that logic even clearer. A British naval blockade of of Germany would not be sustainable.\n\nThe assumptions of British war planning was a precise mirror image of that German concern. British war planners believed that a kind of \"economic warfare\" would devastate Germany. That lack of access not only to global shipping but to global *capital* out of the city of London would rapidly leave Germany impotent.\n\nThinking from a more global perspective there were also influential thinkers who thought that the devastation involved in a great power conflict would be so great as to not be possible to continue for more than a few months. British Admiral Beatty wrote: \n\n > There is not sufficient money in the world to provide such a gigantic struggle to be continued for any great length of time.\n\nHe thought the war would be over by winter.\n\nJan Bloch's *The War of the Future in its Technical, Economic and Political Relations*, the abridged English translation of which was titled *Is War Now Impossible?* argued among other things that a great power war would bring about a kind of financial and economic apocalypse, and therefore couldn't be sustained. \n\n(Mind you, he wasn't entirely wrong on that front. Britain had to back away from aspects of economic warfare plans when it became clear that it would be economic suicide. The outbreak of war nearly destroyed global financial markets and easily ranks alongside 1929 as one the great financial crises of the 20th century.) \n\nIn that sense many British thought that the war would be short, but knew that the longer it went on the more assured of victory they would be.\n\nPolitically, as Hew Strachan writes in his chapter on \"The Short War Illusion\":\n\n > Both armies feared that general mobilization would give rise to strikes and demonstrations. They expected domestic disaffection to deepen rather than dissipate. After all, in the 1880s Engels had anticipated with relish the possibility of a war lasting three to four years precisely because it could create the conditions for the victory of the working class.\"\n\nLogistically it was not thought that basic supplies could be long enough maintained. Strachan again:\n\n > [The German general staff's] focus was less on the raw-material needs of the war industries than on the maintenance of food supplies. 
The initial involvement of the general staff in the issue of economic mobilization was motivated by the need to feed the army. Of related concern were the problem of liquidity (as cash was needed to buy food, fodder, and horses), and the interruption to civilian transport.... The possibility of domestic opposition to, and disruption of, its plans for war fed on the fear of socialism. \n\nI'm less familiar with the popular understanding of a short war at the soldier's level, but Strachan writes that it was pervasive, and actually continued throughout the war, such that the end to the war was perpetually thought to be only months away even years into the conflict, and that this attitude was nearly universal. This was not the result of the kind of war enthusiasm or innocence that was then supplanted by disillusionment, however, so much as popular inability to conceive of the long war.\n\nI think you'll agree that it's easy to see how many of these arguments might have seemed compelling, so I think it's also worth examining why they were wrong. As I already mentioned, one reason was that actually going ahead with total economic war nearly proved to be economic suicide for Britain and could not be carried out. German access to capital was reduced, but still possible via banking houses in neutral countries, which thrived during the war. Likewise, British shipping of goods to neutral countries, which then ultimately ended up in Germany, was extensive.\n\nVirtually all of the \"knockout\" war plans or battles on all sides failed, including the Schlieffen plan, the British attempts to force the Straits at Constantinople, the confrontation between the British navy and the German High Seas Fleet, and so on. In the case of the Ottoman campaigns, with which I'm most familiar, the British simply consistently underestimated Ottoman capabilities and overestimated their own.\n\nMichael Neiberg also makes what I think is a [great point](_URL_0_), which is that because virtually all sides viewed the conflict as defensive, there really were no clearly articulated strategic goals in the conflict. In Germany the scale of the conflict meant that in order to compensate for its losses, its war aims had [ballooned to such proportions that only a total defeat of the enemy could possibly accomplish them](_URL_1_) and made compromise impossible. It also explains why virtually every socialist party in Europe actually backed the war, thus delaying or averting the socio-political socialist nightmare that conservative war planners so feared.\n\nSource-wise:\n\nHew Strachan's *The First World War*, specifically as I said the section on the short war illusion.\n\nNicholas Lambert's *Planning Armageddon*, and his lecture [\"The Short War Assumption\".](_URL_2_)\n\nThat Michael Neiberg lecture that I linked to is excellent, and that entire YouTube channel, the WWI Museum and Memorial, is great.\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1318008", "title": "Causes of World War I", "section": "Section::::Technical and military factors.:Short war illusion.\n", "start_paragraph_id": 214, "start_character": 0, "end_paragraph_id": 214, "end_character": 645, "text": "Traditional narratives of the war suggested that when the war began, both sides believed that the war would end quickly. Rhetorically speaking there was an expectation that the war would be \"over by Christmas\" 1914. 
This is important for the origins of the conflict since it suggests that, because the war was expected to be short, the statesmen did not tend to take the gravity of military action as seriously as they might have done. Modern historians suggest a nuanced approach. There is ample evidence to suggest that statesmen and military leaders thought the war would be lengthy and terrible and have profound political consequences.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22165111", "title": "Association football during World War I", "section": "Section::::Christmas truce.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 368, "text": "The Christmas truce was a series of brief unofficial cessations of hostilities occurring on Christmas Eve or Christmas Day of 1914 between German and British or French troops in World War I, particularly that between British and German troops stationed along the Western Front. During the truce, a game of football was played between the British and German soldiers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20763116", "title": "Christmas in the American Civil War", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 603, "text": "Christmas in the American Civil War (1861–1865) was celebrated in both the United States and the Confederate States of America although the day did not become an official holiday until five years after the war ended. The war continued to rage on Christmas and skirmishes occurred throughout the countryside. Celebrations for both troops and civilians saw significant alteration. Propagandists, such as Thomas Nast, used wartime Christmases to reflect their beliefs. In 1870, Christmas became an official Federal holiday when President Ulysses S. Grant made it so in an attempt to unite north and south.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53687987", "title": "Diplomatic history of World War I", "section": "Section::::War aims.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 701, "text": "Years later a false myth grew up that the crowds in all the belligerent nations had cheered and welcomed the war. That was not true – everywhere there was a deep sense of foreboding. In wartime Britain, and in the neutral United States, accurate reports of German atrocities (killing thousands of civilians, rounding up hostages, and destroying historic buildings and libraries) caused a change of heart in an antiwar population. For example, suffragists took up the cause of the war, as did intellectuals. Very few expected a short happy war – the slogan \"over by Christmas\" was coined three years after the war began. Historians find that, \"The evidence for mass enthusiasm at the time is surprisingly weak.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9571528", "title": "Snoopy's Christmas", "section": "Section::::Overview.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 1005, "text": "Although fictitious, the song is set against the backdrop of a legitimate historical event. During World War I, in 1914, \"The Christmas Truce\" was initiated not by German and British commanders, but by the soldiers themselves. The length of the cease-fire varied by location, and was reported to have been as brief as Christmas Day or as long as the week between Christmas and New Year's Day. 
Trench-bound combatants exchanged small gifts across the lines, with Germans giving beer to the British, who sent tobacco and tinned meat back in return. No Man's Land was cleared of dead bodies, trenches were repaired and drained, and troops from both sides shared pictures of their families and, in some places, used No Man's Land for friendly games of football. The song even has the initiator correct as it was generally the German soldiers who called over to the British and initiated the truce and, in the song, it is the Red Baron—a German WWI hero—who extends the hand of Christmas friendship to Snoopy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "592591", "title": "Ceasefire", "section": "Section::::Historical examples.:World War I.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 904, "text": "During World War I, on December 24, 1914, there was an unofficial ceasefire on the Western Front as France, the United Kingdom, and Germany observed Christmas. There are accounts that claimed the unofficial ceasefire took place through the week leading to Christmas and British and German troops exchanged seasonal greetings and songs between their trenches. It was brief but spontaneous, beginning when German soldiers lit Christmas trees, and it quickly spread up and down the Western Front. One account described this development in the following words:It was good to see the human spirit prevailed amongst all sides at the front, the sharing and fraternity. All was well until the higher echelons of command got to hear about the effect of the ceasefire, whereby their wrath ensured a return to hostilities.There was no treaty signed during this Christmas truce and the war resumed after a few days.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "840340", "title": "Richard Schirrmann", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 439, "text": "\"When the Christmas bells sounded in the villages of the Vosges behind the lines... something fantastically unmilitary occurred. German and French troops spontaneously made peace and ceased hostilities; they visited each other through disused trench tunnels, and exchanged wine, cognac, and cigarettes for Westphalian black bread, biscuits, and ham. This suited them so well that they remained good friends even after Christmas was over.\"\n", "bleu_score": null, "meta": null } ] } ]
null
2lup7k
why is it impossible to fold a piece of paper in half more than eight times?
[ { "answer": "For a regular A4 at eight folds its 256 layers thick and since the paper is so small at that point the amount of space needed to make a fold in each of the 256 layers there isn't enough room. \n\nHowever if you just get a bigger paper then you can, even though you are folding it by half each time. \n_URL_0_", "provenance": null }, { "answer": "Not true\n\n_URL_0_\n\nMight have been beaten since.\n\n", "provenance": null }, { "answer": "Additionally to the area problem with A4 you get different sizes of areas per layer.... \nif the innermost layer folds onto itself basically all the paper is a flat sheet with a very tight bend where the fold is. Now the outermost layer The bit on top and below the whole stack of 256 layers are separated by 256*thickness of paper. Which even if the thickness is only 0.1 mm will amount to about 1 inch... so you need more paper for the outer layer than the inner layer.. 8 fold is probably a value of experience where this problem makes the paper unfoldable ... also eventually the outer layers will rip... ", "provenance": null }, { "answer": "Something becomes unfoldable when its width becomes comparable to its thickness. It has to be long enough to span the circumference of the fold. A macaroni noodle is a decent example of something barely foldable. Paper is easily folded, because it is typically much longer than it is thick. If you fold the paper many times, though, its thickness grows exponentially, and its width decreases exponentially. The two becomes comparable very quickly. It happens to work out that 8 folds is the limit for a standard sized piece of a paper.", "provenance": null }, { "answer": "When you fold something you turn a flat piece of paper into two flat pieces of paper, and a semicircle section where the fold is. The length of this semi circle is about 3.14 times the length of the thickness of the paper before the fold. (1/2 circumference = pi * radius)\n\nOthers have said that each fold doubles the thickness of the paper, so after a while, and it also halves the size of the paper, after a certain number of folds, there just is not enough paper to even do the semi circle, so it is literally impossible to fold the paper further.\n\nHow many folds depends on the thickness of the paper and the original size of the paper. ", "provenance": null }, { "answer": "Fun fact: if you fold a piece of paper in half 103 times it would stretch across the observable universe from end to end. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "232840", "title": "Mathematics of paper folding", "section": "Section::::Related problems.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 724, "text": "The maximum number of times an incompressible material can be folded has been derived. With each fold a certain amount of paper is lost to potential folding. The loss function for folding paper in half in a single direction was given to be formula_4, where \"L\" is the minimum length of the paper (or other material), \"t\" is the material's thickness, and \"n\" is the number of folds possible. The distances \"L\" and \"t\" must be expressed in the same units, such as inches. This result was derived by Gallivan in 2001, who also folded a sheet of paper in half 12 times, contrary to the popular belief that paper of any size could be folded at most eight times. 
She also derived the equation for folding in alternate directions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3072632", "title": "Britney Gallivan", "section": "Section::::Biography.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 782, "text": "In January 2002, while a junior in high school, Gallivan demonstrated that a single piece of toilet paper 4000 ft (1200 m) in length can be folded in half twelve times. This was contrary to the popular conception that the maximum number of times any piece of paper could be folded in half was seven. She calculated that, instead of folding in half every other direction, the least volume of paper to get 12 folds would be to fold in the same direction, using a very long sheet of paper. A special kind of $85-per-roll toilet paper in a set of six met her length requirement. Not only did she provide the empirical proof, but she also derived an equation that yielded the width of paper or length of paper necessary to fold a piece of paper of thickness \"t\" any \"n\" number of times.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "286327", "title": "Gutenberg Bible", "section": "Section::::The production process: \"Das Werk der Bücher\".:Pages.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 926, "text": "The paper size is 'double folio', with two pages printed on each side (four pages per sheet). After printing the paper was folded once to the size of a single page. Typically, five of these folded sheets (10 leaves, or 20 printed pages) were combined to a single physical section, called a quinternion, that could then be bound into a book. Some sections, however, had as few as four leaves or as many as 12 leaves. Some sections may have been printed in a larger number, especially those printed later in the publishing process, and sold unbound. The pages were not numbered. The technique was not new, since it had been used to make blank \"white-paper\" books to be written afterwards. What was new was determining \"beforehand\" the correct placement and orientation of each page on the five sheets to result in the correct sequence when bound. The technique for locating the printed area correctly on each page was also new.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11278795", "title": "Origami paper", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 382, "text": "Origami paper is used to fold \"origami\", the art of paper folding. The only real requirement of the folding medium is that it must be able to hold a crease, but should ideally also be thinner than regular paper for convenience when multiple folds over the same small paper area are required (e.g. such as would be the case if creating an origami bird's \"legs\", \"feet\", and \"beak\").\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "341682", "title": "Square root of 2", "section": "Section::::Paper size.\n", "start_paragraph_id": 125, "start_character": 0, "end_paragraph_id": 125, "end_character": 696, "text": "In 1786, German physics professor Georg Lichtenberg found that any sheet of paper whose long edge is √2 times longer than its short edge could be folded in half and aligned with its shorter side to produce a sheet with exactly the same proportions as the original. 
This ratio of lengths of the longer over the shorter side guarantees that cutting a sheet in half along a line results in the smaller sheets having the same (approximate) ratio as the original sheet. When Germany standardised paper sizes at the beginning of the 20th century, they used Lichtenberg's ratio to create the \"A\" series of paper sizes. Today, the (approximate) aspect ratio of paper sizes under ISO 216 (A4, A0, etc.) is 1:√2. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19987041", "title": "Folded leaflet", "section": "Section::::Parallel fold.:Double parallel fold.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 219, "text": "In double parallel folds the paper is folded in half and then folded in half again with a fold parallel to the first fold. To allow for proper nesting the two inside folded panels are smaller than the two outer panels.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26185707", "title": "Map folding", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 418, "text": "In the mathematics of paper folding, map folding and stamp folding are two problems of counting the number of ways that a piece of paper can be folded. In the stamp folding problem, the paper is a strip of stamps with creases between them, and the folds must lie on the creases. In the map folding problem, the paper is a map, divided by creases into rectangles, and the folds must again lie only along these creases.\n", "bleu_score": null, "meta": null } ] } ]
null
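The formula_4 placeholder in the Wikipedia excerpt above stands for Gallivan's single-direction loss function, usually written L = (πt/6)(2ⁿ + 4)(2ⁿ − 1). A minimal sketch of what it implies, assuming ordinary 0.1 mm paper (the thickness is an assumed value; office paper is roughly that):

```python
import math

def min_length_m(t_m: float, n: int) -> float:
    """Gallivan's single-direction loss function:
    L = (pi * t / 6) * (2**n + 4) * (2**n - 1),
    the minimum strip length L needed to fold thickness t in half n times."""
    return (math.pi * t_m / 6) * (2**n + 4) * (2**n - 1)

t = 1e-4  # assumed paper thickness: 0.1 mm, in metres
for n in range(7, 13):
    print(f"{n:2d} folds needs a strip at least {min_length_m(t, n):9.1f} m long")

# Sanity check on the "103 folds" fun fact: the thickness doubles every fold.
print(f"after 103 folds: {t * 2**103:.2e} m thick "
      f"(the observable universe is ~8.8e26 m across)")
```

With 0.1 mm paper the 8th fold already demands a strip over 3 m long, and the 12th fold close to a kilometre, which is in the same ballpark as Gallivan's 1200 m roll (toilet paper is thinner, and she had some margin). The last line also roughly checks the sixth answer: 2¹⁰³ × 0.1 mm is about 10²⁷ m, on the order of the observable universe's diameter.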
1zobb1
if lockpicking guides and tools are available widely, why are so few houses lockpicked into?
[ { "answer": "Its far easier and more efficient to break a window or kick in a door.", "provenance": null }, { "answer": "Houselocks are too hard for most amateurs. Plus lockpicks are considered burglary tools, which means they are illegal to carry on your person without a locksmithing licence. Additionally, some people have morals.", "provenance": null }, { "answer": "It's far more valuable to get in and out quickly, than it is to do so quietly.", "provenance": null }, { "answer": "Speed is the name of the game. [Lock bumping](_URL_0_) (where possible) is far quicker and less noticeable to outsiders than picking a lock.\n\nAlso, [about 30 percent of all burglaries are through an open or unlocked window or door](_URL_1_), so picking a lock isn't even necessary many times.", "provenance": null }, { "answer": "As someone who picks as a hobby (Yea, [toool](_URL_0_)!) The answer really is most of the people who pick have no desire to break into a house and those that do, would rather smash and grab than take the time to pick. Picking - it's fun to do while passing time and as a hobby, but even the simplest of locks can pose problems at times and there is no guarantee that you will pick it.\n\nAs someone that can pick, if for some reason I was going to break into a neighbors house - I'd pick popping a window with a crowbar or screwdriver. Window locks are even more vulnerable than door locks and a lot of people forget to lock them. \n\nPeople also don't maintain their locks - I pick with pretty much pristine store bought locks to play around with, but in the wild there are going to be all kinds of various gunk and junk built up in and on the locks that will make them harder. \n\nThat being said - in the US, probably 95%+ of the locks on homes are qwikset or dexter - even someone who has never picked before can probably rake open one of those in less than 5 minutes. \n", "provenance": null }, { "answer": "Firefighter here and I consider myself well versed in forcible entry as well as finesse-ible entry. \n\nIf we get called to a house for a house lockout I will size up the house and look for a weakness. You would be surprised how many people have open windows. Pop a screen and your in. If I can't find anything for a commercial structure we can do some through the lock and with minimal damage gain access to a locked building. \n\nThis is if we have time. If your house is on fire, we are breaking in quick. \n\nI think a burglar would take the same approach if he can quickly find a weakness he would take it. But walking around a house. Or bending down at a door lock for an extended period of would look suspicious. I think a thief would opt for the quick in and out kick of the door or quick pry with a crow bar. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "6685241", "title": "Lock bumping", "section": "Section::::Legal Issues.:United States.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 287, "text": "Posession of lockpicking tools, such as a bump key, are highly regulated by criminal law in four states in the United States, and are considered prima facie evidence of a crime in another four states in the United States. 
They are generally legal in the remaining states within the U.S.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "142763", "title": "Lock picking", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 214, "text": "In some countries, such as Japan, lock-picking tools are illegal for most people to possess, but in many others, they are available and legal to own as long as there is no intent to use them for criminal purposes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "525465", "title": "Lock and key", "section": "Section::::Locksmithing.\n", "start_paragraph_id": 56, "start_character": 0, "end_paragraph_id": 56, "end_character": 548, "text": "Historically, locksmiths constructed or repaired an entire lock, including its constituent parts. The rise of cheap mass production has made this less common; the vast majority of locks are repaired through like-for-like replacements, high-security safes and strongboxes being the most common exception. Many locksmiths also work on any existing door hardware, including door closers, hinges, electric strikes, and frame repairs, or service electronic locks by making keys for transponder-equipped vehicles and implementing access control systems.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "374977", "title": "Disc tumbler lock", "section": "Section::::Design.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 298, "text": "The mechanism makes it easy to construct locks that can be opened with multiple different keys: \"blank\" discs with a circular hole are used, and only notches shared by the keys are employed in the lock mechanism. This is commonly used for locks of common areas such as garages in apartment houses.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18375", "title": "Locksmithing", "section": "Section::::\"Full disclosure\".\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 507, "text": "Rogues knew a good deal about lock-picking long before locksmiths discussed it among themselves, as they have lately done. If a lock, let it have been made in whatever country, or by whatever maker, is not so inviolable as it has hitherto been deemed to be, surely it is to the interest of honest persons to know this fact, because the dishonest are tolerably certain to apply the knowledge practically; and the spread of the knowledge is necessary to give fair play to those who might suffer by ignorance.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17928147", "title": "Rim lock", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 293, "text": "The oldest type of lock used in the United Kingdom and Ireland. It is of a basic design using (usually) a single lever and a sliding bolt. Wards can be used for additional security. They are not used where high security is required. Most older locks were large, some as big as 40 cm by 25 cm.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "377688", "title": "Tubular pin tumbler lock", "section": "Section::::Uses.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 395, "text": "Tubular pin tumbler locks are generally considered to be safer and more resistant to picking than standard locks. This is primarily because they are often seen on coin boxes for vending machines and coin-operated machines, such as those used in a laundromat. 
However, the primary reason this type of lock is used in these applications is that it can be made physically shorter than other locks.\n", "bleu_score": null, "meta": null } ] } ]
null
21bbh3
Does light accelerate to the speed of light, or is it instantly the speed of light as soon as it is released from an electron?
[ { "answer": "They don't start off at zero, and there's no acceleration. They start off at c and always travel at c. This is because, due to special relativity, any massless particle can only ever move at c, any other speed isn't allowed physically. Source: [adamsolomon](_URL_0_)", "provenance": null }, { "answer": "/u/SlimfishJim's answer is great, but I'd like to approach it from a different angle.\n\nSuppose you have a stretched out slinky. Maybe the slinky is nailed between two walls. You then make a triangular bend in the slinky near one end so that it looks like this:\n\n--^-----------------\n\nAnd you let go. You snap three photos, and they look like this:\n\nPhoto 1: --^ -----------------\n\nPhoto 2: -------^ ------------\n\nPhoto 3: ------------^ -------\n\nDid that wave take time to accelerate up to speed in moving from left to right? No! The wave isn't accelerating in the x direction at all! The individual piece of mass that make up the slinky are accelerating up and downward, like [this GIF](_URL_0_) (look at the red dot on one wave crest). There's no x-acceleration, only exciting parts of the metal further and further along the slinky, which looks to us like a wave moving left to right. How fast will this wave appear to move along the slinky? It depends on how heavy the slinky is, how tightly it's stretched, etc. The fancy word for this kind of thing is the \"bulk modulus.\"\n\nLight is similar to this situation. I have an elecromagnetic field (the analog to our slinky). I excite this field (maybe I wave an electron around). The excitation of the electric field causes an excitation of the magnetic field. This excitation of the magnetic field causes an excitation of the electric field. (Faraday's law at work.) Rinse and repeat. This repeated excitation of the electromagnetic field looks to us like a wave moving through space, just like the slinky wave 'moves' through space. And how fast does it do this? Well, like the slinky, it depends sort of on the \"bulk modulus\" of light. We call those things the permitivity and permeability constants.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "9476", "title": "Electron", "section": "Section::::Characteristics.:Motion and energy.\n", "start_paragraph_id": 79, "start_character": 0, "end_paragraph_id": 79, "end_character": 732, "text": "According to Einstein's theory of special relativity, as an electron's speed approaches the speed of light, from an observer's point of view its relativistic mass increases, thereby making it more and more difficult to accelerate it from within the observer's frame of reference. The speed of an electron can approach, but never reach, the speed of light in a vacuum, \"c\". However, when relativistic electrons—that is, electrons moving at a speed close to \"c\"—are injected into a dielectric medium such as water, where the local speed of light is significantly less than \"c\", the electrons temporarily travel faster than light in the medium. As they interact with the medium, they generate a faint light called Cherenkov radiation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43553104", "title": "J. G. Fox", "section": "Section::::Special relativity and the extinction theorem.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 1246, "text": "The second postulate of Einstein's theory of special relativity states that the speed of light is invariant, regardless of the velocity of the source from which the light emanates. 
The extinction theorem (essentially) states that light passing through a transparent medium is simultaneously extinguished and re-emitted by the medium itself. This implies that information about the velocity of light from a moving source might be lost if the light passes through enough intervening transparent material before being measured. All measurements previous to the 1960s intending to verify the constancy of the speed of light from moving sources (primarily using moving mirrors, or extraterrestrial sources) were made only after the light had passed through such stationary material — that material being that of a glass lens, the terrestrial atmosphere, or even the incomplete vacuum of deep space. In 1961, Fox decided that there might not yet be any conclusive evidence for the second postulate: \"This is a surprising situation in which to find ourselves half a century after the inception of special relativity.\" Regardless, he remained fully confident in special relativity, noting that this created only a \"small gap\" in the experimental record.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "277702", "title": "Electron diffraction", "section": "Section::::Theory.:Wavelength of electrons.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 579, "text": "However, in an electron microscope, the accelerating potential is usually several thousand volts, causing the electron to travel at an appreciable fraction of the speed of light. A SEM may typically operate at an accelerating potential of 10,000 volts (10 kV) giving an electron velocity approximately 20% of the speed of light, while a typical TEM can operate at 200 kV raising the electron velocity to 70% the speed of light. We therefore need to take relativistic effects into account. The relativistic relation between energy and momentum is E² = (pc)² + (mc²)², and it can be shown that,\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1206", "title": "Atomic orbital", "section": "Section::::Electron placement and the periodic table.:Relativistic effects.\n", "start_paragraph_id": 111, "start_character": 0, "end_paragraph_id": 111, "end_character": 1405, "text": "In the Bohr Model, an electron has a velocity given by formula_40, where Z is the atomic number, formula_41 is the fine-structure constant, and c is the speed of light. In non-relativistic quantum mechanics, therefore, any atom with an atomic number greater than 137 would require its 1s electrons to be traveling faster than the speed of light. Even in the Dirac equation, which accounts for relativistic effects, the wave function of the electron for atoms with formula_42 is oscillatory and unbounded. The significance of element 137, also known as untriseptium, was first pointed out by the physicist Richard Feynman. Element 137 is sometimes informally called feynmanium (symbol Fy). However, Feynman's approximation fails to predict the exact critical value of Z due to the non-point-charge nature of the nucleus and very small orbital radius of inner electrons, resulting in a potential seen by inner electrons which is effectively less than Z. The critical Z value, which makes the atom unstable with regard to high-field breakdown of the vacuum and production of electron-positron pairs, does not occur until Z is about 173. These conditions are not seen except transiently in collisions of very heavy nuclei such as lead or uranium in accelerators, where such electron-positron production from these effects has been claimed to be observed. 
See Extension of the periodic table beyond the seventh period.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "894774", "title": "Emission theory", "section": "Section::::Refutations of emission theory.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 590, "text": "c' = c + kv, where \"c\" is the speed of light, \"v\" that of the source, \"c' \" the resultant speed of light, and \"k\" a constant denoting the extent of source dependence which can attain values between 0 and 1. According to special relativity and the stationary aether, \"k\"=0, while emission theories allow values up to 1. Numerous terrestrial experiments have been performed, over very short distances, where no \"light dragging\" or extinction effects could come into play, and again the results confirm that light speed is independent of the speed of the source, conclusively ruling out emission theories.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24383048", "title": "Cherenkov radiation", "section": "Section::::Physical origin.:Basics.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 613, "text": "While electrodynamics holds that the speed of light \"in a vacuum\" is a universal constant (\"c\"), the speed at which light propagates in a material may be significantly less than \"c\". For example, the speed of the propagation of light in water is only 0.75\"c\". Matter can be accelerated beyond this speed (although still to less than \"c\") during nuclear reactions and in particle accelerators. Cherenkov radiation results when a charged particle, most commonly an electron, travels through a dielectric (electrically polarizable) medium with a speed greater than that at which light propagates in the same medium.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1437696", "title": "Velocity-addition formula", "section": "Section::::History.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 802, "text": "The speed of light in a fluid is slower than the speed of light in vacuum, and it changes if the fluid is moving along with the light. In 1851, Fizeau measured the speed of light in a fluid moving parallel to the light using an interferometer. Fizeau's results were not in accord with the then-prevalent theories. Fizeau experimentally correctly determined the zeroth term of an expansion of the relativistically correct addition law in terms of v/c, as is described below. Fizeau's result led physicists to accept the empirical validity of the rather unsatisfactory theory by Fresnel that a fluid moving with respect to the stationary aether \"partially\" drags light with it, i.e. the speed is c/n + v(1 - 1/n^2) instead of c/n + v, where c is the speed of light in the aether, n is the refractive index of the fluid, and v is the speed of the fluid with respect to the aether.\n", "bleu_score": null, "meta": null } ] } ]
null
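The slinky answer's closing point, that the wave speed is fixed by properties of the medium (or field) rather than by any ramp-up, can be checked numerically. A minimal Python sketch, using the standard values of the vacuum permeability and permittivity; the slinky's tension and linear density below are made-up illustration numbers:

```python
import math

# Vacuum permeability (H/m) and permittivity (F/m): the electromagnetic
# analog of the slinky's "how heavy / how tightly stretched" properties.
mu_0 = 4 * math.pi * 1e-7        # H/m (classical defined value)
epsilon_0 = 8.8541878128e-12     # F/m

# An electromagnetic wave propagates at c = 1 / sqrt(mu_0 * epsilon_0);
# there is no acceleration phase, the medium sets the speed outright.
c = 1.0 / math.sqrt(mu_0 * epsilon_0)
print(f"c = {c:.6e} m/s")        # ~2.998e8 m/s

# For comparison, a transverse wave on a stretched slinky or string moves
# at v = sqrt(T / mu); T and mu here are invented for illustration.
T = 2.0     # tension in newtons (made up)
mu = 0.5    # linear mass density in kg/m (made up)
print(f"slinky wave speed = {math.sqrt(T / mu):.2f} m/s")
```

The same structure appears in both prints: speed comes out of two medium constants, which is why neither wave ever has to "get up to speed."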
1v1gd5
when i'm hungover why do i always crave greasy foods like pizza rather than foods that are better for me?
[ { "answer": "Your body is depleted of various electrolytes and calories since alcohol zaps your blood sugar levels, and dehydrates you, fatty salty food is an efficient albeit unhealthy way to replenish those stores. ", "provenance": null }, { "answer": "FWIW: Fat is not bad for you, carbohydrates are.", "provenance": null }, { "answer": "_URL_0_ this is a really informative video about hangovers and greasy foods :)", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "51548076", "title": "Cuisine of Pembrokeshire", "section": "Section::::Traditions associated with St David.\n", "start_paragraph_id": 67, "start_character": 0, "end_paragraph_id": 67, "end_character": 649, "text": "Everyone relieves his weary limbs by partaking of dinner, but not to excess - for being filled to excess, even with bread on its own, gives rise to dissipation - rather, everyone receives a meal according to the varying condition of their bodies or their age. They do not serve dishes of different flavours, nor richer types of food, but feeding on bread and herbs seasoned with salt, they quench their burning thirst with a temperate kind of drink. Then, for either the sick, those advanced in age, or likewise those tired by a long journey, they provide some other pleasures of tastier food, for it is not to be dealt out to all in equal measure.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36168126", "title": "Iranian traditional medicine", "section": "Section::::Choleric, sanguine, melancholic, phlegmatic.:Choleric: warm and dry.\n", "start_paragraph_id": 177, "start_character": 0, "end_paragraph_id": 177, "end_character": 239, "text": "BULLET::::- Due to the warmness in their body they soon digest food and as they don't have adequate amount of nutrient supply to provide the energy needs of the body they soon get hungry and irritated if they don't get enough food timely.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1324898", "title": "Food addiction", "section": "Section::::Signs and symptoms.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 478, "text": "Food addiction has some physical signs and symptoms. Decreased energy; not being able to be as active as in the past, not being able to be as active as others around, also a decrease in efficiency due to the lack of energy. Having trouble sleeping; being tired all the time such as fatigue, oversleeping, or the complete opposite and not being able to sleep such as insomnia. Other physical signs and symptoms are restlessness, irritability, digestive disorders, and headaches.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36924117", "title": "Hunger in the United States", "section": "Section::::Causes.:Agricultural policy.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 805, "text": "Another cause of hunger is related to agricultural policy. Due to the heavy subsidization of crops such as corn and soybeans, healthy foods such as fruits and vegetables are produced in lesser abundance and generally cost more than highly processed, packaged goods. Because unhealthful food items are readily available at much lower prices than fruits and vegetables, low-income populations often heavily rely on these foods for sustenance. As a result, the poorest people in the United States are often simultaneously undernourished and overweight or obese. 
This is because highly processed, packaged goods generally contain high amounts of calories in the form of fat and added sugars yet provide very limited amounts of essential micronutrients. These foods are thus said to provide \"empty calories.\" \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30816165", "title": "The Naked Ghost, Burp! and Blue Jam", "section": "Section::::Burp!\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 344, "text": "A boy buys junk food from the school canteen every day. His teacher gets annoyed, as does one of his classmates. But when he moves to a new house, he finds a spellbook; one of the spells allows him to pass his obesity to others. So every day, he eats enough junk food to make him sick; whenever someone insults him, he casts the spell on them.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24768", "title": "Pizza", "section": "Section::::Health concerns.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 414, "text": "Some mass-produced pizzas by fast food chains have been criticized as having an unhealthy balance of ingredients. Pizza can be high in salt, fat and calories (food energy). The USDA reports an average sodium content of 5,101 mg per pizza in fast food chains. There are concerns about negative health effects. Food chains have come under criticism at various times for the high salt content of some of their meals.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1355944", "title": "Mr Creosote", "section": "Section::::Synopsis.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 525, "text": "He finishes the feast, and several other courses, vomiting profusely all over himself, his table, and the restaurant's staff throughout his meal, causing other diners to lose their appetite, and in some cases, throw up as well. Finally, after being persuaded by the smooth maître d' to eat a single \"wafer-thin mint\", his stomach begins to rapidly expand until it explodes: covering the restaurant and diners with viscera and partially digested food—even starting a \"vomit-wave\" among the other diners, who leave in disgust.\n", "bleu_score": null, "meta": null } ] } ]
null
495n8d
why does this camera distortion happen?
[ { "answer": "That is buffeting. Something is shaking the back of a digital video camera. You're seeing the effect of that vibration beat frequency interacting with the camera's 50-60Hz frame rate.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "12448204", "title": "Image quality", "section": "Section::::Image quality attributes.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 391, "text": "BULLET::::- Distortion is an aberration that causes straight lines to curve. It can be troublesome for architectural photography and metrology (photographic applications involving measurement). Distortion tends to be noticeable in low cost cameras, including cell phones, and low cost DSLR lenses. It is usually very easy to see in wide angle photos. It can be now be corrected in software.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "146903", "title": "Texture mapping", "section": "Section::::Rasterisation algorithms.:Affine texture mapping.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 217, "text": "This leads to noticeable distortion with perspective transformations (see figure – textures (the checker boxes) appear bent), especially as primitives near the camera. Such distortion may be reduced with subdivision.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53586358", "title": "Conservation and restoration of film", "section": "Section::::Acetate deterioration.:Types of deterioration.:Distortion.\n", "start_paragraph_id": 76, "start_character": 0, "end_paragraph_id": 76, "end_character": 316, "text": "Distortion is caused by uneven shrinkage across a film's dimension and starts to warp or curl. This can be due to the difference in shrinkage between the film and emulsion layers, or areas of the film that start to shrink more than other areas. Temporary distortion can be reversed, but permanent distortion cannot.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53586358", "title": "Conservation and restoration of film", "section": "Section::::Nitrate deterioration.:Types of deterioration.:Distortion.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 316, "text": "Distortion is caused by uneven shrinkage across a film's dimension and starts to warp or curl. This can be due to the difference in shrinkage between the film and emulsion layers, or areas of the film that start to shrink more than other areas. Temporary distortion can be reversed, but permanent distortion cannot.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "511025", "title": "Perspective projection distortion", "section": "Section::::Cause.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 395, "text": "It logically follows that all film photography (now almost in disuse) distorted the image beheld by the eye, among other reasons because the film surface was flat in the manner of the picture plane. Artifactual characteristics of a camera lens may aggravate the distortion. 
This is demonstrated with a pinhole camera which has no lens but which produces the same distortion as described herein.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2704", "title": "Optical aberration", "section": "Section::::Theory of monochromatic aberration.:Distortion of the image.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 910, "text": "Even if the image is sharp, it may be distorted compared to ideal pinhole projection. In pinhole projection, the magnification of an object is inversely proportional to its distance to the camera along the optical axis so that a camera pointing directly at a flat surface reproduces that flat surface. Distortion can be thought of as stretching the image non-uniformly, or, equivalently, as a variation in magnification across the field. While \"distortion\" can include arbitrary deformation of an image, the most pronounced mode of distortion produced by conventional imaging optics is \"barrel distortion\", in which the center of the image is magnified more than the perimeter (figure 3a). The reverse, in which the perimeter is magnified more than the center, is known as \"pincushion distortion\" (figure 3b). This effect is called lens distortion or image distortion, and there are algorithms to correct it.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5250453", "title": "Perspective control", "section": "", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 527, "text": "The popularity of amateur photography has made distorted photos made with cheap cameras so familiar that many people do not immediately realise the distortion. This \"distortion\" is relative only to the accepted norm of constructed perspective (where vertical lines in reality do not converge in the constructed image), which in itself is distorted from a true perspective representation (where lines that are vertical in reality would begin to converge above and below the horizon as they become more distant from the viewer).\n", "bleu_score": null, "meta": null } ] } ]
null
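The buffeting answer hinges on the beat between a fast vibration and the frame rate, which is ordinary sampling aliasing. A small Python sketch of that folding; the 60 fps frame rate is a typical illustrative value, the vibration frequency is made up, and the function name is ours rather than from any library:

```python
def aliased_frequency(f_vibration: float, f_frame: float) -> float:
    """Apparent frequency of a sinusoid of frequency f_vibration when it is
    sampled at f_frame, using the standard frequency-folding relation."""
    f = f_vibration % f_frame          # fold into [0, f_frame)
    return min(f, f_frame - f)         # reflect the upper half of the band

f_frame = 60.0      # frames per second (illustrative video frame rate)
f_vibration = 58.5  # Hz, made-up buffeting frequency shaking the camera

# A 58.5 Hz shake filmed at 60 fps shows up as a slow 1.5 Hz wobble: the
# "beat" you see as the distortion crawling through the frame.
print(aliased_frequency(f_vibration, f_frame))  # -> 1.5
```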
2qpgsb
what do car fog lights *actually* do?
[ { "answer": "Fog lights are not for you. It's for other drivers so that they can see you better!\n\nI always turn them on during heavy rain or fog", "provenance": null }, { "answer": "Fog lights produce a short but wide beam spread which illuminates the road close to the front of the vehicle. The driver can then see the edges of the road and slightly ahead without the blinding glare primary headlights would create during a heavy fog. \n\nMy guess is only those who have been in a very thick fog or snowstorm appreciate the value of having fog lights -- the rest just have it for show.", "provenance": null }, { "answer": "This is where EU and USA differ. In the EU the rear for lights are most important, so no one runs into you as you creep along. In the US only the front fog lights are important so you don't need to slow down in your charge into the future ", "provenance": null }, { "answer": "Fog lights are an optional second set of front headlights that aim downwards so you can see the road about 10 feet ahead in poor visibility.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2357908", "title": "Automotive lighting", "section": "Section::::Forward illumination.:Auxiliary lamps.:Front fog lamps.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 383, "text": "The respective purposes of front fog lamps and driving lamps are often confused, due in part to the misconception that fog lamps are necessarily selective yellow, while any auxiliary lamp that makes white light is a driving lamp. Automakers and aftermarket parts and accessories suppliers frequently refer interchangeably to \"fog lamps\" and \"driving lamps\" (or \"fog/driving lamps\").\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2357908", "title": "Automotive lighting", "section": "Section::::Forward illumination.:Auxiliary lamps.:Front fog lamps.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 358, "text": "Front fog lamps provide a wide, bar-shaped beam of light with a sharp cutoff at the top, and are generally aimed and mounted low. They may produce white or selective yellow light, and were designed for use at low speed to increase the illumination directed towards the road surface and verges in conditions of poor visibility due to rain, fog, dust or snow.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2357908", "title": "Automotive lighting", "section": "Section::::Forward illumination.:Auxiliary lamps.:Front fog lamps.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 583, "text": "In most countries, weather conditions rarely necessitate the use of front fog lamps and there is no legal requirement for them, so their primary purpose is frequently cosmetic. They are often available as optional extras or only on higher trim levels of many cars. An SAE study has shown that in the United States more people inappropriately use their fog lamps in dry weather than use them properly in poor weather. 
Because of this, use of the fog lamps when visibility is not seriously reduced is often prohibited in most jurisdictions; for example, in New South Wales, Australia:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14749792", "title": "Decorative vehicle lighting", "section": "Section::::Cars.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 272, "text": "Custom cars sometimes have indirect lighting underneath, glowing a color like green or purple which could not be confused with that of an emergency or other vehicle's normal lighting. These can be provided by strips of tubes of cold-cathode fluorescent lighting, or LEDs.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13673345", "title": "Car", "section": "Section::::Lighting.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 793, "text": "Cars are typically fitted with multiple types of lights. These include headlights, which are used to illuminate the way ahead and make the car visible to other users, so that the vehicle can be used at night; in some jurisdictions, daytime running lights; red brake lights to indicate when the brakes are applied; amber turn signal lights to indicate the turn intentions of the driver; white-coloured reverse lights to illuminate the area behind the car (and indicate that the driver will be or is reversing); and on some vehicles, additional lights (e.g., side marker lights) to increase the visibility of the car. Interior lights on the ceiling of the car are usually fitted for the driver and passengers. Some vehicles also have a trunk light and, more rarely, an engine compartment light.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2843559", "title": "Emergency vehicle lighting", "section": "Section::::Mounting types.:Vehicle integral.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 339, "text": "Sometimes, the existing lighting on a vehicle is modified to create warning beacons. In the case of wig-wag lighting, this involves adding a device to alternately flash the high-beam headlights, or, in some countries, the rear fog lights. It can also involve drilling out other lights on the vehicle to add ‘hideaway’ or ‘corner strobes’.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1588525", "title": "Fog machine", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 763, "text": "A fog machine, fog generator, or smoke machine is a device that emits a dense vapor that appears similar to fog or smoke. This artificial fog is most commonly used in professional entertainment applications, but smaller, more affordable fog machines are becoming common for personal use. Fog machines can also be found in use in a variety of industrial, training, and some military applications. Typically, fog is created by vaporizing proprietary water and glycol-based or glycerin-based fluids or through the atomization of mineral oil. This fluid (often referred to colloquially as \"fog juice\") vaporizes or atomizes inside the fog machine. Upon exiting the fog machine and mixing with cooler outside air the vapor condenses, resulting in a thick visible fog.\n", "bleu_score": null, "meta": null } ] } ]
null
edvhss
is it true that consuming your own species' flesh can cause madness?
[ { "answer": "I think you’re thinking of Kuru. Here’s the definition:\n\nKuru is a very rare disease. It is caused by an infectious protein (prion) found in contaminated human brain tissue. Kuru is found among people from New Guinea who practiced a form of cannibalism in which they ate the brains of dead people as part of a funeral ritual.", "provenance": null }, { "answer": "When humans use animals for food, they use what they think are the best parts. Some bright spark had the idea that the leftovers - the parts we didn’t want to eat (nervous system tissues) - could be fed back to the animals as a protein supplement.\n\nHowever, what they didn’t realise was an organism (prions) that caused certain diseases (BSE in cows), could survive the standard treatment process and get into the food chain.\n\nOver time, it became clear that these prions destroyed the brains of the animals that were eating the protein supplement. This brain damage caused the animals to act unnaturally, and the lay terms similar to “mad cow disease” became popular.", "provenance": null }, { "answer": "Its not so much flesh, it's brain. \n\n\nSo there are these really scary things called \"prions\". They are pretty much a copy of proteins that your body makes, but they get jumbled up and then start jumbling up all the other copies of that protien that they come in contact with. \n\nThere are a few different prion diseases humans can get, and they all have terrible symptoms, there is no cure, and you will %100 die if you have it.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "414267", "title": "Lust murder", "section": "Section::::Characteristics.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 328, "text": "Although the dynamic of violent fantasy in lust murders is understood, an individual's violence fantasy alone is not enough to determine if an individual has or has not engaged in lust murder. Moreover, to conclude that an individual is a violent psychopath because they have drawn multitudes of violent images is overreaching.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "45653960", "title": "Shadow of the Demon Lord", "section": "Section::::Game.:Game Mechanics.:Insanity and Corruption.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 461, "text": "Characters may gain insanity when they see or experience something that strains the way they understand the world or something that harms them in a way that’s difficult to accept. Events which can inflict insanity include coming back from the dead, suffering a grievous wound, witnessing the brutal death of a loved one, or seeing a 30-foot tall demon waddle across the countryside as slime-covered, fleshy monstrosities spill from its countless drooling maws.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "149411", "title": "Insanity", "section": "Section::::Historical views and treatment.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 744, "text": "Madness, the non-legal word for insanity, has been recognized throughout history in every known society. Some traditional cultures have turned to witch doctors or shamans to apply magic, herbal mixtures, or folk medicine to rid deranged persons of evil spirits or bizarre behavior, for example. Archaeologists have unearthed skulls (at least 7000 years old) that have small, round holes bored in them using flint tools. 
It has been conjectured that the subjects may have been thought to have been possessed by spirits which the holes would allow to escape. However, more recent research on the historical practice of trepanning supports the hypothesis that this procedure was medical in nature and intended as a means of treating cranial trauma.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18469573", "title": "Asylum confinement of Christopher Smart", "section": "Section::::Background.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 349, "text": "[we] find that Madness is, contrary to the opinion of some unthinking persons, as manageable as many other distempers, which are equally dreadful and obstinate, and yet are not looked upon as incurable, and that such unhappy objects ought by no means to be abandoned, much less shut up in loathsome prisons as criminals or nuisances to the society.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3174322", "title": "Teratophilia", "section": "Section::::In general.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 496, "text": "Teratophilia is classified as a paraphilia. Rather than view the condition as a kink, defenders of teratophilia believe it allows people to see beauty outside of societal standards. Among other things, it has been suggested that monsters can function as an escapist fantasy for some straight women, since the monster is able to embody masculine attributes without presenting itself as a man; this may embody trauma and terror in extreme cases, or at the least aggravate patriarchal arrangements.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35012925", "title": "Une Fenêtre ouverte", "section": "Section::::Synopsis.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 294, "text": "Can madness be described? Is it possible to express the pain that it entails? In 1994, when she was about to fall prey to her illness, Khady Sylla met Aminta Ngom, who exhibited her madness freely, without fear of provocation. During her years of suffering, Aminta was her window to the world.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6229866", "title": "Spider cannibalism", "section": "Section::::Non-reproductive cannibalism.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 934, "text": "Some spiders, such as \"Pholcus phalangioides\", will prey on their own kind when food is scarce. Also, females of \"Phidippus johnsoni\" have been observed carrying dead males in their fangs. This behavior may be triggered by aggression, where females carry over hostility from their juvenile state and consume males just as they would prey. Sih and Johnson surmise that non-reproductive cannibalism can occur due to a remnant of an aggression trait in juvenile females. Known as the \"aggressive spillover hypothesis\", this tendency to unselectively attack anything that moves is cultivated by a positive correlation between hostility, foraging capability, and fecundity. Aggression at a young age leads to an increase in prey consumption and as such, a larger adult size. This behavior \"spills over\" into adulthood, and shows up as a nonadaptive trait that manifests itself through adult females preying on males of their same species.\n", "bleu_score": null, "meta": null } ] } ]
null
3ct6wt
does a person who has unprotected sex for 15 seconds have the same exposure to sti's that a person who has unprotected sex for 15 minutes?
[ { "answer": "No, person B has much more exposure to the STI. However, depending on the STI and whether the other person is actively showing symptoms, 15 seconds and 15 minutes might not make much of a difference in terms of whether or not the person gets infected.", "provenance": null }, { "answer": "I want to say yes and no. It's like the classic 5 second rule, when you drop that piece of honey ham on the ground bacteria basically spontaneously transfers. The transfer won't be as severe if you pick it up immediately, the longer it sits the worse it will get. Moral, protect yourself, don't do people with sti's", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "9064442", "title": "Adolescent sexuality in the United States", "section": "Section::::Physical effects.:Sexually transmitted infections.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 281, "text": "Lloyd Kolbe, director of the Center for Disease Control's Adolescent and School Health program, called the STI problem \"a serious epidemic.\" The younger an adolescent is when they first have any type of sexual relations, including oral sex, the more likely they are to get an STI.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17601160", "title": "Economic epidemiology", "section": "Section::::Prevalence-dependence.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 412, "text": "Recent analysis suggests that an individual’s likelihood of engaging in unprotected sex is related to their personal analysis of risk, with those who believed that receiving HAART or having an undetectable viral load protects against transmitting HIV or who had reduced concerns about engaging in unsafe sex given the availability of HAART were more likely to engage in unprotected sex regardless of HIV status.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32304375", "title": "Partner notification", "section": "Section::::Practice.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 430, "text": "A 2002 survey in the United States showed that with regards to STIs, healthcare providers conduct screenings with less frequency than recommended by health department guidelines. Furthermore, when a person was found to have a sexually transmitted infection, it was much more common for the physician to ask the person to notify their partners rather than for the physician to arrange for this to be done on behalf of the patient.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15179951", "title": "Human sexuality", "section": "Section::::Sexual behavior.:General activities and health.\n", "start_paragraph_id": 90, "start_character": 0, "end_paragraph_id": 90, "end_character": 647, "text": "Sexual intercourse can also be a disease vector. There are 19 million new cases of sexually transmitted diseases (STD) every year in the U.S., and worldwide there are over 340 million STD infections each year. More than half of these occur in adolescents and young adults aged 15–24 years. At least one in four U.S. teenage girls has a sexually transmitted disease. In the U.S., about 30% of 15- to 17-year-olds have had sexual intercourse, but only about 80% of 15- to 19-year-olds report using condoms for their first sexual intercourse. 
In one study, more than 75% of young women aged 18–25 years felt they were at low risk of acquiring an STD.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49765748", "title": "Sex education in India", "section": "Section::::Types of sex education.:HIV/AIDS and STD prevention education.:Efficacy.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 549, "text": "Additionally, a 2008 survey conducted among class 11 and 12 girls (aged 14 to 19; mean age was 16.38) in South Delhi found that 71% had no knowledge about the effects of genital herpes. 43% did not know the effects of syphilis and 28% did not know gonorrhoea was an STD. 46% thought that all STDs, except AIDS, could be cured. The major sources of information about STDs and safe sex among the girls were their friends (76%), media (72%), books and magazines (65%) or the internet (52%). 48% felt that they could not talk to their parents about sex.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18006737", "title": "Gonorrhea", "section": "Section::::Cause.:Spread.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 461, "text": "The infection is usually spread from one person to another through vaginal, oral, or anal sex. Men have a 20% risk of getting the infection from a single act of vaginal intercourse with an infected woman. The risk for men that have sex with men (MSM) is higher. Active MSM may get a penile infection, while passive MSM may get anorectal gonorrhea. Women have a 60–80% risk of getting the infection from a single act of vaginal intercourse with an infected man.\n", "bleu_score": null, "meta": null } ] } ]
null
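To make the duration point in the answers concrete, here is a deliberately crude constant-hazard sketch in Python. The hazard value is invented purely for illustration and is not an epidemiological figure; the model only shows that risk grows with exposure time, yet a very short exposure is already nonzero, matching the "yes and no" above:

```python
import math

def infection_probability(hazard_per_second: float, seconds: float) -> float:
    """Toy constant-hazard model: p(t) = 1 - exp(-lambda * t)."""
    return 1.0 - math.exp(-hazard_per_second * seconds)

lam = 0.001  # per-second hazard, made up for illustration only
for t in (15, 15 * 60):
    print(f"{t:>4} s of exposure -> p = {infection_probability(lam, t):.3f}")
# 15 s -> ~0.015; 900 s -> ~0.593. Longer exposure means more cumulative
# risk, but the probability saturates rather than growing linearly.
```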
aggsvf
nyquist theorem perfect signal reproduction
[ { "answer": "The key element you are missing is the Nyquist limit. For perfect reconstruction the signal must have all frequencies at less than half the sampling rate. This limiting of the frequencies guarantees that only one continuous wave could have produced those samples. In the case you mentioned of sampling a sine wave at the peaks and troughs (which doesn't actually quite meet the Nyquist limit) cannot have been produced by a saw tooth wave. This is because a saw tooth wave would have very frequencies exceeding the Nyquist limit. A saw tooth wave isn't continuous either for that matter. ", "provenance": null }, { "answer": " > Is there some type of assumed curvature\n\nThere is usually a low pass filter on the analog circuits to avoid aliasing. The filter's performance is very important. ", "provenance": null }, { "answer": "The digital to analog convertor will adjust the output at the beginning of each sample, and hold it till the beginning of the next sample, creating a stairstep pattern. Any waveform, including this stairstep can be reconstructed by adding sine waves together. The frequency of the sine waves needed to produce the sharp jumps from one sample to the next will be far above the Nyquist frequency. Those frequencies were never in the original signal, so you want to cut them out, which the DAC does with a lowpass filter. That filter will (theoretically) roll off all frequencies above the Nyquist frequency. If you look at the resulting output, it will look (theoretically) identical to the input signal before you digitized it.\n\nI saw a video where someone used a signal generator and an oscilloscope to compare the signal from the generator and the one being replayed from the sound card. They were basically the same even though he was able to show the stairstep pattern in his digital audio workstation. That's why when vinyl bugs tell you that digital sounds worse because it's choppy, and they show you the stairsteps, they are misinformed.\n\nI said theoretically a couple of times because you can't get a filter to let all frequencies below a cutoff through, and mute all frequencies above that cutoff. In the real world, frequencies are attenuated (their power reduced) starting at the cutoff , and this attenuation becomes more and more extreme at higher frequencies. That's partly why, the other being historical reasons, we use a sampling rate of 44100 Hz, which puts the Nyquist frequency at 22050 Hz, when the limit of human hearing is only around 20,000 Hz, and even then only for young people. The extra 2050 Hz leaves room for cheap lowpass filters to do their work.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1697331", "title": "Nyquist stability criterion", "section": "Section::::Nyquist plot.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 699, "text": "A Nyquist plot is a parametric plot of a frequency response used in automatic control and signal processing. The most common use of Nyquist plots is for assessing the stability of a system with feedback. In Cartesian coordinates, the real part of the transfer function is plotted on the X axis. The imaginary part is plotted on the Y axis. The frequency is swept as a parameter, resulting in a plot per frequency. The same plot can be described using polar coordinates, where gain of the transfer function is the radial coordinate, and the phase of the transfer function is the corresponding angular coordinate. 
The Nyquist plot is named after Harry Nyquist, a former engineer at Bell Laboratories.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "60116911", "title": "Signal (model checking)", "section": "Section::::Properties.:Bipartite signal.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 271, "text": "A signal is said to be \"bipartite\" if the sequence of intervals starts with a singular interval - i.e. a closed interval whose lower and upper bounds are equal, hence a set which is a singleton - and if the sequence alternates between singular intervals and open intervals. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "561949", "title": "Bandlimiting", "section": "Section::::Sampling bandlimited signals.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 286, "text": "The signal whose Fourier transform is shown in the figure is also bandlimited. Suppose x(t) is a signal whose Fourier transform is X(f), the magnitude of which is shown in the figure. The highest frequency component in x(t) is B. As a result, the Nyquist rate is R_N = 2B.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1894582", "title": "Dielectric spectroscopy", "section": "Section::::Principles.:Dynamic behavior.:Double layer capacitance.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 237, "text": "The Nyquist diagram of the impedance of the circuit shown in Fig. 3 is a semicircle with a diameter R and an angular frequency at the apex equal to 1/(RC) (Fig. 3). Other representations, Bode plots, or Black charts can be used.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23551012", "title": "Non-uniform discrete Fourier transform", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 483, "text": "In applied mathematics, the nonuniform discrete Fourier transform (NUDFT or NDFT) of a signal is a type of Fourier transform, related to a discrete Fourier transform or discrete-time Fourier transform, but in which the input signal is not sampled at equally spaced points or frequencies (or both). It is a generalization of the shifted DFT. It has important applications in signal processing, magnetic resonance imaging, and the numerical solution of partial differential equations.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "857897", "title": "Time–frequency analysis", "section": "Section::::Motivation.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 714, "text": "Whereas the technique of the Fourier transform can be extended to obtain the frequency spectrum of any slowly growing locally integrable signal, this approach requires a complete description of the signal's behavior over all time. Indeed, one can think of points in the (spectral) frequency domain as smearing together information from across the entire time domain. While mathematically elegant, such a technique is not appropriate for analyzing a signal with indeterminate future behavior. 
For instance, one must presuppose some degree of indeterminate future behavior in any telecommunications systems to achieve non-zero entropy (if one already knows what the other person will say one cannot learn anything).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "186963", "title": "Morlet wavelet", "section": "Section::::Uses.:Use in music.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 318, "text": "The Morlet wavelet transform method is applied to music transcription. It produces very accurate results that were not possible using Fourier transform techniques. The Morlet wavelet transform is capable of capturing short bursts of repeating and alternating music notes with a clear start and end time for each note.\n", "bleu_score": null, "meta": null } ] } ]
null
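The third Nyquist answer describes the DAC's low-pass reconstruction step; the textbook idealization of that step is Whittaker-Shannon (sinc) interpolation. A short NumPy sketch under illustrative parameters, truncated to a finite window of samples, so the recovered values are approximate rather than exact:

```python
import numpy as np

fs = 8.0                 # sample rate in Hz (illustrative)
f0 = 3.0                 # sine frequency, safely below Nyquist (fs/2 = 4 Hz)
n = np.arange(-64, 64)   # finite window of sample indices
samples = np.sin(2 * np.pi * f0 * n / fs)

def reconstruct(t: float) -> float:
    """Ideal low-pass reconstruction: x(t) = sum_k x[k] * sinc(fs*t - k)."""
    return float(np.sum(samples * np.sinc(fs * t - n)))

# Evaluate between the sample instants: the samples pin down the original
# band-limited sine; sharp-edged candidates like a sawtooth are ruled out.
for t in (0.05, 0.13, 0.20):
    print(f"x({t:.2f}) ~ {reconstruct(t):+.4f}   true: {np.sin(2*np.pi*f0*t):+.4f}")
```

Running this shows the interpolated values matching the continuous sine to within the truncation error of the finite window, which is the thread's point that the samples determine exactly one band-limited waveform.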