Columns: id (stringlengths 5-6), input (stringlengths 3-301), output (list), meta (null)
2l43qs
why does the us use fahrenheit when the rest of the world uses celsius?
[ { "answer": "Because most of us are around 5 feet 7 inches and weigh around 200lbs and drive at an average speed of 50 miles per hour and have temperatures of 98.6 degrees Fahrenheit. \nThe US customary system developed from English units which were in use in the British Empire before American independence. Consequently most US units are virtually identical to the British imperial units. However, the British system was overhauled in 1824. Advocates of the customary system saw the French Revolutionary, or metric, system as atheistic. ", "provenance": null }, { "answer": "The Americans weren't invited (by the French) to the summit where the rest of The World/Europe made standardized units. \n\nI believe the podcast \"how stuff works\" had an episode about this a few years ago.", "provenance": null }, { "answer": "There is no valid reason why you shouldn't be using it.\n\nIt does carry a significant cost in both time and resources to change though. I remember when we did it in Canada. It is not a small job to change how you measure pretty much everything. That may be why you still haven't adopted it.", "provenance": null }, { "answer": "Whenever people get into a metric vs. US debate, they make it seem like Americans have no idea what the metric system is. We use both, we learn both, we just use our system more.", "provenance": null }, { "answer": "There is really no reason to change. Metric is used where necessary/practical here.", "provenance": null }, { "answer": "Let's build a cube. A meter tall (or 100 centimeters, 1,000 ..*millimeters*) wide and deep. Let's now fill it with water: Exactly 1,000 liters of water will be needed. For a weight of 1,000 kilos, or a ton. This water will boil at 100 degrees C and freeze at 0C.\n\n*oh my god metric's haaaard, man!*", "provenance": null }, { "answer": "All joking aside, Americans actually learn both, and use them interchangeably. I.e. 
Soda comes in 2-liter bottles, milk comes in gallons, hospitals measure liquids in cc's, and weigh/measure newborns in pounds/feet and inches.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "11524", "title": "Fahrenheit", "section": "Section::::Usage.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 351, "text": "The Fahrenheit scale was the primary temperature standard for climatic, industrial and medical purposes in English-speaking countries until the 1960s. In the late 1960s and 1970s, the Celsius scale replaced Fahrenheit in almost all of those countries—with the notable exception of the United States—typically during their general metrication process.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8663", "title": "Daniel Gabriel Fahrenheit", "section": "Section::::Fahrenheit scale.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 226, "text": "The Fahrenheit scale was the primary temperature standard for climatic, industrial and medical purposes in English-speaking countries until the 1970s, nowadays replaced by the Celsius scale long used in the rest of the world.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9520026", "title": "January 1962", "section": "Section::::January 15, 1962 (Monday).\n", "start_paragraph_id": 79, "start_character": 0, "end_paragraph_id": 79, "end_character": 433, "text": "BULLET::::- After the United Kingdom sought to join the European Economic Community, the Meteorological Office first began using Celsius temperature values in its public weather information, following the Fahrenheit values. 
In October, the Celsius values were listed first, and by January 1, 1973, when the government entered the EEC and completed its conversion to the metric system, Fahrenheit numbers were only used occasionally.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11524", "title": "Fahrenheit", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 823, "text": "At the end of the 2010s, Fahrenheit was used as the official temperature scale only in the United States (including its unincorporated territories), its freely associated states in the Western Pacific (Palau, the Federated States of Micronesia and the Marshall Islands), the Bahamas, the Cayman Islands and Liberia. Antigua and Barbuda and other islands which use the same meteorological service, such as Anguilla, the British Virgin Islands, Montserrat and Saint Kitts and Nevis, as well as Bermuda, Belize and the Turks and Caicos Islands, use Fahrenheit and Celsius. All other countries in the world officially now use the Celsius scale, named after Swedish astronomer Anders Celsius. Also, despite metrication in 1962, Fahrenheit is still used occasionally in the United Kingdom, although not in any official contexts.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11524", "title": "Fahrenheit", "section": "Section::::Usage.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 416, "text": "Fahrenheit is used in the United States, its territories and associated states (all served by the U.S. National Weather Service), as well as the Bahamas, the Cayman Islands and Liberia for everyday applications. For example, U.S. weather forecasts, food cooking, and freezing temperatures are typically given in degrees Fahrenheit. 
Scientists, such as meteorologists, use degrees Celsius or kelvin in all countries.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25428398", "title": "Public opinion on global warming", "section": "Section::::Issues.:Media.\n", "start_paragraph_id": 52, "start_character": 0, "end_paragraph_id": 52, "end_character": 385, "text": "September 2011 Angus Reid Public Opinion poll found that Britons (43%) are less likely than Americans (49%) or Canadians (52%) to say that \"global warming is a fact and is mostly caused by emissions from vehicles and industrial facilities\". The same poll found that 20% of Americans, 20% of Britons and 14% of Canadians think \"global warming is a theory that has not yet been proven\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1965462", "title": "Tropical rain belt", "section": "Section::::Northward movement and the effects.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 629, "text": "As the earth warms, the rain belt is projected to move north of the current position. Recent climate change can be attributed to rising carbon dioxide concentrations in the atmosphere; caused by the burning of fossil fuels. The correlation between the concentration of carbon dioxide in the atmosphere and average global temperature is undeniably direct, meaning that as more carbon dioxide is released into the atmosphere, the temperature of the earth is expected to rise as well. Even though the earth is warming as a whole entity, the Northern Hemisphere is warming faster than the Southern because of melting Arctic sea ice.\n", "bleu_score": null, "meta": null } ] } ]
null
1meqgg
how can a country sell bonds with negative interest rate?
[ { "answer": "They can't and don't.\n\nThe thing is that inflation happens. So if the nominal interest rate on the bonds is less than the rate of inflation, the *effective* interest rate is negative. They're still a good deal, though, because cash is also affected by inflation; keeping your money in cash would just make you lose more.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "163115", "title": "Interest rate", "section": "Section::::Negative nominal or real rates.:On government bond yields.\n", "start_paragraph_id": 115, "start_character": 0, "end_paragraph_id": 115, "end_character": 366, "text": "During the European debt crisis, government bonds of some countries (Switzerland, Denmark, Germany, Finland, the Netherlands and Austria) have been sold at negative yields. Suggested explanations include desire for safety and protection against the eurozone breaking up (in which case some eurozone countries might redenominate their debt into a stronger currency).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "695460", "title": "Government debt", "section": "Section::::Risk.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 732, "text": "In practice, the market interest rate tends to be different for debts of different countries. An example is in borrowing by different European Union countries denominated in euros. Even though the currency is the same in each case, the yield required by the market is higher for some countries' debt than for others. This reflects the views of the market on the relative solvency of the various countries and the likelihood that the debt will be repaid. Further, there are historical examples where countries defaulted, i.e., refused to pay their debts, even when they had the ability of paying it with printed money. 
This is because printing money has other effects that the government may see as more problematic than defaulting.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "60737", "title": "Bond (finance)", "section": "Section::::Investing in bonds.\n", "start_paragraph_id": 110, "start_character": 0, "end_paragraph_id": 110, "end_character": 766, "text": "BULLET::::- Fixed rate bonds are subject to \"interest rate risk\", meaning that their market prices will decrease in value when the generally prevailing interest rates rise. Since the payments are fixed, a decrease in the market price of the bond means an increase in its yield. When the market interest rate rises, the market price of bonds will fall, reflecting investors' ability to get a higher interest rate on their money elsewhere — perhaps by purchasing a newly issued bond that already features the newly higher interest rate. This does not affect the interest payments to the bondholder, so long-term investors who want a specific amount at the maturity date do not need to worry about price swings in their bonds and do not suffer from interest rate risk.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1307966", "title": "Impossible trinity", "section": "Section::::Trilemma in practice.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 349, "text": "This involves an increase of the monetary supply, and a fall of the domestically available interest rate. 
Because the internationally available interest rate adjusted for foreign exchange differences has not changed, market participants are able to make a profit by borrowing in the country's currency and then lending abroad a form of carry trade.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2237734", "title": "Mundell–Fleming model", "section": "Section::::Mechanics of the model.:Fixed exchange rate regime.:Changes in the global interest rate.\n", "start_paragraph_id": 70, "start_character": 0, "end_paragraph_id": 70, "end_character": 476, "text": "If the global interest rate declines below the domestic rate, the opposite occurs. The BoP curve shifts down, foreign money flows in and the home currency is pressured to appreciate, so the central bank offsets the pressure by selling domestic currency (equivalently, buying foreign currency). The inflow of money causes the LM curve to shift to the right, and the domestic interest rate becomes lower (as low as the world interest rate if there is perfect capital mobility).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "180311", "title": "Exchange rate", "section": "Section::::Factors affecting the change of exchange rate.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 363, "text": "BULLET::::2. Interest rate level: Interest rates are the cost and profit of borrowing capital. When a country raises its interest rate or its domestic interest rate is higher than the foreign interest rate, it will cause capital inflow, thereby increasing the demand for domestic currency, allowing the currency to appreciate and the foreign exchange depreciate.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "163115", "title": "Interest rate", "section": "Section::::Negative nominal or real rates.\n", "start_paragraph_id": 101, "start_character": 0, "end_paragraph_id": 101, "end_character": 626, "text": "\"Nominal\" interest rates are normally positive, but not always. 
In contrast, \"real\" interest rates can be negative, when nominal interest rates are below inflation. When this is done via government policy (for example, via reserve requirements), this is deemed financial repression, and was practiced by countries such as the United States and United Kingdom following World War II (from 1945) until the late 1970s or early 1980s (during and following the Post–World War II economic expansion). In the late 1970s, United States Treasury securities with negative real interest rates were deemed \"certificates of confiscation\".\n", "bleu_score": null, "meta": null } ] } ]
null
62ya8y
what's the process to get into u.s from mexico legally (immigrate)?
[ { "answer": "It depends. The US will offer x amount of work visas every year for jobs where there is a shortage of American workers (it could be working on farms or being a doctor), student visas and travel visas (you don't need a visa to visit the US if you are Mexican) \n\nAfter you get your foot in the door you are only allowed to stay until your visa expires. After that you are supposed to return. If you want to legally stay, you have to apply for permanent resident status (which means you are not a citizen, but are legally entitled to stay permanently [a green card]). This can be done a few ways. First, you can apply based on family ties: if you marry an American you can apply, if you are unmarried but are the offspring of a US citizen, etc. If you possess extraordinary capabilities or training, if it is of the national interest, or if you belong to a group that is being persecuted and your country's government does not have control of the situation, or are a refugee, you can apply for refugee status. 
If you belong to the last group the state department will investigate your case and grant or deny you asylum; all require an interview with US Citizenship and Immigration Services.\n\nEmployers can also petition the government to grant x amount of visas to do x job because these jobs require workers to be in the US for extended periods of time and there are not enough people in the US that can do it.\n\nAll this does not grant you citizenship; it grants you the right to live in the US. You do not serve on jury duty, and there are certain benefits you are not eligible for, but you are fully protected by the constitution (even if you are here illegally).\n\nGenerally, if you have no special skills then you will not be granted a green card.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "42965915", "title": "Immigration by country", "section": "Section::::By country.:North America.:Mexico.\n", "start_paragraph_id": 56, "start_character": 0, "end_paragraph_id": 56, "end_character": 768, "text": "After the United States returned to a more closed border, immigration has been more difficult than ever for Mexican residents hoping to migrate. Mexico is the leading country of migrants to the U.S.. A Mexican Repatriation program was founded by the United States government to encourage people to voluntarily move to Mexico. However, the program was not found successful and many immigrants were deported against their will. In 2010, there was a total of 139,120 legal immigrants who migrated to the United States. This put Mexico as the top country for emigration. 
In subsequent years China and India have surpassed Mexico as the top sources of immigrants to the United States, and since 2009 there has been a net decline in the number of Mexicans living in the US.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19230", "title": "Foreign relations of Mexico", "section": "Section::::Transnational issues.:Illegal migration.\n", "start_paragraph_id": 130, "start_character": 0, "end_paragraph_id": 130, "end_character": 694, "text": "Almost a third of all immigrants in the U.S. were born in Mexico, being the source of the greatest number of both authorized (20%) and unauthorized (56%) migrants who come to the U.S. every year. Since the early 1990s, Mexican immigrants are no longer concentrated in California, the Southwest, and Illinois, but have been coming to new gateway states, including New York, North Carolina, Georgia, Nevada, and Washington, D.C., in increasing numbers. This phenomenon can be mainly attributed to poverty in Mexico, the growing demand for unskilled labor in the U.S., the existence of established family and community networks that allow migrants to arrive in the U.S. with people known to them.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "327441", "title": "California Republic", "section": "Section::::Background of the Bear Flag Revolt.:Texas, immigration and land.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 740, "text": "Mexican law had long allowed grants of land to naturalized Mexican citizens. Obtaining Mexican citizenship was not difficult and many earlier American immigrants had gone through the process and obtained free grants of land. That same year (1845) anticipation of war with the United States and the increasing number of immigrants reportedly coming from the United States resulted in orders from Mexico City denying immigrants from the United States entry into California. 
The orders also required California's officials not to allow land grants, sales or even rental of land to non-citizen emigrants already in California. All non-citizen immigrants, who had arrived without permission, were threatened with being forced out of California.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15051", "title": "Immigration to the United States", "section": "Section::::History.:Since 1965.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 824, "text": "For those who enter the US illegally across the Mexico–United States border and elsewhere, migration is difficult, expensive and dangerous. Virtually all undocumented immigrants have no avenues for legal entry to the United States due to the restrictive legal limits on green cards, and lack of immigrant visas for low-skilled workers. Participants in debates on immigration in the early twenty-first century called for increasing enforcement of existing laws governing illegal immigration to the United States, building a barrier along some or all of the Mexico-U.S. border, or creating a new guest worker program. Through much of 2006 the country and Congress was immersed in a debate about these proposals. few of these proposals had become law, though a partial border fence had been approved and subsequently canceled.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3395615", "title": "Who Are We? 
The Challenges to America's National Identity", "section": "Section::::Challenges to American identity.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 215, "text": "BULLET::::- Persistence: It is estimated that nearly half a million Mexicans will immigrate to the United States each year until 2030, culminating in nearly a half century of high immigration from a single country.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19224", "title": "Demographics of Mexico", "section": "Section::::International migration.:Immigration to Mexico.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 508, "text": "Discrepancies between the figures of official legal aliens and all foreign-born residents is quite large. The official figure for foreign-born residents in Mexico in 2000 was 493,000, with a majority (86.9%) of these born in the United States (except Chiapas, where the majority of immigrants are from Central America). The six states with the most immigrants are Baja California (12.1% of total immigrants), Mexico City (the \"Federal District\"; 11.4%), Jalisco (9.9%), Chihuahua (9%) and Tamaulipas (7.3%).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1033756", "title": "Repatriation flight program", "section": "", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 475, "text": "Immigrants caught and flown back to Mexico would usually be taken back to Guadalajara, Jalisco, or into Benito Juarez International Airport in Mexico City. Despite the conditions of travelling by foot from Mexico to the United States and treatment by coyotes, a survey run at Mexico's largest international airport by a North American newspaper showed that 50 percent of those returned to Mexico by air would be willing to try to return to the United States illegally again.\n", "bleu_score": null, "meta": null } ] } ]
null
4wzn0d
Why do some places have two high and two low tides a day, and other places have only one?
[ { "answer": "In short, this is because the ocean basins aren't uniform and tides don't have the same impact in all areas. Imagine sloshing water back and forth in a bucket; there will be areas that experience more extreme water level change relative to the regular surface level. \n\nThere are certain areas in the oceans, nodes, which would be like the center of the bucket when sloshing the water: the water height stays relatively level.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "30718", "title": "Tide", "section": "Section::::Characteristics.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 481, "text": "Tides are commonly \"semi-diurnal\" (two high waters and two low waters each day), or \"diurnal\" (one tidal cycle per day). The two high waters on a given day are typically not the same height (the daily inequality); these are the \"higher high water\" and the \"lower high water\" in tide tables. Similarly, the two low waters each day are the \"higher low water\" and the \"lower low water\". The daily inequality is not consistent and is generally small when the Moon is over the Equator.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15102", "title": "Isle of Wight", "section": "Section::::Geography.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 371, "text": "The north coast is unusual in having four high tides each day, with a double high tide every twelve and a half hours. 
This arises because the western Solent is narrower than the eastern; the initial tide of water flowing from the west starts to ebb before the stronger flow around the south of the island returns through the eastern Solent to create a second high water.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2385382", "title": "Rule of twelfths", "section": "Section::::Caveats.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 233, "text": "The rule assumes that all tides behave in a regular manner, this is not true of some geographical locations, such as Poole Harbour or the Solent where there are \"double\" high waters or Weymouth Bay where there is a double low water.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2361543", "title": "Chennai Port", "section": "Section::::Location and geography.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 883, "text": "The tides in the port area are semi-diurnal in nature, that is, occurrence of two high and two low waters every day. The spring tides are up to . The mean tidal range varies from 0.914 m to 1.219 m at spring and from 0.805 m to 0.610 m at neap tides. The change in water levels combined due to astronomical tide, wind setup, wave setup, barometric pressure, seiches and global sea level rise are estimated as 1.57 m, 1.68 m and 1.8 m at 15 m, 10 m and 5 m depth contours, respectively. Waves ranging from 0.4 m to 2.0 m in the deep water around Chennai harbour have been experienced with the predominant being 0.4 m to 1.2 m with wave periods predominantly in the order of 4 to 10 seconds. During cyclone season, waves of height exceeding 2.5 m are common. 
The predominant wave directions during southwest and northeast monsoons are 145° from north and 65° from north, respectively.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30718", "title": "Tide", "section": "Section::::Tidal constituents.:Range variation: springs and neaps.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 306, "text": "Spring tides result in high waters that are higher than average, low waters that are lower than average, 'slack water' time that is shorter than average, and stronger tidal currents than average. Neaps result in less extreme tidal conditions. There is about a seven-day interval between springs and neaps.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "66769", "title": "Bay of Fundy", "section": "Section::::Hydrology.:Tides.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 346, "text": "The tidal range in the Bay of Fundy is about . (The average tidal range worldwide is about .) Some tides are higher than others, depending on the position of the moon, the sun, and atmospheric conditions. Tides are semidiurnal, meaning they have two highs and two lows each day with about six hours and 13 minutes between each high and low tide.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18842323", "title": "Sea", "section": "Section::::Physical science.:Tides.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 1322, "text": "Tidal flows of seawater are resisted by the water's inertia and can be affected by land masses. In places like the Gulf of Mexico where land constrains the movement of the bulges, only one set of tides may occur each day. Inshore from an island there may be a complex daily cycle with four high tides. 
The island straits at Chalkis on Euboea experience strong currents which abruptly switch direction, generally four times per day but up to 12 times per day when the moon and the sun are 90 degrees apart. Where there is a funnel-shaped bay or estuary, the tidal range can be magnified. The Bay of Fundy is the classic example of this and can experience spring tides of . Although tides are regular and predictable, the height of high tides can be lowered by offshore winds and raised by onshore winds. The high pressure at the centre of an anticyclones pushes down on the water and is associated with abnormally low tides while low-pressure areas may cause extremely high tides. A storm surge can occur when high winds pile water up against the coast in a shallow area and this, coupled with a low pressure system, can raise the surface of the sea at high tide dramatically. In 1900, Galveston, Texas experienced a surge during a hurricane that overwhelmed the city, killing over 3,500 people and destroying 3,636 homes.\n", "bleu_score": null, "meta": null } ] } ]
null
bzjaoc
what is academic probation and how would somebody get it?
[ { "answer": "It means your grades are shit and if you don't shape up you're out.\n\nThis is more typical in colleges or private schools since they don't want some slacker tanking their performance numbers.", "provenance": null }, { "answer": "It's when your grades are poor so the school you go to warns you. If they don't improve, there will be negative consequences such as losing funding, certain privileges, or just plain getting kicked out of school.", "provenance": null }, { "answer": "Academic probation happens to a person when their grades drop below a certain level. It usually happens when a person is really struggling with their class work, or when they simply stop doing it altogether. The GPA a person needs to drop below to be placed on academic probation varies from school to school, but it’s usually between the 2.0-2.5 range.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "6158798", "title": "Academic probation", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 626, "text": "Academic probation in the United Kingdom is a period served by a new academic staff member at a university or college when they are first given their job. It is specified in the conditions of employment of the staff member, and may vary from person to person and from institution to institution. In universities founded prior to the Further and Higher Education Act 1992, it is usually three years for academic staff and six months to a year for other staff. 
In the universities created by that Act, and in colleges of higher education, the period is generally just a year across the board, for both academic and other staff.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28682747", "title": "Disciplinary probation", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 351, "text": "Disciplinary probation is a disciplinary status that can apply to students at a higher educational institution or to employees in the workplace. For employees, it can result from both poor performance at work or from misconduct. For students, it results from misconduct alone, with poor academic performance instead resulting in scholastic probation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30551673", "title": "Rehabilitation Policy", "section": "Section::::Policies.:Probation.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 1005, "text": "Probation is a period of time where an offender lives under supervision and under a set of restrictions. Violations of these restrictions could result in arrest. Probation is typically an option for first time offenders with high rehabilitative capacity. At its core, it is \"a substitute for prison\", with the goal being to \"spare the worthy first offender from the demoralizing influences of imprisonment and save him from recidivism\". In the United States, there are 4,162,536 probationers. Probationers are supervised by probation officers just as parolees are supervised by parole officers. Probation officers have similar authority as parole officers do to restrict mobility, social contact, and mandate various other conditions and requirements. Probationers just like parolees are at high risk of imprisonment due to violation of their restrictions that may not be classified as criminal. 
In the United States, 40% of probationers were sent to jail or prison for technical and criminal violations.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2116398", "title": "Probation (workplace)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 553, "text": "In a workplace setting, probation (or probationary period) is a status given to new employees of a company or business or new members of organizations, such churches, associations, clubs or orders. It is widely termed as the Probation Period of an employee. This status allows a supervisor or other company manager to evaluate closely the progress and skills of the newly hired worker, determine appropriate assignments, and monitor other aspects of the employee such as honesty, reliability, and interactions with co-workers, supervisors or customers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "54975972", "title": "Probation in Ukraine", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 304, "text": "Probation is a system of supervision and social-pedagogic activities over offender, ordered by a Court and in accordance to the legislation; enforcement of certain types of a criminal penalty, not concerned the deprivation of liberty and to provide the Court with information characterized the offender.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6158798", "title": "Academic probation", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 684, "text": "The extended length of the probationary period in universities prior to the FHE Act 1992 is the result of an agreement made in 1974 between the University Authorities Panel and the Association of University Teachers, the Academic and Related Salaries Settlement. 
The working party that formed the agreement stated that the purpose of academic probation was to decide, at the end of the probationary period whether a member of staff should be retained, and that this decision is based upon \"the long-term interests of the university itself, of the other members of its staff, and of its students\". The working group set out several criteria that a probationer was expected to satisfy:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "406786", "title": "Probation", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 244, "text": "In some jurisdictions, the term \"probation\" applies only to community sentences (alternatives to incarceration), such as suspended sentences. In others, probation also includes supervision of those conditionally released from prison on parole.\n", "bleu_score": null, "meta": null } ] } ]
null
2er3kp
What were the intentions of Edward, the Black Prince, preceding the battle of Poitiers (1356), was he looking to confront King John II of France?
[ { "answer": "Are you referring to the letter Prince Edward sent to the City of London following his victory? Edward doesn't exactly say he was planning on retreating. Instead, he says he was withdrawing to link up with the Duke of Lancaster after abandoning an assault on Tours. After he rejected the French negotiators, Edward waited at Chatelleraute for four days in order to determine where exactly the French king and his army were. After that, it was in fact Edward who was pursuing the French, rather than the other way around. After that, the campaign was a messy series of maneuvers and skirmishes as both armies attempted to locate the other. Edward was clearly intending on having a battle, but it would be on his terms and on terrain carefully chosen to protect his men from the French attack. \n\nThe Poitiers campaign is (in terms of strategy) not very different from the Crecy campaign, or the Black Prince's 1355 *chevauchée* (although in that case, the French refused to come out to fight in the field). It is a common misconception that the English engaged in *chevauchées* in order to avoid confrontations with larger French armies. Most sources point to the exact opposite scenario: the English actively desired open combat in the field, while the French only attacked marauding English armies when they felt they had no other choice. The English knew that their armies were highly effective in the field, and frequently won battles against numerically superior foes. The prospect of engaging larger forces was not a cause for much fear, provided that the English were able to maneuver so that they 1)fought the battle on favorable terrain and 2)prevented multiple French forces from enveloping them. What has often been interpreted in the past as English armies being chased by French armies is more often English armies attempting to maneuver in order to pick the best location for open combat. 
For more on English military strategy in the opening stages of the Hundred Years War, I highly recommend Clifford Rogers' *The Wars of Edward III: Sources and Interpretations*.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "37393071", "title": "Treaty of London (1358)", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 519, "text": "Edward III of England's son, Edward the Black Prince, invaded France from English held Gascony in 1356, winning a victory at the Battle of Poitiers. During the battle, the Gascon noble Jean III de Grailly, captal de Buch, captured the French king, John II, and many of his nobles. At the instigation of the pope, negotiations were opened and resulted in a truce being declared on 13 March 1357. The Black Prince brought John to London where negotiations were reopened and the First treaty of London signed in May 1358.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "58917", "title": "Edward the Black Prince", "section": "Section::::Aquitaine campaign.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 2011, "text": "When Edward III determined to renew the war with France in 1355, he ordered the Black Prince to lead an army into Aquitaine while he, as his plan was, acted with the king of Navarre in Normandy, and the Duke of Lancaster upheld the cause of John of Montfort in Brittany. The prince's expedition was made in accordance with the request of some of the Gascon lords who were anxious for plunder. On 10 July the king appointed him his lieutenant in Gascony, and gave him powers to act in his stead, and, on 4 August, to receive homages. 
He left London for Plymouth on 30 June, was detained there by contrary winds, and set sail on 8 September with about three hundred ships, in company with four earls (Thomas Beauchamp, Earl of Warwick, William Ufford, Earl of Suffolk, William Montagu, Earl of Salisbury, and John Vere, Earl of Oxford), and in command of a thousand men-at-arms, two thousand archers, and a large body of Welsh foot. At Bordeaux the Gascon lords received him with much rejoicing. It was decided to make a short campaign before the winter, and on 10 October he set out with fifteen hundred lances, two thousand archers, and three thousand light foot. Whatever scheme of operations the King may have formed during the summer, this expedition of the Prince was purely a piece of marauding. After grievously harrying the counties of Juliac, Armagnac, Astarac, and part of Comminges, he crossed the Garonne at Sainte-Marie a little above Toulouse, which was occupied by John I, Count of Armagnac and a considerable force. The count refused to allow the garrison to make a sally, and the prince passed on, stormed and burnt Mont Giscar, where many men, women, and children were ill-treated and slain, and took and pillaged Avignonet and Castelnaudary. All the country was rich, and the people \"good, simple, and ignorant of war\", so the prince took great spoil, especially of carpets, draperies, and jewels, for \"the robbers\" spared nothing, and the Gascons who marched with him were especially greedy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1662976", "title": "House of Plantagenet", "section": "Section::::Main line.:Conflict with the House of Valois.\n", "start_paragraph_id": 86, "start_character": 0, "end_paragraph_id": 86, "end_character": 1231, "text": "Edward, the Black Prince resumed the war with destructive chevauchées starting from Bordeaux. 
His army was caught by a much larger French force at Poitiers, but the ensuing battle was a decisive English victory resulting in the capture of John II of France. John agreed a treaty promising the French would pay a four million écus ransom. The subsequent Treaty of Brétigny was demonstrably popular in England, where it was both ratified in parliament and celebrated with great ceremony. To reach agreement, clauses were removed that would have had Edward renounce his claim to the French crown in return for territory in Aquitaine and the town of Calais. These were entered in another agreement to be effected only after the transfer of territory by November 1361 but both sides prevaricated over their commitments for the following nine years. Hostages from the Valois family were held in London while John returned to France to raise his ransom. Edward had restored the lands of the former Angevin Empire holding Normandy, Brittany, Anjou, Maine and the coastline from Flanders to Spain. When the hostages escaped back to France, John was horrified that his word had been broken and returned to England, where he eventually died.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "58917", "title": "Edward the Black Prince", "section": "Section::::War in Aquitaine.\n", "start_paragraph_id": 73, "start_character": 0, "end_paragraph_id": 73, "end_character": 570, "text": "The Black Prince had already warned his father of the intentions of the French king, but there was evidently a party at Edward's court that was jealous of his power, and his warnings were slighted. In April 1369, however, war was declared. Edward sent the Earls of Cambridge and Pembroke to his assistance, and Sir Robert Knolles, who now again took service with, him, added much to his strength. 
The war in Aquitaine was desultory, and, though the English maintained their ground fairly in the field, every day that it was prolonged weakened their hold on the country.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2520343", "title": "Jean III de Grailly", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 494, "text": "Attached to the English side in the conflict, he was made Count of Bigorre by Edward III of England, and was also a founder and the fourth Knight of the Garter in 1348. He played a decisive role as a cavalry leader under Edward, the Black Prince in the Battle of Poitiers (1356), with de Buch leading a flanking move against the French that resulted in the capture of the king of France (John II), as well as many of his nobles. John was taken to London by the Black Prince and held to ransom.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "59365523", "title": "Black Prince's chevauchée of 1356", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 492, "text": "The Black Prince's \"chevauchée\" of 1356, which began on 4 August at Bordeaux and ended with the Battle of Poitiers on 19 September, was a devastating raid of Edward of Woodstock, Prince of Wales (known as the Black Prince), the eldest son of King Edward III of England. This expedition of the Black Prince devastated large parts of Bergerac, Périgord, Nontronnais, Confolentais, Nord-Ouest, Limousin, La Marche, Boischaut, Champagne Berrichonne, Berry, Sologne, south of Touraine and Poitou.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1547153", "title": "Hundred Years' War (1337–1360)", "section": "Section::::Collapse of the French government (1351–1360).\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 1002, "text": "In August 1356 the Black Prince was threatened by a larger army under John II. 
The English attempted to retreat but their way was blocked at Poitiers. The Black Prince tried to negotiate terms with the French, but John's army attacked the English on 19 September 1356. The English archers were able to bring down the first three assaults of the French cavalry. At the point when the English archers were running out of arrows and many were wounded or exhausted, the French king deployed his reserves, an elite force of men. It seemed that the French would win the day, however, the Gascon noble Captal de Buch managed to thwart the French by leading a flanking movement, with a small group of men, that succeeded in capturing John II, and many of his nobles. John signed a truce with Edward III, and in his absence much of the government began to collapse. John's ransom was set to two million, but John believed he was worth more than that and insisted that his ransom be raised to four million écus.\n", "bleu_score": null, "meta": null } ] } ]
null
ztcu2
How common was it to be executed for being a "witch" around the time of the Salem Witch Trials?
[ { "answer": "By the time of the Salem trials the witch craze was already well into its decline. It was an anomalous outburst for the time. If you want to know about the witch craze in general I can talk a bit about that, though I only really know about the European trials, and only English ones in any detail.\n\nDifferent parts of Europe experienced the witch craze differently, and to different degrees. In the mainland (Germany, France, etc), the witch hunts were an institutionalised phenomenon presided over by the Church and enforced by the local authorities, from the top down. Witchcraft was considered a form of heresy, and so the Inquisition was granted the power to carry out its own investigations. Continental witches were believed to be part of an organised cult in league with the devil as part of some grand diabolical conspiracy. They were tortured for confessions and forced to name names, tried in ecclesiastical courts as heretics and burned alive. \n\nEngland's treatment of witchcraft was quite unique. For a start, the Catholic Church had no authority there, which meant no Inquisition. Accusations arose from within local communities, rather than from above. With the exception of Matthew Hopkins (the self-styled 'Witch Finder General'), there was never any attempt by the authorities to incite a witch hunt. Witchcraft was not seen as a heresy, but rather as an extreme form of public disorder. There was no conspiracy, no 'witch cult', no witch's Sabbath, no flying and rarely any references to a diabolic pact. In fact English witches had a pretty boring time, though unlike continental witches they did get to have [familiars](_URL_0_), which I guess is kinda cool. English witches were usually tried by jury in secular, common law courts just like any other felon, and the guilty were hanged rather than burned.\n\nIn England, accusations of witchcraft tended to follow a basic narrative. 
Typically they would begin with an old woman going door to door begging for alms, and then being turned away by a disgruntled neighbour. Some tragedy would inevitably befall the neighbour, who would then accuse the old woman of witchcraft. It's amazing how many pamphlets from the time describe cases that follow this exact pattern. So how easy was it to accuse somebody? Quite easy. But it wasn't unheard of to be found innocent. In fact, it was relatively common. Some courts were naturally reluctant to prosecute something so unprovable, and if you could find enough people to vouch in your favour, you could be let off automatically.\nTrials on the continent were significantly scarier due to the use of torture and the large amount of authority given to individual judges/inquisitors to prosecute witches.\n\nThe total number of executions is usually estimated to be around 40 to 50,000, but higher estimates do exist.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "259421", "title": "Topsfield, Massachusetts", "section": "Section::::History.:Colonial period.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 1640, "text": "The Salem witch trials of 1692 touched Topsfield directly. Belief in witches was normal in the seventeenth century. People were accused of witchcraft in Europe and the colonies during this time, but executions were relatively rare in the colonies. Historians conclude that only fifteen people were executed as witches in the American colonies before 1692. In that year alone, however, over 160 people, mostly from Essex County, Massachusetts, were accused of witchcraft. Of these, nineteen were hanged and one was pressed to death for refusing to plead. In July 1692, Rebecca Nurse of Salem Village (then part of the town of Salem, now part of present-day Danvers) was hanged at Gallows Hill in Salem. She was the daughter of William Towne of Topsfield. 
Young Salem Village girls allegedly possessed by the devil – the source of Rebecca Nurse's witchcraft accusation and most others – also named as witches Rebecca's Topsfield sisters, Sarah Cloyce and Mary Esty; while Sarah was eventually set free, Mary was hanged in September. Sarah Wildes and Elizabeth Howe from Topsfield were hanged along with Rebecca Nurse. Many other Topsfield residents were accused of witchcraft until the hysteria ended in May 1693, when the governor of Massachusetts set free all of the remaining persons accused of witchcraft and issued a proclamation of general pardon. While the causes of the 1692 witchcraft episode continue to be the subject of historical and sociological study, there is a consensus view that land disputes and perhaps economic rivalry among factions in Salem, Salem Village and Topsfield fueled animosity and played an underlying role.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13215988", "title": "List of people of the Salem witch trials", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 273, "text": "This is a list of people associated with the Salem Witch Trials, a series of hearings and prosecutions of people accused of witchcraft in colonial Massachusetts between February 1692 and May 1693. The trials resulted in the executions of twenty people, most of them women.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1384343", "title": "Bridget Bishop", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 225, "text": "Bridget Bishop (c. 1632 – 10 June 1692) was the first person executed for witchcraft during the Salem witch trials in 1692. 
Altogether, about 200 people were tried, and 18 others were executed (19 total: 14 women and 5 men).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "205246", "title": "Salem witch trials", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 455, "text": "The Salem witch trials were a series of hearings and prosecutions of people accused of witchcraft in colonial Massachusetts between February 1692 and May 1693. More than 200 people were accused, 19 of whom were found guilty and executed by hanging (14 women and 5 men). One other man, Giles Corey, was crushed to death for refusing to plead, and at least five people died in jail. It was the deadliest witch hunt in the history of colonial North America.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24091", "title": "Puritans", "section": "Section::::Beliefs.:Demonology and witch hunts.\n", "start_paragraph_id": 60, "start_character": 0, "end_paragraph_id": 60, "end_character": 460, "text": "The Salem witch trials of 1692 had a lasting impact on the historical reputation of New England Puritans. Though this witch hunt occurred after Puritans lost political control of the Massachusetts colony, Puritans instigated the judicial proceedings against the accused and comprised the members of the court that convicted and sentenced the accused. By the time Governor William Phips ended the trials, fourteen women and five men had been hanged as witches.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4035", "title": "Black", "section": "Section::::Art.:Modern.:16th and 17th centuries.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 366, "text": "Witch trials were common in both Europe and America during this period. 
During the notorious Salem witch trials in New England in 1692–93, one of those on trial was accused of being able to turn into a \"black thing with a blue cap,\" and others of having familiars in the form of a black dog, a black cat and a black bird. Nineteen women and men were hanged as witches.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21898297", "title": "History of Christianity in the United States", "section": "Section::::Early Colonial era.:British colonies.:New England.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 691, "text": "The Salem witch trials were a series of hearings before local magistrates followed by county court trials to prosecute people accused of witchcraft in Essex, Suffolk and Middlesex counties of colonial Massachusetts, between February 1692 and May 1693. Over 150 people were arrested and imprisoned, with even more accused but not formally pursued by the authorities. The two courts convicted twenty-nine people of the capital felony of witchcraft. Nineteen of the accused, fourteen women and five men, were hanged. One man (Giles Corey) who refused to enter a plea was crushed to death under heavy stones in an attempt to force him to do so. At least five more of the accused died in prison.\n", "bleu_score": null, "meta": null } ] } ]
null
26rwh1
Has the water released by combusting hydrocarbons had any effect on the environment?
[ { "answer": "Absolutely! Water vapour, like CO2, is a greenhouse gas. Water vapour released by combustion--but this isn't the sole source for water vapour, of course--helps create a sort of positive feedback loop in the atmosphere, increasing the amount of warming experienced through climate change. \n\nGenerally, water vapour tends to double the amount of warming; if we were to experience an increase of 1°C from CO2 alone, water vapour would make it more like 2°C. \n\nSo water vapour is an extremely powerful greenhouse gas. However, and this is the most important point, it's also a fairly short-lived greenhouse gas. Whereas CO2 can stay in the atmosphere for a very long time, water vapour condenses into clouds and returns to the water table in fairly short order. Water vapour might only stay in the atmosphere for a few weeks, whereas CO2 can last centuries. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "972800", "title": "Abyssal plain", "section": "Section::::Exploitation of resources.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 487, "text": "Hydrocarbon exploration in deep water occasionally results in significant environmental degradation resulting mainly from accumulation of contaminated drill cuttings, but also from oil spills. 
While the oil gusher involved in the Deepwater Horizon oil spill in the Gulf of Mexico originates from a wellhead only 1500 meters below the ocean surface, it nevertheless illustrates the kind of environmental disaster that can result from mishaps related to offshore drilling for oil and gas.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "994887", "title": "Petroleum industry", "section": "Section::::Environmental impact.:Water pollution.\n", "start_paragraph_id": 71, "start_character": 0, "end_paragraph_id": 71, "end_character": 841, "text": "Some petroleum industry operations have been responsible for water pollution through by-products of refining and oil spills. Though hydraulic fracturing has significantly increased natural gas extraction, there is some belief and evidence to support that consumable water has seen increased in methane contamination due to this gas extraction. Leaks from underground tanks and abandoned refineries may also contaminate groundwater in surrounding areas. Hydrocarbons that comprise refined petroleum are resistant to biodegradation and have been found to remain present in contaminated soils for years. To hasten this process, bioremediation of petroleum hydrocarbon pollutants is often employed by means of aerobic degradation. More recently, other bioremediative methods have been explored such as phytoremediation and thermal remediation. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "646478", "title": "Abiogenic petroleum origin", "section": "Section::::Empirical evidence.:Lost City hydrothermal vent field.\n", "start_paragraph_id": 104, "start_character": 0, "end_paragraph_id": 104, "end_character": 461, "text": "The Lost City hydrothermal field was determined to have abiogenic hydrocarbon production. Proskurowski et al. wrote, \"Radiocarbon evidence rules out seawater bicarbonate as the carbon source for FTT reactions, suggesting that a mantle-derived inorganic carbon source is leached from the host rocks. 
Our findings illustrate that the abiotic synthesis of hydrocarbons in nature may occur in the presence of ultramafic rocks, water, and moderate amounts of heat.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13257", "title": "Hydrocarbon", "section": "Section::::Environmental impact.\n", "start_paragraph_id": 54, "start_character": 0, "end_paragraph_id": 54, "end_character": 348, "text": "Hydrocarbons are introduced into the environment through their extensive use as fuels and chemicals as well as through leaks or accidental spills during exploration, production, refining, or transport. Anthropogenic hydrocarbon contamination of soil is a serious global issue due to contaminant persistence and the negative impact on human health.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35494376", "title": "Environmental impact of hydraulic fracturing", "section": "Section::::Water contamination.:Methane.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 1035, "text": "Groundwater methane contamination has adverse effect on water quality and in extreme cases may lead to potential explosion. A scientific study conducted by researchers of Duke University found high correlations of gas well drilling activities, including hydraulic fracturing, and methane pollution of the drinking water. According to the 2011 study of the MIT Energy Initiative, \"there is evidence of natural gas (methane) migration into freshwater zones in some areas, most likely as a result of substandard well completion practices i.e. poor quality cementing job or bad casing, by a few operators.\" A 2013 Duke study suggested that either faulty construction (defective cement seals in the upper part of wells, and faulty steel linings within deeper layers) combined with a peculiarity of local geology may be allowing methane to seep into waters; the latter cause may also release injected fluids to the aquifer. 
Abandoned gas and oil wells also provide conduits to the surface in areas like Pennsylvania, where these are common.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "354588", "title": "Hydrothermal vent", "section": "Section::::Biological theories.:\"The Deep Hot Biosphere\".\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 348, "text": "An article on \"abiogenic hydrocarbon production\" in the February 2008 issue of Science journal used data from experiments at the Lost City hydrothermal field to report how the abiotic synthesis of low molecular mass hydrocarbons from mantle derived carbon dioxide may occur in the presence of ultramafic rocks, water, and moderate amounts of heat.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44537900", "title": "Water associated fraction", "section": "Section::::Toxicity.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 451, "text": "Low molecular mass compounds account for much of the toxic nature of hydrocarbon spills. In particular, benzene, toluene, ethyl benzene and the xylenes (BTEX) are of great environmental interest due to their availability to organisms. This availability, also influenced by volatility and reactivity, impacts on biodegradation and bioremediation in water and soil environments, with even dissolved components within pore water considered bioavailable.\n", "bleu_score": null, "meta": null } ] } ]
null
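The rough "doubling" figure in the water-vapour answer above can be framed as a simple linear-feedback calculation. This is only an illustrative sketch of the arithmetic; the function name and the feedback fraction of 0.5 are assumptions chosen for the example, not values taken from the answer or the provenance texts:

```python
def equilibrium_warming(direct_warming, feedback_fraction):
    """Total warming when a linear feedback amplifies the direct effect.

    With a feedback fraction f (0 <= f < 1), each degree of warming adds
    f more degrees, and the resulting geometric series sums to 1 / (1 - f)
    times the direct warming.
    """
    return direct_warming / (1.0 - feedback_fraction)

# A feedback fraction of 0.5 turns 1 degree C of direct CO2 warming into
# roughly 2 degrees C overall -- the "doubling" described in the answer.
print(equilibrium_warming(1.0, 0.5))  # 2.0
```

The same formula shows why the feedback amplifies but does not run away as long as the feedback fraction stays below 1.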
67tp73
If an electric motor is supplied power but restricted in turning (like holding back a ceiling fan) what is happening which would cause it to 'burn up'?
[ { "answer": "If it has separate windings with a commutator bar, all that current is only going through one winding instead of all the windings taking turns. Also, if a motor is turning, it also acts as a generator from the spinning motion, which produces a back-voltage that limits the current.", "provenance": null }, { "answer": "The electrical current will continue to increase trying to force the motor to move, and you will hopefully blow a fuse or pop a breaker before burning up the motor. What is actually happening is that the spools of wire inside the motor, although they look like bare wire, are insulated, and too much current causes the wires to heat up so the insulation melts and shorts the wires together. The smell of a bad motor is the plastic insulation having melted and one or more of the \"legs\" having grounded out.", "provenance": null }, { "answer": "When a motor is turning, that rotation generates a voltage, a 'back EMF', that acts against the flow of current. It is this voltage, not the resistance of the coils, that restricts the amount of power the motor draws. And as this is an impedance, it doesn't generate heat. The power - the current in the motor pushing against this voltage - is what turns the motor.\n\nWhen the rotor is locked, there is no back EMF to impede the flow of current through the motor. All the electricity flowing through the motor is converted to heat by the resistance of the windings. This quickly overheats the wiring, melting insulation, creating shorts, reducing the resistance and further increasing the current, until some wiring melts and blows.", "provenance": null }, { "answer": "Typically a circuit breaker triggers when the current gets too high (no turning means no back EMF, as others have stated, so too much current). This is only a temporary cut-off, so the switch goes back to on after a short time and the motor starts again, leading to a stuttering. 
\n\nIf this safety measure is not included, then the fan can overheat and burn up. ", "provenance": null }, { "answer": "Along with the lack of counter EMF allowing more current, which many people have mentioned, motors typically also have a small fan to cool the motor components. This is usually a problem when you use a VFD (a variable-frequency drive, which varies the frequency of power going to the motor to speed it up or slow it down) on a motor not rated for that use.\n\nSo basically more amps and less cooling. ", "provenance": null }, { "answer": "[This](_URL_0_) is a good description of a typical motor model.\n\nIf you look at the diagram, you see the armature circuit. That's what you connect your power to. It has a resistance, an inductance, and a back EMF. The back EMF is proportional to the rotor speed. The resistance is typically small. The motor I have here on my bench measures around 4 Ohms. \n\nIf you put in a constant voltage, you ignore the inductor part. If you hold the rotor steady, you ignore the back EMF part. All that's left is the armature resistance; input current is V/Ra. \n\nThis can be a problem because 1) 100% of that energy is dissipated in the winding, when some of it was supposed to be put out as mechanical energy, vibration, noise, rotor friction, etc. That's more than the armature is typically built to handle. And 2) it is, by definition, the maximum current the motor will draw. You're maximizing power in and minimizing efficiency, meaning the power goes to things it wasn't supposed to. ", "provenance": null }, { "answer": "The reason this happens is that the rotor is locked and there is no rotation to induce counter electromotive force (EMF) in the motor. I see others have called it \"back EMF,\" but it is the same thing.\n\nTo understand counter EMF, think of Newton's Third Law. For every action, there is an equal and opposite reaction. When a rotor rotates, it induces an electromagnetic force opposite to the direction of rotation. 
In an ideal situation the two forces balance out when there is a constant load and that is how you get your running current. Starting current is much different. As others have pointed out, before the rotor begins to turn and generate counter EMF the stator (armature) windings are essentially a short circuit. Those windings are not rated to handle that amount of current outside of starting surges. Because of a phenomenon called copper loss, all conductors generate heat as current flows through them. The amount of heat generated is proportional to the square of the current and calculated as P=I^2 * R. As you can see, the power dissipated (heat) grows quadratically with current. With nothing to limit the current flow in a locked rotor it is only a matter of time before any stator (armature) windings heat to the point of failure. If you are curious, there is a document that explains a lot of these principles. The \"Applied Engineering Principles Manual\" can be found at _URL_0_ and you would want to start reading at around page 23 or so.", "provenance": null }, { "answer": "The coils are actually not that sturdy. If you took them out of the motor, and applied the regular voltage, they would go up in a puff of smoke.\n\nIn the motor, while it's turning, the interaction between coils and magnets generates an opposition (an opposed current) to the free flow of current through the coils. That's what keeps them from burning up.\n\nAnother way to look at it: when it's spinning, some of the power drawn from the source goes into heating the coils, and some goes into spinning the motor (well, ideally almost all power should go into spinning the motor). If it's not spinning, then ALL power goes into heating the coils - of course they go POOF.", "provenance": null }, { "answer": "This is also what happens to an acoustic speaker when it fails... 
A speaker driver is basically a motor that operates by the same Faraday law.\n\nThe voice coil heats up to a very high temperature and that causes the voice coil-former glue to melt and the speaker driver fails. ", "provenance": null }, { "answer": "The windings of a motor are inductors. When the motor is spinning, the voltage being applied to the inductors is constantly changing directions. When inductors are presented with a high frequency signal, they store energy in the form of a magnetic field, and then subsequently release it back into the circuit. During this process, since the energy in is equal to the energy out, the inductor produces very little heat. When you force the motor to stop turning, the voltage stops changing direction. When a constant voltage is applied to an inductor, it acts like a wire. This wire is forced to dissipate all of the energy entering the motor as heat. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "4739163", "title": "Standby power", "section": "Section::::Magnitude.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 373, "text": "\"Many appliances continue to draw a small amount of power when they are switched off. These \"phantom\" loads occur in most appliances that use electricity, such as VCRs, televisions, stereos, computers, and kitchen appliances. This can be avoided by unplugging the appliance or using a power strip and using the switch on the power strip to cut all power to the appliance.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20133356", "title": "Nuclear reactor safety system", "section": "Section::::Emergency electrical systems.:Motor generator flywheels.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 366, "text": "Loss of electrical power can occur suddenly and can damage or undermine equipment. 
To prevent damage, motor-generators can be tied to flywheels that can provide uninterrupted electrical power to equipment for a brief period. Often they are used to provide electrical power until the plant electrical supply can be switched to the batteries and/or diesel generators.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10457720", "title": "Engine-generator", "section": "Section::::Safety.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 338, "text": "Additionally, it is important to prevent backfeeding when using a portable engine generator, which can harm utility workers or people in other buildings. Before turning on a diesel- or gasoline-powered generator, users should make sure that the main breaker is in the \"off\" position, to ensure that the electric current does not reverse.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4739163", "title": "Standby power", "section": "Section::::Reducing standby consumption.:Operating practices.\n", "start_paragraph_id": 80, "start_character": 0, "end_paragraph_id": 80, "end_character": 401, "text": "Standby power consumption can be reduced by unplugging or totally switching off, if possible, devices with a standby mode not currently in use; if several devices are used together or only when a room is occupied, they can be connected to a single power strip that is switched off when not needed. This may cause some electronic devices, particularly older ones, to lose their configuration settings.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25732867", "title": "Dangerous restart", "section": "Section::::No-volt release.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 303, "text": "The motor controllers for large electric motors normally incorporate a type of circuit breaker known as a \"no-volt release\". 
If the power fails, the circuit breaker opens and the motor will not restart when the power is restored. The circuit breaker must be reset before the motor can be started again.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11917751", "title": "Brownout (electricity)", "section": "Section::::Effects.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 282, "text": "An induction motor will draw more current to compensate for the decreased voltage, which may lead to overheating and burnout. If a substantial part of a grid's load is electric motors, reducing voltage may not actually reduce load and can result in damage to customers' equipment. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4211", "title": "Bootstrapping", "section": "Section::::Applications.:Electric power grid.\n", "start_paragraph_id": 52, "start_character": 0, "end_paragraph_id": 52, "end_character": 330, "text": "An electric power grid is almost never brought down intentionally. Generators and power stations are started and shut down as necessary. A typical power station requires power for start up prior to being able to generate power. This power is obtained from the grid, so if the entire grid is down these stations cannot be started.\n", "bleu_score": null, "meta": null } ] } ]
null
3ysjni
why don't stars appear red instead of white? It is said that only red colour persists when light travels a long distance!
[ { "answer": "Some stars, such as Betelgeuse (in the corner of Orion), are noticeably red. Red stars are usually either dwarfs or giants, but we can only see the giants without a telescope.\n\nHowever, the process of emitted light reddening over long distances (either due to absorption by dust or the expansion of the universe) occurs over distances much, much larger than the distances to visible stars in our galaxy.", "provenance": null }, { "answer": "Red travels a longer distance *through an atmosphere* because the other colors are scattered in other directions more readily. This is why the sun looks red at sunset – the red light from the sun can travel through greater distances of atmosphere while the other colors are scattered. This is not the case when light travels through space: there isn't any atmosphere for the light to be scattered off of, so regardless of the distance the star is from us we'll basically see its true color.", "provenance": null }, { "answer": "That 'appearance of the stars' is a matter of how your eyes work.\nAt low light levels we don't perceive color. We just 'see' a source of light as 'white'.\n\nOur vision in low light depends on color-blind receptors (rod type) in our retinas. The color-sensitive ones (cone type) only function at higher illumination levels.\n\nThe redshift referred to by other posters does exist, but it applies to only minuscule shifts in any starlight you are likely to see with the naked eye.\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1231797", "title": "Extinction (astronomy)", "section": "Section::::General characteristics.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 272, "text": "Interstellar reddening occurs because interstellar dust absorbs and scatters blue light waves more than red light waves, making stars appear redder than they are. 
This is similar to the effect seen when dust particles in the atmosphere of Earth contribute to red sunsets.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "42210873", "title": "Green star (astronomy)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 548, "text": "In astronomy, a green star is a white or blue star that appears green due to an optical illusion. There are no truly green stars, because the color of a star is more or less given by a black-body spectrum and this never looks green. However, there are a few stars that appear green to some observers. This is usually because of the optical illusion that a red object can make nearby objects look greenish. There are some multiple star systems, such as Antares, with a bright red star where this illusion makes other stars in the system look green.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34599886", "title": "Aurora Max", "section": "Section::::The northern lights.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 445, "text": "In the upper atmosphere between 300-400 kilometers, red instead of green is caused by the collision with atomic oxygen. It takes longer for the lights to be produced at higher altitudes because the atmosphere is less dense so it takes more energy and more time for the red lights to be produced. 
Blue and purple lights are also produced by the collision with hydrogen and helium but they are difficult for the eyes to see against the night sky.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28927", "title": "Stellar classification", "section": "Section::::Modern classification.:Harvard spectral classification.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 659, "text": "Conventional color descriptions are traditional in astronomy, and represent colors relative to the mean color of an A class star, which is considered to be white. The apparent color descriptions are what the observer would see if trying to describe the stars under a dark sky without aid to the eye, or with binoculars. However, most stars in the sky, except the brightest ones, appear white or bluish white to the unaided eye because they are too dim for color vision to work. Red supergiants are cooler and redder than dwarfs of the same spectral type, and stars with particular spectral features such as carbon stars may be far redder than any black body.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "408026", "title": "Relativistic Doppler effect", "section": "Section::::Visualization.\n", "start_paragraph_id": 57, "start_character": 0, "end_paragraph_id": 57, "end_character": 567, "text": "Real stars are not monochromatic, but emit a range of wavelengths approximating a black body distribution. It is not necessarily true that stars ahead of the observer would show a bluer color. This is because the whole spectral energy distribution is shifted. At the same time that visible light is blueshifted into invisible ultraviolet wavelengths, infrared light is blueshifted into the visible range. 
Precisely what changes in the colors one sees depends on the physiology of the human eye and on the spectral characteristics of the light sources being observed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "181887", "title": "Night vision", "section": "Section::::Biological night vision.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 251, "text": "Another theory posits that since stars typically emit light with shorter wavelengths, the light from stars will be in the blue-green color spectrum. Therefore, using red light to navigate would not desensitize the receptors used to detect star light.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "42210873", "title": "Green star (astronomy)", "section": "Section::::Objects that resemble green stars.:Multiple stars.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 275, "text": "There a few stars in double or multiple star systems that appear greenish, even though they are really blue or white. This can happen if the star system contains a large red or orange star. An optical illusion causes things close to the red star to appear slightly greenish.\n", "bleu_score": null, "meta": null } ] } ]
null
3sadom
To what extent did Alexander the Great/Hellenism pave the way for Christianity?
[ { "answer": "This is a pretty complicated question and, largely, is based on an interpretation of the histories of Alexander that is now considered untenable. That, however, doesn’t mean that there isn’t some truth to the claim. I don’t have the time to find my sources right now, so this is going to be a bit informal.\n\nWhy it’s wrong:\n\nEarly modern Alexander scholarship was dominated by some British guy (sorry I don’t remember his name) who saw Alexander as basically a pre-Jesus. After Alexander finished with his conquests, he started taking on Persian customs and airs. The scholar saw this as proof of Alexander being an equalizing force: by elevating conquered Persians to the level of their Greek counterparts, Alexander was declaring the equality of man. Christianity, which is universal in its acceptance, spread easily because of the progressive framework wrought by Alexander’s liberal policies.\n\nEnter Badian. Basically the old British dude was the premier Alexander scholar until Badian showed up, slapped him around, and took his throne. Alexander’s ‘persianization’, Badian said, was not an attempt at syncretism but rather an attempt to fully adopt the Persian system. Alexander’s position on top of the Greek world was rather tenuous: Athens and Sparta were constantly angling for independence, and Alexander’s supremacy was only guaranteed through military might. In contrast, the Persian monarchal system was steeped in tradition and (largely) unwavering loyalty to its ruler. Furthermore, in terms of bureaucracy, the Persian system was significantly more advanced and efficient. Think feudal lords paying tribute (Greece) compared to a much more modern taxation and levy system (Persia). It was therefore in Alexander’s best interests to position himself as a successor to Darius III rather than as a foreign invader. 
Badian also did a lot of detailed textual analysis (both scholars based their conclusions largely on the works of Plutarch and Arrian) that made the British dude’s claims look silly.\n\nWhy it’s right:\n\nOk so this concept was created by the wishful thinking of some British historian who wanted to see Jesus everywhere he looked. It is not, however, completely without merit. Christian thought was not created from a vacuum but rather, in many ways, can be seen as an extension of earlier Greek thought: For instance, despite Augustine’s professed rejection of Greek thinkers, many of his arguments rest on distinctly Greek premises. The similarity can be seen quite easily: take a Greek concept like Plato’s allegory of the cave and replace the conclusion with Jesus/God’s love etc. and you get a rough approximation of Christianity. Aristotle can be such a boring read because (in the Judeo-Christian tradition) we have accepted his ideas so completely that they seem utterly dull and obvious. Furthermore, in many Christian traditions there’s some sort of intercession (usually by Mary) to rescue saintly sinners from hell. These sinners are people in hell who were not saved by virtue of being born before Jesus and are invariably composed of famous Greek thinkers.\n\nBasically, the idea is that the universal acceptance of Hellenic ideals allowed for an acceptance of a theology that is created in a Greek framework. Furthermore, the lack of strict language/cultural/political boundaries allowed for dissemination of ideas across vast tracts of land. This created the perfect infrastructure for the spread of Christianity; which is already a pretty liberating and attractive ideology if you (as most non-elite people during this era were) are living in the shit. 
\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "14599360", "title": "Alexander (Ephesian)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 359, "text": "Alexander (fl. 50–65) was a Christian heretical teacher in Ephesus. Hymenaeus and Alexander were proponents of antinomianism, the belief that Christian morality was not required. They put away—\"thrust from them\"—faith and a good conscience; they wilfully abandoned the great central facts regarding Christ, and so they \"made shipwreck concerning the faith.\" \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4443158", "title": "Alexander Sauli", "section": "Section::::Religious formation.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 475, "text": "At the age of seventeen, Alexander asked to be admitted to the Congregation of the Barnabites. At this time the Congregation was experiencing the precariousness of its beginnings, extreme poverty, and harsh trials. It had been expelled from the Republic of Venice only two months before. 
The Fathers advised Alexander to consider other religious congregations, such as the Dominicans, the Franciscans, and the Capuchins, rich with members outstanding in holiness and wisdom.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23787", "title": "Pope Alexander IV", "section": "Section::::Biography.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 313, "text": "Alexander's pontificate was signaled by efforts to reunite the Eastern Orthodox churches with the Catholic Church, by the establishment of the Inquisition in France, by favours shown to the mendicant orders, and by an attempt to organize a crusade against the Tatars after the second raid against Poland in 1259.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "314988", "title": "Wildeshausen", "section": "Section::::Attractions.:Alexander Church.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 753, "text": "The founding of the Alexander Church goes back to 814. In 807 Waltbert, a grandson of Duke Wittekind, brought the relics of the sainted Alexander from Rome by way of the Alpine mountains to Wildeshausen. Alexander died, as well as his mother and 6 brothers, as executed martyrs during the persecution of Christians in the first century. Waltbert donated a \"Chorherren Stift\" (a type of monastery, where the cleric lived to the rules of the Benedictines) named \"Alexander Kapitel\". It was to be used as a mission for the surrounding area (called Lerigau, or Largau). Wildeshausen became a place of pilgrimage, benefiting it economically. The Church and \"Stift\" owned treasures and were decorated with pictures. 
During renovation frescos were discovered.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2626987", "title": "Alexandrian school", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 669, "text": "The name of \"Alexandrian school\" is also used to describe the religious and philosophical developments in Alexandria after the 1st century. The mix of Jewish theology and Greek philosophy led to a syncretic mix and much mystical speculation. The Neoplatonists devoted themselves to examining the nature of the soul, and sought communion with God. The two great schools of biblical interpretation in the early Christian church incorporated Neoplatonism and philosophical beliefs from Plato's teachings into Christianity, and interpreted much of the Bible allegorically. The founders of the Alexandrian school of Christian theology were Clement of Alexandria and Origen.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "42056", "title": "Greeks", "section": "Section::::History.:Classical.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 804, "text": "In any case, Alexander's toppling of the Achaemenid Empire, after his victories at the battles of the Granicus, Issus and Gaugamela, and his advance as far as modern-day Pakistan and Tajikistan, provided an important outlet for Greek culture, via the creation of colonies and trade routes along the way. While the Alexandrian empire did not survive its creator's death intact, the cultural implications of the spread of Hellenism across much of the Middle East and Asia were to prove long lived as Greek became the \"lingua franca\", a position it retained even in Roman times. Many Greeks settled in Hellenistic cities like Alexandria, Antioch and Seleucia. 
Two thousand years later, there are still communities in Pakistan and Afghanistan, like the Kalash, who claim to be descended from Greek settlers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12471", "title": "Gnosticism", "section": "Section::::Origins.:Jewish Christian origins.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 348, "text": "Alexandria was of central importance for the birth of Gnosticism. The Christian \"ecclesia\" (i. e. congregation, church) was of Jewish–Christian origin, but also attracted Greek members, and various strand of thought were available, such as \"Judaic apocalypticism, speculation on divine wisdom, Greek philosophy, and Hellenistic mystery religions.\"\n", "bleu_score": null, "meta": null } ] } ]
null
2eyfpl
Why do some smells travel faster and propagate farther than others?
[ { "answer": "It has to do with the diffusion rate of the molecule. Smells are composed of molecules that bind to the receptors present in the nose; if the nose has a receptor for a certain molecule in the gaseous state, then the gas will \"smell.\" The potency of a smell depends on the concentration of the molecule in the atmosphere and on how fast those molecules can travel through the rest of the gas in the air. Because the smell molecules must travel through the atmosphere by bumping into other atmospheric molecules, those molecules of smaller mass and size tend to move more quickly and thus diffuse across a room with greater speed. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "58991", "title": "Fog", "section": "Section::::Sound propagation and acoustic effects.\n", "start_paragraph_id": 50, "start_character": 0, "end_paragraph_id": 50, "end_character": 234, "text": "Sound typically travels fastest and farthest through solids, then liquids, then gases such as the atmosphere. Sound is affected during fog conditions due to the small distances between water droplets, and air temperature differences.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18406", "title": "Luminiferous aether", "section": "Section::::Relative motion between the Earth and aether.:Negative aether-drift experiments.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 917, "text": "A simple example concerns the model on which aether was originally built: sound. The speed of propagation for mechanical waves, the speed of sound, is defined by the mechanical properties of the medium. Sound travels 4.3 times faster in water than in air. This explains why a person hearing an explosion underwater and quickly surfacing can hear it again as the slower travelling sound arrives through the air. 
Similarly, a traveller on an airliner can still carry on a conversation with another traveller because the sound of words is travelling along with the air inside the aircraft. This effect is basic to all Newtonian dynamics, which says that everything from sound to the trajectory of a thrown baseball should all remain the same in the aircraft flying (at least at a constant speed) as if still sitting on the ground. This is the basis of the Galilean transformation, and the concept of frame of reference.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36253964", "title": "Origin of speech", "section": "Section::::Origin of speech sounds.:Gestural theory.:Criticism.\n", "start_paragraph_id": 59, "start_character": 0, "end_paragraph_id": 59, "end_character": 397, "text": "Critics note that for mammals in general, sound turns out to be the best medium in which to encode information for transmission over distances at speed. Given the probability that this applied also to early humans, it's hard to see why they should have abandoned this efficient method in favour of more costly and cumbersome systems of visual gesturing — only to return to sound at a later stage.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10630682", "title": "Coffin corner (aerodynamics)", "section": "Section::::Aerodynamic basis.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 352, "text": "Air conducts sound at a certain speed, the \"speed of sound\". This becomes slower as the air becomes cooler. Because the temperature of the atmosphere generally decreases with altitude (until the tropopause), the speed of sound also decreases with altitude. 
(See the International Standard Atmosphere for more on temperature as a function of altitude.)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "147853", "title": "Speed of sound", "section": "Section::::Altitude variation and implications for atmospheric acoustics.\n", "start_paragraph_id": 45, "start_character": 0, "end_paragraph_id": 45, "end_character": 397, "text": "In the Earth's atmosphere, the chief factor affecting the speed of sound is the temperature. For a given ideal gas with constant heat capacity and composition, the speed of sound is dependent \"solely\" upon temperature; see Details below. In such an ideal case, the effects of decreased density and decreased pressure of altitude cancel each other out, save for the residual effect of temperature.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1676668", "title": "Bow wave", "section": "Section::::Description.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 306, "text": "A similar thing occurs when an airplane travels at the speed of sound. The overlapping wave crests disrupt the flow of air over and under the wings. Just as a boat can easily travel faster than the wave it produces, an airplane with sufficient power can travel faster than the speed of sound (supersonic).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "60560", "title": "Tetrapod", "section": "Section::::Anatomy and physiology.:Senses.:Olfaction.\n", "start_paragraph_id": 123, "start_character": 0, "end_paragraph_id": 123, "end_character": 313, "text": "The difference in density between air and water causes smells (certain chemical compounds detectable by chemoreceptors) to behave differently. An animal first venturing out onto land would have difficulty in locating such chemical signals if its sensory apparatus had evolved in the context of aquatic detection.\n", "bleu_score": null, "meta": null } ] } ]
null
qjegb
At what altitude does the sky cease to be blue?
[ { "answer": "I will answer your question with a question.\n\nAt what point does the red become blue in the following image?\n\n_URL_0_", "provenance": null }, { "answer": "Sky gets dark at around 30 kilometers. You can see stars during the day at these heights.", "provenance": null }, { "answer": "The atmosphere ends gradually -- it fades out. By tradition we often say it's over at about 100km, but that's arbitrary. The sky starts to look dark far below that point, and there is still a minuscule amount of gas up at the altitude of the International Space Station.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "4968799", "title": "Sky brightness", "section": "Section::::Twilight.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 449, "text": "When the sun has just set, the brightness of the sky decreases rapidly, thereby enabling us to see the airglow that is caused from such high altitudes that they are still fully sunlit until the sun drops more than about 12° below the horizon. 
During this time, yellow emissions from the sodium layer and red emissions from the 630 nm oxygen lines are dominant, and contribute to the purplish color sometimes seen during civil and nautical twilight.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4968799", "title": "Sky brightness", "section": "Section::::Twilight.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 208, "text": "After the sun has also set for these altitudes at the end of nautical twilight, the intensity of light emanating from earlier mentioned lines decreases, until the oxygen-green remains as the dominant source.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4035", "title": "Black", "section": "Section::::Science.:Astronomy.:Why the night sky and space are black – Olbers' paradox.\n", "start_paragraph_id": 87, "start_character": 0, "end_paragraph_id": 87, "end_character": 301, "text": "The daytime sky on Earth is blue because light from the Sun strikes molecules in Earth's atmosphere scattering light in all directions. Blue light is scattered more than other colors, and reaches the eye in greater quantities, making the daytime sky appear blue. This is known as Rayleigh scattering.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "195193", "title": "Sky", "section": "Section::::During the day.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 541, "text": "The sun is not the only object that may appear less blue in the atmosphere. Far away clouds or snowy mountaintops may appear yellowish. The effect is not very obvious on clear days but is very pronounced when clouds cover the line of sight, reducing the blue hue from scattered sunlight. 
At higher altitudes, the sky tends toward darker colors since scattering is reduced due to lower air density; an extreme example is the moon, where there is no atmosphere and no scattering, making the sky on the moon black even when the sun is visible.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18323093", "title": "Skyscape art", "section": "", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 389, "text": "The sky is really nothing more than the denser gaseous zone of the earth’s atmosphere. Sky can be depicted as many different colors, such as a pale blue or the lack of any color at all, such as the night sky, which has the appearance of blackness, albeit with a scattering of stars on a clear night. During the day, the sky is seen as a deep blue due to the sunlight reflected on the air.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "213238", "title": "Mercury-Atlas 6", "section": "Section::::Flight.:First orbit.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 599, "text": "Over the Indian Ocean, Glenn observed his first sunset from orbit. He described the moment of twilight as \"beautiful\". The sky in space was very black, he said, with a thin band of blue along the horizon. He said the sun set fast, but not as quickly as he had expected. For five or six minutes there was a slow reduction in light intensity. Brilliant orange and blue layers spread out 45 to 60 degrees on either side of the sun, tapering gradually toward the horizon. 
Clouds prevented him from seeing a mortar flare fired by the Indian Ocean tracking ship as part of a pilot observation experiment.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3428791", "title": "Blue hour", "section": "Section::::How and when it happens.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 789, "text": "When the sky is clear, the blue hour can be a colorful spectacle, with the indirect sunlight tinting the sky yellow, orange, red, and blue. This effect is caused by the relative diffusibility of shorter wavelengths (bluer rays) of visible light versus the longer wavelengths (redder rays). During the blue \"hour\", red light passes through space while blue light is scattered in the atmosphere, and thus reaches Earth's surface. Blue hour usually lasts about 20-30 minutes right after sunset and right before sunrise. For instance, if the sun sets at 6:30 p.m., blue hour would occur from 6:40 p.m. to 7 p.m.. If the sun were to rise at 7:30 a.m., blue hour would occur from 7 a.m. to 7:20 a.m.. Time of year, location, and air quality all have an impact on the exact timing of blue hour. \n", "bleu_score": null, "meta": null } ] } ]
null
1so2vn
Is there a lot of variance between people in how much energy they are able to extract from food they digest?
[ { "answer": "I'm not sure I can answer the question for _all_ aspects of our physiology...however, I will say that there is some fascinating microbiological evidence to suggest that the answer to this question is \"yes.\" \n\nStudies have shown that the bacteria that live in our gut (the **microbiome**) influence our likelihood of developing obesity, metabolic disorders and diabetes. We have already established that these organisms assist our absorption of nutrients. We're starting to understand that _variability_ in these organisms may lead to _variable_ absorption of nutrients from food among different people. Specifically, studies have shown that obesity is associated with a decrease in the diversity of the gut microbiome. That is, people who are overweight/obese have fewer different _types_ of microorganisms colonizing their intestine. It has been shown that the organisms that are winning out (_Firmicutes_) have an increased capacity to harvest nutrients.\n\nThere's additional research being done to investigate how these microscopic critters affect inflammatory processes going on in the intestine that might affect myriad metabolic disorders.\n\nThe gut microbiome is an exploding new field of research, and there's much that remains unanswered about its effect on health and illness. Regarding your question, however, it seems that bacteria may play a role. \n\n**Seminal Paper**: Turnbaugh, P.J., Ley, R.E., Mahowald, M.A., Magrini, V., Mardis, E.R. and Gordon, J.I. 
“An obesity-associated gut microbiome with increased capacity for energy harvest.” Nature, 2006.\n\nEDIT: In the spirit of procrastination from what I _should_ be working on, I added more details and cited one of the foundational studies.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "5391288", "title": "Bicycle performance", "section": "Section::::Energy efficiency.:Energy input.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 416, "text": "The energy input to the human body is in the form of food energy, usually quantified in kilocalories [kcal] or kiloJoules [kJ=kWs]. This can be related to a certain distance travelled and to body weight, giving units such as kJ/(km∙kg). The rate of food consumption, i.e. the amount consumed during a certain period of time, is the input power. This can be measured in kcal/day or in J/s = W (1000 kcal/d ~ 48.5 W).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "491955", "title": "Energy bar", "section": "Section::::Nutrition.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 243, "text": "Energy in food comes from three sources: fat, protein, and carbohydrates. A typical energy bar weighs between 45 and 80 g and is likely to supply about 200–300 Cal (840–1,300 kJ), 3–9 g of fat, 7–15 g of protein, and 20–40 g of carbohydrates.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "562788", "title": "Basal metabolic rate", "section": "Section::::Biochemistry.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 527, "text": "About 70% of a human's total energy expenditure is due to the basal life processes taking place in the organs of the body (see table). About 20% of one's energy expenditure comes from physical activity and another 10% from thermogenesis, or digestion of food (\"postprandial thermogenesis\"). 
All of these processes require an intake of oxygen along with coenzymes to provide energy for survival (usually from macronutrients like carbohydrates, fats, and proteins) and expel carbon dioxide, due to processing by the Krebs cycle.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26639763", "title": "Weight management", "section": "Section::::Key components of weight management.:Energy Balance.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 1114, "text": "The calories a person consumes come from both the foods and drinks they eat and drink. The calories a person expends come from their basal metabolic rate and their daily physical activity. When eating a healthy diet mainly composed of vegetables, lean meats, and fruits, the human body is very good at maintaining a neutral energy balance so that calories consumed do not substantially exceed calories expended in a given time period and vice versa. This energy balance is regulated by hormones like Leptin (suppresses), Ghrelin (stimulates), and Cholecystokinin (suppresses), which either suppress or stimulate appetite. This unconscious regulation of energy balance is one of the factors that make sustained weight loss very difficult for many people. That being said, consuming fewer calories than the number of calories expended each day is fundamental to weight loss in both the short and long term. If attempting to lose weight, the National Heart, Lung, and Blood Institute (NHLBI) recommends a slow and steady approach by eating 500 fewer calories than the number of calories burned or expended each day.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "820953", "title": "Malabsorption", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 711, "text": "Normally the human gastrointestinal tract digests and absorbs dietary nutrients with remarkable efficiency. 
A typical Western diet ingested by an adult in one day includes approximately 100 g of fat, 400 g of carbohydrate, 100 g of protein, 2 L of fluid, and the required sodium, potassium, chloride, calcium, vitamins, and other elements. Salivary, gastric, intestinal, hepatic, and pancreatic secretions add an additional 7–8 L of protein-, lipid-, and electrolyte-containing fluid to intestinal contents. This massive load is reduced by the small and large intestines to less than 200 g of stool that contains less than 8 g of fat, 1–2 g of nitrogen, and less than 20 mmol each of Na, K, Cl, HCO3, Ca, or Mg.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22180699", "title": "Diet and obesity", "section": "Section::::Dietary energy supply.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 917, "text": "The dietary energy supply is the food available for human consumption, usually expressed in kilocalories per person per day. It gives an overestimate of the total amount of food consumed as it reflects both food consumed and food wasted. The per capita dietary energy supply varies markedly between different regions and countries. It has also changed significantly over time. From the early 1970s to the late 1990s, the average calories available per person per day (the amount of food bought) has increased in all parts of the world except Eastern Europe and parts of Africa. The United States had the highest availability, with 3654 kilocalories per person in 1996. This increased further in 2002 to 3770. 
During the late 1990s, Europeans had 3394 kilocalories per person, in the developing areas of Asia there were 2648 kilocalories per person, and in sub-Saharan Africa people had 2176 kilocalories per person.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26639763", "title": "Weight management", "section": "Section::::Key components of weight management.:Thermogenic Effect of Food.\n", "start_paragraph_id": 68, "start_character": 0, "end_paragraph_id": 68, "end_character": 472, "text": "The thermogenic effect of food is another component of a person's daily energy expenditure and refers to the amount of energy it takes the body to digest, absorb, and metabolize nutrients in the diet. The amount of energy expended while processing food differs by individual but on average it amounts to about 10% of the number of calories consumed during a given time period. Processing proteins and carbohydrates has more of a thermogenic effect than does processing fats.\n", "bleu_score": null, "meta": null } ] } ]
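Two numeric claims in the excerpts above are easy to sanity-check: the bicycle-performance passage's conversion of 1000 kcal/d to roughly 48.5 W, and the quoted split of daily energy expenditure (about 70% basal, 20% physical activity, 10% thermogenic effect of food). A sketch using only the standard 4184 J/kcal factor; the 2000 kcal example total is an arbitrary illustration:

```python
def kcal_per_day_to_watts(kcal_per_day):
    """1 kcal = 4184 J and 1 day = 86,400 s, so kcal/day maps directly to watts."""
    return kcal_per_day * 4184 / 86400

def expenditure_split(total_kcal):
    """Rough 70/20/10 split quoted in the basal-metabolic-rate excerpt."""
    return {
        "basal": 0.70 * total_kcal,          # basal life processes
        "activity": 0.20 * total_kcal,       # physical activity
        "thermogenesis": 0.10 * total_kcal,  # digesting/absorbing food
    }

print(round(kcal_per_day_to_watts(1000), 1))  # ~48.4 W, close to the quoted "~48.5 W"
print(expenditure_split(2000))
```

The exact conversion gives 48.43 W for 1000 kcal/d, so the excerpt's "~48.5 W" is a light rounding.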
null
1jbyki
how do we know the articles from /r/politics and anything the media shows is actually real?
[ { "answer": "You should treat everything you read with a healthy dose of skepticism. /r/politics is quite bad in that it commonly uses extremely biased news sources. If you read it in /r/politics, chances are it's only one part of the story. Get your news from multiple reputable sources, such as the NY Times, Al Jazeera, PBS/NPR, etc. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1968076", "title": "On the Media", "section": "Section::::Format.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 404, "text": "The show also addresses questions about how the media is influenced or spun by politicians, corporations, and interest groups with the intent to shape public opinion. This includes an \"OTM\" feature that covers the media's use of terminologies that may engender biased points of view, and the use of hot-button issues and code words such as \"Michael Moore\", \"torture\", \"evangelical\", and \"islamofascist\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10638844", "title": "Laboratory News", "section": "Section::::Websites.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 366, "text": "News and feature articles can be viewed in full on the website along with the opportunity for readers to comment on articles. There is also an opinion poll posing a question for visitors to vote on. The events section gives details on conferences, exhibitions, shows, openings and workshops. Recently, streaming video has been added to some articles on the website.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2633667", "title": "Article (publishing)", "section": "Section::::News articles.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 211, "text": "A news article discusses current or recent news of either general interest (i.e. daily newspapers) or of a specific topic (i.e. 
political or trade news magazines, club newsletters, or technology news websites).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "59089831", "title": "NewsThump", "section": "Section::::Content.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 564, "text": "The site's articles are presented as genuine news stories, with frequent use of fake quotes, which editor Richard Smith has suggested are intended to mimic the BBC News website. In 2016, he was quoted as saying that \"if someone shares one of our stories believing it to be true, then we would see that as both amusing, but also a failure on our part\" but claimed that he was not worried by a clamp-down on \"fake news\" by social media companies. In 2017, however, he complained that their articles had been hit by Facebook's implementation of a \"fake news\" filter.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "782108", "title": "List of satirical television news programs", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 393, "text": "This is a list of satirical television news programs with a satirical bent, or parodies of news broadcasts, with either real or fake stories for mainly humorous purposes. The list does not include sitcoms or other programs set in a news-broadcast work environment, such as the US \"Mary Tyler Moore\", the UK's \"Drop The Dead Donkey\", the Australian \"Frontline\", or the Canadian \"The Newsroom\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "40343683", "title": "Los Desayunos de TVE", "section": "Section::::Format.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 301, "text": "The show consists of an interview with the host of the show, accompanied by journalists from prestigious relevance to a particular political character, but also social, cultural, economic, artistic, sports or media. 
Then develops a gathering of political content among journalists present on the set.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1968076", "title": "On the Media", "section": "Section::::Format.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 584, "text": "As defined by co-host Garfield, \"On the Media\" covers \"...anything that reaches a large audience—either electronically or otherwise... Plus, throw into that anything that covers First Amendment issues; anything that has to do with freedom of speech, privacy, is also in our portfolio.\" The show explores how the media are changing, and their effects on America and the world. Many stories are centered on events of the previous week and how they were covered in the news. These often consist of interviews with reporters about the dilemmas they face in covering controversial issues.\n", "bleu_score": null, "meta": null } ] } ]
null
5ycq05
How populist was the American Revolution? Was it a movement by the elites or did the lower classes support it?
[ { "answer": "So contrary to popular belief, the \"Founders\" were not the main proponents of separation from Great Britain -- quite the contrary. Most Founders were quite hesitant to pull away. However, populist movements really started to become prevalent by the end of the 1760s. \n\nTo explain, Founders like John Adams had a long history of fearing Democracy and many other aspects of their future government. Check out [Adams' own early view of democracy](_URL_2_) in 1763:\n\n > Democracy, will soon degenerate into an Anarchy, such an Anarchy that every Man will do what is right in his own Eyes, and no Mans life or Property or Reputation or Liberty will be secure and every one of these will soon mould itself into a system of subordination of all the moral Virtues, and Intellectual Abilities, all the Powers of Wealth, Beauty, Wit, and Science, to the wanton Pleasures, the capricious Will, and the execrable Cruelty of one or a very few.\n\nYeah, not very flattering, is it? However, his views eventually evolved as the situation in America became more severe.\n\nBy the early 1770s, populist movements across America really shifted American politics as a whole. Books like Marjoleine Kars' *Breaking Loose Together: The Regulator Rebellion in Pre-revolutionary North Carolina* outline the ways that populist movements in the South were able to fight back against corruption and the government (mainly between 1766 - 1771). This is important because \"Regulator Rebellions\" had a direct impact on what happened after the war.\n\nBetween 1772 - 1775, a lot changed, and most was spurred on by the general public. Everyone knows about the Boston Tea Party, but most people don't realize that the Boston Tea Party caused a ripple effect with Tea Parties all across America in 1774. Tea Parties happened in many American cities, including [Philadelphia, Annapolis, Charleston, and many others](_URL_1_). This forced the gentry into a precarious situation. 
Some, like Charles Carroll of Carrolton [corresponded with his father about this in 1774](_URL_3_), essentially saying that while he doesn't think that these protests are a good thing, the rich and powerful must get involved in American politics so they can secure their place in America's future. \n\nThat's why the tone of the early Continental Congress is very tame. Internal proceedings show that many states, especially in the South, leaders were not keen to even consider separation from Great Britain. But the challenge was that as the Continental Congresses met, the people back home kept protesting, and burning down the houses of tax collectors, or kicking governors out of their homes. A great book that tackles this is Terry Bouton's *Taming Democracy: The People, The Founders, and the Troubling End of the American Revolution* where he correctly ascribes many of the Founders to be hesitant leaders towards independence. It also explains why, even after minor hostilities broke out in 1775, the Founders still sent the \"[Olive Branch Petition](_URL_4_)\" back to Britain as they vainly hoped to stop a full war before it happened. Even when the Continental Congress empowered General George Washington with command in June 1775, they did not expect that his duties would necessarily be a full-break with Great Britain. The Continental Congress wanted some autonomy from Great Britain and some representation in government, which they believed was more achievable than independence. \n\nNow I should disclaim that this isn't true of all Founders. Some, especially in the North were much more pro-separation than many others. [Samuel Adams](_URL_0_), who was very vocal and active during the Stamp Acts protests, helped organized the Boston Tea Party, and was a member of the Continental Congress Representing Massachusetts was very vocal from the beginning that he believed separation from Great Britain should be a primary goal. 
There are a few others that fall into this category, but not many. \n\nTl;Dr: Most founders dragged their feet as the American \"mob\" dragged them forward . \n\nEdit: fixed a misspelled name. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "33272713", "title": "Populism in Canada", "section": "Section::::19th century.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 399, "text": "Anti-establishment populist politics became an important political force in 19th century Ontario amongst rural and working class political activists who were influenced by American populist radicals. Populism also became an important political force in Western Canada by the 1880s and 1890s. Populism was particularly strong in the form of farmer-labour coalition politics in the late 19th century.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47979035", "title": "Revolutions Without Borders", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 525, "text": "Polasky argues that the American Revolution, and the essays and arguments of its leaders, directly inspired a series of revolutions (some successful; most not) including the Geneva Revolution of 1782, the 1787 \"Patriot Revolution\" in the Dutch Republic, the Belgian \"small revolution\" of 1789, and the French Revolution itself. 
In her view, the literature and ideas of the American and French revolutionists converged to inspire a long series of revolutions at the end of the 18th century and in the early years of the 19th.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8702096", "title": "Presidency of George Washington", "section": "Section::::Foreign affairs.:French Revolution.:Public debate.\n", "start_paragraph_id": 87, "start_character": 0, "end_paragraph_id": 87, "end_character": 1340, "text": "Though originally most Americans were in support of the revolution, the political debate in the U.S. over the nature of the revolution soon exacerbated pre-existing political divisions and resulted in the alignment of the political elite along pro-French and pro-British lines. Thomas Jefferson became the leader of the pro-French faction that celebrated the revolution's republican ideals. Though originally in support of the revolution, Alexander Hamilton soon led the faction which viewed the revolution with skepticism (believing that \"absolute liberty would lead to absolute tyranny\") and sought to preserve existing commercial ties with Great Britain. When news reached America that France had declared war on the British, people were divided on whether the U.S. should enter the war on the side of France. Jefferson and his faction wanted to aid the French, while Hamilton and his followers supported neutrality in the conflict. Jeffersonians denounced Hamilton, Vice President Adams, and even the president as \"friends of Britain\", \"monarchists\", and \"enemies of the republican values that all true Americans cherish\". Hamiltonians warned that Jefferson's Republicans would replicate the terrors of the French revolution in America: \"crowd rule\" akin to anarchy, and the destruction of \"all order and rank in society and government.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "61313146", "title": "History of U.S. 
foreign policy, 1776–1801", "section": "Section::::Outbreak of the French Revolution.:Public debate.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 1340, "text": "Though originally most Americans were in support of the revolution, the political debate in the U.S. over the nature of the revolution soon exacerbated pre-existing political divisions and resulted in the alignment of the political elite along pro-French and pro-British lines. Thomas Jefferson became the leader of the pro-French faction that celebrated the revolution's republican ideals. Though originally in support of the revolution, Alexander Hamilton soon led the faction which viewed the revolution with skepticism (believing that \"absolute liberty would lead to absolute tyranny\") and sought to preserve existing commercial ties with Great Britain. When news reached America that France had declared war on the British, people were divided on whether the U.S. should enter the war on the side of France. Jefferson and his faction wanted to aid the French, while Hamilton and his followers supported neutrality in the conflict. Jeffersonians denounced Hamilton, Vice President Adams, and even the president as \"friends of Britain\", \"monarchists\", and \"enemies of the republican values that all true Americans cherish\". Hamiltonians warned that Jefferson's Republicans would replicate the terrors of the French revolution in America: \"crowd rule\" akin to anarchy, and the destruction of \"all order and rank in society and government.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "211484", "title": "Populism", "section": "Section::::History.:North America.\n", "start_paragraph_id": 123, "start_character": 0, "end_paragraph_id": 123, "end_character": 595, "text": "In the first decade of the 21st century, two populist movements appeared in the US, both in response to the Great Recession: the Occupy movement and the Tea Party movement. 
The populist approach of the Occupy movement was broader, with its \"people\" being what it called \"the 99%\", while the \"elite\" it challenged was presented as both the economic and political elites. The Tea Party's populism was Producerism, while \"the elite\" it presented was more party partisan than that of Occupy, being defined largely—although not exclusively—as the Democratic administration of President Barack Obama.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8354382", "title": "Right-wing populism", "section": "Section::::By country.:America.:United States.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 594, "text": "Moore (1996) argues that \"populist opposition to the growing power of political, economic, and cultural elites\" helped shape \"conservative and right-wing movements\" since the 1920s. Historical right-wing populist figures in both major parties in the United States have included Thomas E. Watson, Strom Thurmond, Joe McCarthy, Barry Goldwater, George Wallace and Pat Buchanan. When Conservative Democrats dominated the politics of the Democratic Party, they comprised a faction of the Democrats which espoused populism, while the Republicans have adopted some forms of populism since the 1960s.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1406601", "title": "Market populism", "section": "Section::::1990s America.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 557, "text": "The concept of market populism became especially popular during the American New Economy, which began in the 1990s. Academics, executives, Democrats and Republicans all shared the idea that markets were a popular system. In other words, because they were considered to be efficient at allocating resources, therefore the inefficiencies arising from poor legislation or unethical practices would be rooted out. 
The phrase \"golden straitjacket\" was coined by Thomas Friedman in his 1999 book, \"The Lexus and the Olive Tree\", as a synonym for market populism.\n", "bleu_score": null, "meta": null } ] } ]
null
43k7v1
why would a country implement negative interest rates (e.g. Japan)
[ { "answer": "Yes. That's exactly why they did it. They are suffering a recession right now, and this would stimulate people to take their money and spend it, giving a boost to the economy.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "34392", "title": "Japanese yen", "section": "Section::::Determinants of value.\n", "start_paragraph_id": 57, "start_character": 0, "end_paragraph_id": 57, "end_character": 498, "text": "Since the 1990s, the Bank of Japan, the country's central bank, has kept interest rates low in order to spur economic growth. Short-term lending rates have responded to this monetary relaxation and fell from 3.7% to 1.3% between 1993 and 2008. Low interest rates combined with a ready liquidity for the yen prompted investors to borrow money in Japan and invest it in other countries (a practice known as carry trade). This has helped to keep the value of the yen low compared to other currencies.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "495103", "title": "Economic history of Japan", "section": "Section::::Post-World War II.:Since the end of Cold War.:Deflation from the 1990s to present.\n", "start_paragraph_id": 75, "start_character": 0, "end_paragraph_id": 75, "end_character": 688, "text": "Deflation in Japan started in the early 1990s. On 19 March 2001, the Bank of Japan and the Japanese government tried to eliminate deflation in the economy by reducing interest rates (part of their 'quantitative easing' policy). Despite having interest rates near zero for a long period, this strategy did not succeed. Once the near-zero interest rates failed to stop deflation, some economists, such as Paul Krugman, and some Japanese politicians spoke of deliberately causing (or at least creating the fear of) inflation. In July 2006, the zero-rate policy was ended. 
In 2008, the Japanese Central Bank still had the lowest interest rates in the developed world and deflation continued.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22099091", "title": "Subprime mortgage crisis solutions debate", "section": "Section::::Liquidity.:Lower interest rates.:Arguments against lower interest rates.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 630, "text": "Other things being equal, economic theory suggests that lowering interest rates relative to other countries weakens the domestic currency. This is because capital flows to nations with higher interest rates (after subtracting inflation and the political risk premium), causing the domestic currency to be sold in favor of foreign currencies, a variation of which is called the carry trade. Further, there is risk that the stimulus provided by lower interest rates can lead to demand-driven inflation once the economy is growing again. Maintaining interest rates at a low level also discourages saving, while encouraging spending.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "180311", "title": "Exchange rate", "section": "Section::::Uncovered interest rate parity.\n", "start_paragraph_id": 73, "start_character": 0, "end_paragraph_id": 73, "end_character": 761, "text": "Uncovered interest rate parity (UIRP) states that an appreciation or depreciation of one currency against another currency might be neutralized by a change in the interest rate differential. If US interest rates increase while Japanese interest rates remain unchanged then the US dollar should depreciate against the Japanese yen by an amount that prevents arbitrage (in reality the opposite, appreciation, quite frequently happens in the short-term, as explained below). The future exchange rate is reflected into the forward exchange rate stated today. 
In our example, the forward exchange rate of the dollar is said to be at a discount because it buys fewer Japanese yen in the forward rate than it does in the spot rate. The yen is said to be at a premium.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "163115", "title": "Interest rate", "section": "Section::::Negative nominal or real rates.\n", "start_paragraph_id": 101, "start_character": 0, "end_paragraph_id": 101, "end_character": 626, "text": "\"Nominal\" interest rates are normally positive, but not always. In contrast, \"real\" interest rates can be negative, when nominal interest rates are below inflation. When this is done via government policy (for example, via reserve requirements), this is deemed financial repression, and was practiced by countries such as the United States and United Kingdom following World War II (from 1945) until the late 1970s or early 1980s (during and following the Post–World War II economic expansion). In the late 1970s, United States Treasury securities with negative real interest rates were deemed \"certificates of confiscation\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "234706", "title": "Usury", "section": "Section::::Usury law.:Japan.\n", "start_paragraph_id": 69, "start_character": 0, "end_paragraph_id": 69, "end_character": 787, "text": "Japan has various laws restricting interest rates. Under civil law, the maximum interest rate is between 15% and 20% per year depending upon the principal amount (larger amounts having a lower maximum rate). Interest in excess of 20% is subject to criminal penalties (the criminal law maximum was 29.2% until it was lowered by legislation in 2010). 
Default interest on late payments may be charged at up to 1.46 times the ordinary maximum (i.e., 21.9% to 29.2%), while pawn shops may charge interest of up to 9% per month (i.e., 108% per year; however, if the loan extends beyond the normal short-term pawn shop term, the compounded 9% per month rate can push the annual rate in excess of 180%, though before then most of these transactions would result in any goods pawned being forfeited).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15345843", "title": "Endaka", "section": "Section::::History.:Origins.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 1135, "text": "After the severe housing bubble burst in 1992, Japan's interest rates sank to near zero. Coupled with gigantic savings accumulated over decades from overseas surpluses, and a soaring yen, Japan tried a number of measures to weaken its currency. First it began to buy up properties overseas, such as the Rockefeller Center in New York City in 1990, as well as investing in US corporate bonds. After huge property losses, it gave that up. Another was state intervention by the BOJ in foreign exchange reserves, which it ultimately gave up in 2004 after accumulating nearly a trillion dollars. Japan also invested directly in Fannie Mae and other mortgage bonds, holding close to a trillion dollars in those bonds. Yet another measure was to loan out hoards of money to US and European banks at zero percent rates, which began in earnest in 2004, also known as the massive carry trade (via yen-denominated bank loans to overseas investors). US and European banks then loaned this money out to home owners in America, as well as big property investors in the Middle East. This effectively kept the yen at 120 or weaker levels to the dollar.\n", "bleu_score": null, "meta": null } ] } ]
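The usury excerpt notes that a 9% monthly rate is 108% per year as simple interest, but "in excess of 180%" once compounding is considered. The standard effective-annual-rate formula confirms that figure (a generic sketch, not drawn from any statute or source above):

```python
def effective_annual_rate(periodic_rate, periods_per_year=12):
    """Effective annual rate from a compounded periodic rate."""
    return (1 + periodic_rate) ** periods_per_year - 1

# 9% per month: 12 * 9% = 108%/yr simple, but compounding gives ~181%/yr.
print(round(effective_annual_rate(0.09) * 100, 1))  # -> 181.3
```

The gap between 108% and 181% is exactly the difference between quoting a nominal annual rate and an effective (compounded) one.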
null
3awpj2
How common is it for a planet to have a natural satellite like our moon?
[ { "answer": "Consider the solar system as we know it. Of the 8 major planets, all but 2 (Mercury & Venus) have at least one natural satellite, most have more. Mars 2, Jupiter & Saturn 60+ each, Uranus & Neptune 27 & 14 respectively. Even the dwarf planets have satellites, Pluto has 5.", "provenance": null }, { "answer": "Natural satellites seem to be pretty common amongst planets. 6 out of 8 planets have at least one moon, and many dwarf planets as well as some asteroids also have moons. \n\nWhat's remarkable is the size and mass ratio of the earth-moon system: It is 1:3.67 for the size and 1:81 for the mass. In other words, the moon is really big compared to its mother planet. Similarly sized moons orbit only gas giants, where the mass and size ratio is much smaller, because the gas giants are so much bigger and heavier than earth. \n\nThe reason why a comparatively small planet like earth has such a big moon lies in the formation of our system. It is very likely that a mars-sized object shared an orbit with the young earth, which eventually led to their collision. The heavy core of that object contributed a significant amount of metals like iron and nickel to the earth, while lighter stuff like rock was spewed into an orbit and coalesced to finally form our moon. That's a nice explanation for why we have such a big moon and why earth is so big and **dense** amongst the terrestrial planets. \n\nThe only system which seems similar to ours is actually not a planetary system, but a dwarf planet system, namely the one of Pluto and Charon. The size and mass ratios are even more extreme here, since Charon has more than half the diameter of Pluto and about 1/8 its mass. That causes the barycenter to be outside of Pluto, which is why some people think it should be considered its own type of system called a double planet. The probe [New Horizons](_URL_0_) is going to visit the system this July and is already sending data to earth. 
Perhaps we will learn more about our own system by studying that dwarf planet. \n\nRegarding moons of planets outside the solar system, the so-called [exomoons](_URL_1_), we don't have much data yet, because our technology is still in early development. There are some good candidates listed in the linked article and our own solar system suggests that moons seem to be pretty common, but we still have to improve our observational instruments to get information on how common moons around other planets are and what properties they have. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "8453893", "title": "Claimed moons of Earth", "section": "Section::::Quasi-satellites and trojans.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 422, "text": "Although no other moons of Earth have been found to date, there are various types of near-Earth objects in 1:1 resonance with it, which are known as quasi-satellites. Quasi-satellites orbit the Sun from the same distance as a planet, rather than the planet itself. Their orbits are unstable, and will fall into other resonances or be kicked into other orbits over thousands of years. Quasi-satellites of Earth include , ,\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53306", "title": "Natural satellite", "section": "Section::::Satellites of satellites.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 205, "text": "No \"moons of moons\" or subsatellites (natural satellites that orbit a natural satellite of a planet) are currently known . 
In most cases, the tidal effects of the planet would make such a system unstable.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53306", "title": "Natural satellite", "section": "Section::::Terminology.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 224, "text": "Many authors define \"satellite\" or \"natural satellite\" as orbiting some planet or minor planet, synonymous with \"moon\" – by such a definition all natural satellites are moons, but Earth and other planets are not satellites.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6661422", "title": "Small Solar System body", "section": "Section::::Definition.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 324, "text": "Except for the largest, which are in hydrostatic equilibrium, natural satellites (moons) differ from small Solar System bodies not in size, but in their orbits. The orbits of natural satellites are not centered on the Sun, but around other Solar System objects such as planets, dwarf planets, and small Solar System bodies.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53306", "title": "Natural satellite", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 268, "text": "In the Solar System there are six planetary satellite systems containing 185 known natural satellites. Four IAU-listed dwarf planets are also known to have natural satellites: Pluto, Haumea, Makemake, and Eris. , there are 334 other minor planets known to have moons.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8453893", "title": "Claimed moons of Earth", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 230, "text": "Although the Moon is Earth's only natural satellite, there are a number of near-Earth objects (NEOs) with orbits that are in resonance with Earth. 
These have been called, inaccurately, \"second\", \"third\" or \"other\" moons of Earth.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53306", "title": "Natural satellite", "section": "Section::::Terminology.:Definition of a moon.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 452, "text": "There is no established lower limit on what is considered a \"moon\". Every natural celestial body with an identified orbit around a planet of the Solar System, some as small as a kilometer across, has been considered a moon, though objects a tenth that size within Saturn's rings, which have not been directly observed, have been called \"moonlets\". Small asteroid moons (natural satellites of asteroids), such as Dactyl, have also been called moonlets.\n", "bleu_score": null, "meta": null } ] } ]
null
1la3d6
What would happen when light reflects off of a mirror, if the mirror was artificially heated to have a higher net energy than the particle does?
[ { "answer": "What do you mean \"higher net energy\"? The total heat energy in a given mirror is typically much higher than the energy of a given photon in the visible spectrum.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "539354", "title": "Mirror matter", "section": "Section::::Observational effects.:Abundance.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 627, "text": "Mirror matter could have been diluted to unobservably low densities during the inflation epoch. Sheldon Glashow has shown that if at some high energy scale particles exist which interact strongly with both ordinary and mirror particles, radiative corrections will lead to a mixing between photons and mirror photons. This mixing has the effect of giving mirror electric charges a very small ordinary electric charge. Another effect of photon–mirror photon mixing is that it induces oscillations between positronium and mirror positronium. Positronium could then turn into mirror positronium and then decay into mirror photons.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1045193", "title": "Perfect mirror", "section": "Section::::General.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 307, "text": "Almost any dielectric material can act as a perfect mirror through total internal reflection. This effect only occurs at shallow angles, however, and only for light inside the material. 
The effect happens when light goes from a medium with a higher index of refraction to one with a lower value (like air).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39606746", "title": "Central Laser Facility", "section": "Section::::Notable studies.:The Light Clock.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 559, "text": "Einstein proposed as part of his theory of special relativity that light reflected from a mirror moving close to the speed of light will have higher peak power than the incident light because of temporal compression. Using a dense relativistic electron mirror created from a high-intensity laser pulse and nanometre-scale foil, the frequency of the laser pulse was shown to shift coherently from infrared to the ultraviolet. The results elucidate the reflection process of laser-generated electron mirrors and suggest future research in relativistic mirrors.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "142258", "title": "Laser diode", "section": "Section::::Failure mechanisms.\n", "start_paragraph_id": 66, "start_character": 0, "end_paragraph_id": 66, "end_character": 871, "text": "Essentially, as a result, when light propagates through the cleavage plane and transits to free space from within the semiconductor crystal, a fraction of the light energy is absorbed by the surface states where it is converted to heat by phonon-electron interactions. This heats the cleaved mirror. In addition, the mirror may heat simply because the edge of the diode laser—which is electrically pumped—is in less-than-perfect contact with the mount that provides a path for heat removal. The heating of the mirror causes the bandgap of the semiconductor to shrink in the warmer areas. The bandgap shrinkage brings more electronic band-to-band transitions into alignment with the photon energy causing yet more absorption. 
This is thermal runaway, a form of positive feedback, and the result can be melting of the facet, known as \"catastrophic optical damage\", or COD.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4969106", "title": "Catastrophic optical damage", "section": "Section::::Causes and mechanisms.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 871, "text": "Essentially, as a result when light propagates through the cleavage plane and transits to free space from within the semiconductor crystal, a fraction of the light energy is absorbed by the surface states where it is converted to heat by phonon-electron interactions. This heats the cleaved mirror. In addition the mirror may heat simply because the edge of the diode laser—which is electrically pumped—is in less-than-perfect contact with the mount that provides a path for heat removal. The heating of the mirror causes the band gap of the semiconductor to shrink in the warmer areas. The band gap shrinkage brings more electronic band-to-band transitions into alignment with the photon energy causing yet more absorption. This is thermal runaway, a form of positive feedback, and the result can be melting of the facet, known as \"catastrophic optical damage\", or COD.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "40581191", "title": "Infinity mirror", "section": "Section::::Explanation of effect.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 333, "text": "The 3D illusion mirror effect is produced whenever there are two parallel reflective surfaces which can bounce a beam of light back and forth an indefinite (theoretically infinite) number of times. The reflections appear to recede into the distance because the light actually is traversing the distance it appears to be travelling. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "894774", "title": "Emission theory", "section": "Section::::History.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 307, "text": "BULLET::::- The excited portion of a reflecting mirror acts as a new source of light and the reflected light has the same velocity \"c\" with respect to the mirror as has original light with respect to its source. (Proposed by Richard Chase Tolman in 1910, although he was a supporter of special relativity).\n", "bleu_score": null, "meta": null } ] } ]
null
zav98
Is it possible for the body to stop identifying an allergen as harmful after years of no exposure?
[ { "answer": "Actually, yes! Memory B-cells are what are responsible for long term humoral immunity (the kind involved in allergic reactions, among other things). Memory B-cells are some of the longest lived cells in the body, behind maybe neurons and cardiac myocytes, but even then they only live about 20 years. Without some kind of stimulation since the first incident, it's entirely possible that the clonal population your body created after your sting at age 12 has since died off or diminished to the point where you're anergic to yellow jacket venom.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "3485964", "title": "Allergic contact dermatitis", "section": "Section::::Signs and symptoms.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 306, "text": "The symptoms of allergic contact may persist for as long as one month before resolving completely. Once an individual has developed a skin reaction to a certain substance it is most likely that they will have it for the rest of their life, and the symptoms will reappear when in contact with the allergen.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3386326", "title": "Estragole", "section": "Section::::Safety.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 404, "text": "The Scientific Committee on Food from the Health and Consumer Protection Directorate took a more concerned position and concluded that \"Estragole has been demonstrated to be genotoxic and carcinogenic. Therefore the existence of a threshold cannot be assumed and the Committee could not establish a safe exposure limit. 
Consequently, reductions in exposure and restrictions in use levels are indicated.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55313", "title": "Allergy", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 726, "text": "Common allergens include pollen and certain food. Metals and other substances may also cause problems. Food, insect stings, and medications are common causes of severe reactions. Their development is due to both genetic and environmental factors. The underlying mechanism involves immunoglobulin E antibodies (IgE), part of the body's immune system, binding to an allergen and then to a receptor on mast cells or basophils where it triggers the release of inflammatory chemicals such as histamine. Diagnosis is typically based on a person's medical history. Further testing of the skin or blood may be useful in certain cases. Positive tests, however, may not mean there is a significant allergy to the substance in question.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55313", "title": "Allergy", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 473, "text": "Early exposure to potential allergens may be protective. Treatments for allergies include avoiding known allergens and the use of medications such as steroids and antihistamines. In severe reactions injectable adrenaline (epinephrine) is recommended. Allergen immunotherapy, which gradually exposes people to larger and larger amounts of allergen, is useful for some types of allergies such as hay fever and reactions to insect bites. 
Its use in food allergies is unclear.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49284", "title": "Methylchloroisothiazolinone", "section": "Section::::Safety.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 280, "text": "Methylchloroisothiazolinone can cause allergic reactions in some people. The first publication of the preservative as a contact allergen was in 1988. Cases of photoaggravated allergic contact dermatitis, i.e. worsening of skin lesions after sun exposure, have also been reported.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "566975", "title": "Sanofi", "section": "Section::::Products.\n", "start_paragraph_id": 98, "start_character": 0, "end_paragraph_id": 98, "end_character": 282, "text": "Sanofi US also added the following warning: If a patient experiencing a serious allergic reaction (i.e., anaphylaxis) did not receive the intended dose, there could be significant health consequences, including death because anaphylaxis is a potentially life‑threatening condition.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2410182", "title": "Hydrolyzed vegetable protein", "section": "Section::::Allergenicity.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 432, "text": "Nevertheless, strong evidence indicates at least aHVP is not allergenic, since proteins are degraded to single amino acids which are not likely to trigger an allergic reaction. A recent study has shown that aHVP does not contain detectable traces of proteins or IgE-reactive peptides. This provides strong evidence that aHVP is very unlikely to trigger an allergic reaction to people who are intolerant or allergic to soy or wheat.\n", "bleu_score": null, "meta": null } ] } ]
null
4yq3nq
In the history of presidential elections in the United States has a major political party ever functionally conceded defeat months before the general election and ran its house/senate candidates as checks on the opposing party's candidates power once they assumed the presidency?
[ { "answer": "Republicans ran Congressional campaigns in 1996 (Clinton/Dole) explicitly as a check against the presumed Clinton victory. (Clinton was up 8 points in October polling.) The NRCC warned against giving Clinton a blank check and pushed for voters in vulnerable districts to split their ballot.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "43179439", "title": "1824 United States elections", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 1392, "text": "In the first close presidential election since the 1812 election, four major candidates ran, all of whom were members of the Democratic-Republican Party. The Democratic-Republicans had largely been successful in fielding only one presidential candidate in previous elections (except in 1812), but the breakdown of the congressional nominating caucus and a lack of meaningful opposition from the Federalists allowed for a multi-candidate field. Senator Andrew Jackson from Tennessee, Secretary of State John Quincy Adams, Secretary of the Treasury William Crawford, and Speaker of the House Henry Clay all received electoral votes. With no candidate receiving a majority of the electoral vote, the House chose among the three candidates (Jackson, Adams, and Crawford) with the most electoral votes. Although Jackson won a plurality of electoral and popular votes, the House elected Adams as President. Despite the chaos in the presidential election, John C. Calhoun won the vice presidency with a majority of electoral votes. The 1824 presidential election was the only time that the House elected the president under the terms of the Twelfth Amendment, and the only time that the winner of the most electoral votes did not win the presidency. 
Adams's victory ended the Virginia dynasty of presidents, but continued the trend of the incumbent secretary of state winning election as president.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31664", "title": "Twelfth Amendment to the United States Constitution", "section": "Section::::Elections since 1804.\n", "start_paragraph_id": 59, "start_character": 0, "end_paragraph_id": 59, "end_character": 502, "text": "Since 1836, no major U.S. party has nominated multiple regional presidential or vice presidential candidates in an election. However, since the Civil War there have been two serious attempts by Southern-based parties to run regional candidates in hopes of denying either of the two major candidates an electoral college majority. Both attempts (in 1948 and 1968) failed, but not by much—in both cases a shift in the result of two close states would have forced the respective elections into the House.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "185311", "title": "United States presidential primary", "section": "Section::::Background.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 359, "text": "Starting with the 1796 election, Congressional party or a state legislature party caucus selected the party's presidential candidates. Before 1820, Democratic-Republican members of Congress would nominate a single candidate from their party. That system collapsed in 1824, and since 1832 the preferred mechanism for nomination has been a national convention.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "48775742", "title": "November 1914", "section": "Section::::November 3, 1914 (Tuesday).\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 572, "text": "BULLET::::- The United States general elections were held to elect members for the 64th United States Congress. 
The Democratic Party retained control of both houses of Congress, the first time since the Civil War. The United States House of Representatives had 230 seats go to the Democrats while the Republican Party gained 196 (with 6 going to independents). It was also the first time American voters could elect candidates to the U.S. Senate with the ratification of the Seventeenth Amendment, resulting in 51 seats for the Democrats and 44 seats for the Republicans.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55189857", "title": "List of unsuccessful major party candidates for President of the United States", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 1335, "text": "The two current major parties are the Democratic Party and the Republican Party. At various points prior to the American Civil War, the Federalist Party, the Democratic-Republican Party, the National Republican Party, and the Whig Party were major parties. These six parties have nominated candidates in the vast majority of presidential elections, but six presidential elections deviate from the normal pattern of two major party candidates. There were no major party candidates for president in the presidential election of 1789 and the presidential election of 1792, both of which were won by George Washington. In the 1812 presidential election, DeWitt Clinton served as the de facto Federalist nominee even though he was a member of the Democratic-Republican Party; Clinton was defeated by Democratic-Republican President James Madison. In the presidential election of 1820, incumbent President James Monroe of the Democratic-Republican Party effectively ran unopposed. In the 1824 presidential election, four Democratic-Republicans competed in multiple states in the general election as the party was unable to agree on a single nominee. 
Similarly, in the presidential election of 1836, the Whig Party did not unify around a single candidate and two different Whig candidates competed in multiple states in the general election.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1202130", "title": "The Guns of the South", "section": "Section::::Plot.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 323, "text": "In the 1864 United States presidential election, it takes until November 19 to work out whether Democrats or Republicans had won the election. The Democrats' candidate, Horatio Seymour, and his running mate, Clement Vallandigham, narrowly defeat Republican US President Abraham Lincoln and Vice President Hannibal Hamlin. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "216817", "title": "Running mate", "section": "Section::::In United States politics.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 833, "text": "The practice of a presidential candidate having a running mate was solidified during the American Civil War. In 1864, in the interest of fostering national unity, Abraham Lincoln from the Republican Party (popular in the North) and Andrew Johnson of the Democratic Party (popular in the South) were co-endorsed and ran together for President and Vice-President as candidates of the National Union Party. Notwithstanding this party disbanded after the war ended, with the result that Republican Lincoln after his assassination was succeeded by Democrat Johnson; the states began to place candidates for President and Vice-President together on the same ballot ticket – thus making it impossible to vote for a presidential candidate from one party and a vice-presidential candidate from another party, as had previously been possible.\n", "bleu_score": null, "meta": null } ] } ]
null
1llu2k
why do humans drink so much water when compared to cats/dogs?
[ { "answer": "Humans sweat while most other animals with fur/feathers don't. Obviously you need to replace the moisture lost by sweating. \n\nEfficiency of our bodies is another issue. Some animals have more efficient kidneys that concentrate urine more strongly than humans do. This means they need less water to carry away waste.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "36843243", "title": "Preference test", "section": "Section::::Uses.:Preferences of wild animals.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 311, "text": "BULLET::::- There have been relatively few studies on the preferences of wild animals. A recent study has shown that feral pigeons do not discriminate drinking water according to its content of metabolic wastes, such as uric acid or urea (mimicking faeces- or urine-pollution by birds or mammals respectively).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "198725", "title": "Drinking water", "section": "Section::::Other animals.\n", "start_paragraph_id": 93, "start_character": 0, "end_paragraph_id": 93, "end_character": 487, "text": "The qualitative and quantitative aspects of drinking water requirements of domesticated animals are studied and described within the context of animal husbandry. However, relatively few studies have been focused on the drinking behavior of wild animals. 
A recent study has shown that feral pigeons do not discriminate drinking water according to its content of metabolic wastes, such as uric acid or urea (mimicking faeces-pollution by birds or urine-pollution by mammals respectively).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6678", "title": "Cat", "section": "Section::::Physiology.:Water conservation.\n", "start_paragraph_id": 90, "start_character": 0, "end_paragraph_id": 90, "end_character": 310, "text": "Cats' feces are comparatively dry and their urine is highly concentrated, both of which are adaptations to allow cats to retain as much water as possible. Their kidneys are so efficient, they can survive on a diet consisting only of meat, with no additional water, and can even rehydrate by drinking seawater.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8595464", "title": "Cat behavior", "section": "Section::::Eating patterns.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 268, "text": "Cats drink water by lapping the surface with their tongue. A fraction of a teaspoon of water is taken up with each lap. Although some desert cats are able to obtain much of their water needs through the flesh of their prey, most cats come to bodies of water to drink.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31734", "title": "Urea", "section": "Section::::Adverse effects.\n", "start_paragraph_id": 52, "start_character": 0, "end_paragraph_id": 52, "end_character": 458, "text": "High concentrations in the blood can be damaging. Ingestion of low concentrations of urea, such as are found in typical human urine, are not dangerous with additional water ingestion within a reasonable time-frame. 
Many animals (e.g., dogs) have a much more concentrated urine and it contains a higher urea amount than normal human urine; this can prove dangerous as a source of liquids for consumption in a life-threatening situation (such as in a desert).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52637243", "title": "Sport dog nutrition", "section": "Section::::Water.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 325, "text": "Dogs require a constant source of clean and fresh water. This is especially true for sporting dogs participating in high energy activities. High protein diets require increased water intake for removal of extra nitrogen via urination.12 Furthermore, to deposit protein within the animal, water is also an essential mediator.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "509632", "title": "Thirst", "section": "Section::::Thirst quenching.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 213, "text": "Thirst quenching varies among animal species, with dogs, camels, sheep, goats, and deer replacing fluid deficits quickly when water is available, whereas humans and horses may need hours to restore fluid balance.\n", "bleu_score": null, "meta": null } ] } ]
null
1am2hh
Is it really possible to "utilize the natural electric currents within the earth" and convert it into "radiant electricity?"
[ { "answer": "Well, first off, the disinfographic claims that there are electric currents in the ground, but in fact **the crust of the Earth doesn't have significant electric currents**, certainly nothing strong enough to extract useful power from. It's mostly incoherent babble that doesn't mean anything at all, so there aren't many actual claims to debunk.\n\nAlso, it says \"this is the secret they will do anything to hide\", but apparently \"they\" can't be bothered to use a handful of computers to initiate a sustained DOS attack against his shitty pseudoscience website and keep it offline.\n\nI gritted my teeth and went to that website, and was greeted by such headlines as \"A Chemical Conception of the Ether\" and \"Avenge Tesla Once and 4 All-- Elite Truth Warriors Only\" and \"Notes on a Hollow Earth\". This stuff is pure crackpot.\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "212141", "title": "Power station", "section": "Section::::Power from renewable energy.:Solar.\n", "start_paragraph_id": 51, "start_character": 0, "end_paragraph_id": 51, "end_character": 288, "text": "A solar photovoltaic power plant converts sunlight into direct current electricity using the photoelectric effect. Inverters change the direct current into alternating current for connection to the electrical grid. This type of plant does not use rotating machines for energy conversion.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "140271", "title": "Kardashev scale", "section": "Section::::Energy development.:Type I civilization methods.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 633, "text": "BULLET::::- Renewable energy through converting sunlight into electricity—either by using solar cells and concentrating solar power or indirectly through biofuel, wind and hydroelectric power. 
There is no known way for human civilization to use the equivalent of the Earth's total absorbed solar energy without completely coating the surface with human-made structures, which is not feasible with current technology. However, if a civilization constructed very large space-based solar power satellites, Type I power levels might become achievable—these could convert sunlight to microwave power and beam that to collectors on Earth.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9540", "title": "Electricity generation", "section": "Section::::Methods of generating electricity.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 508, "text": "Several fundamental methods exist to convert other forms of energy into electrical energy. The triboelectric effect, piezoelectric effect, and even direct capture of the energy of nuclear decay Betavoltaics are used in niche applications, as is a direct conversion of heat to electric power in the thermoelectric effect. Utility-scale generation is done by rotating electric generators, or by photovoltaic systems. 
A very small proportion of electric power distributed by utilities is provided by batteries.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3508315", "title": "Electric heating", "section": "Section::::Environmental and efficiency aspects.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 263, "text": "Where the primary source of electrical energy is hydroelectric, nuclear, or wind, transferring electricity via the grid can be convenient, since the resource may be too distant for direct heating applications (with the notable exception of solar thermal energy).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "857973", "title": "Contact electrification", "section": "Section::::Semiconductor contact.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 595, "text": "In materials with a direct band gap, if bright light is aimed at one part of the contact area between the two semiconductors, the voltage at that spot will rise, and an electric current will appear. When considering light in the context of contact electrification, the light energy is changed directly into electrical energy, allowing creation of solar cells. Later it was found that the same process can be reversed, and if a current is forced backwards across the contact region between the semiconductors, sometimes light will be emitted, allowing creation of the light-emitting diode (LED).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3270043", "title": "Electric power", "section": "Section::::Use.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 962, "text": "Electric power, produced from central generating stations and distributed over an electrical transmission grid, is widely used in industrial, commercial and consumer applications. The per capita electric power consumption of a country correlates with its industrial development. 
Electric motors power manufacturing machinery and propel subways and railway trains. Electric lighting is the most important form of artificial light. Electrical energy is used directly in processes such as extraction of aluminum from its ores and in production of steel in electric arc furnaces. Reliable electric power is essential to telecommunications and broadcasting. Electric power is used to provide air conditioning in hot climates, and in some places electric power is an economically competitive source of energy for building space heating. Use of electric power for pumping water ranges from individual household wells to irrigation projects and energy storage projects.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1343597", "title": "Electrical energy", "section": "Section::::Electricity generation.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 381, "text": "Electricity is most often generated at a power station by electromechanical generators, primarily driven by heat engines fueled by chemical combustion or nuclear fission but also by other means such as the kinetic energy of flowing water and wind. There are many other technologies that can be and are used to generate electricity such as solar photovoltaics and geothermal power.\n", "bleu_score": null, "meta": null } ] } ]
null
5yuubs
why have salaries not increased on par with the cost of living.
[ { "answer": "Our economy is based on everlasting, perpetual growth. In other words, if a company, e.g. Walmart, doesn't post a profit increase in its year-over-year sales report, it is considered unsuccessful or not profitable and investors start pulling away. One of the easiest ways to do that is to keep your payroll as low as possible. Now multiply that by 100s of companies that are very powerful and have significant representation and influence in our government, and there is your answer. This is as simple a way as I can put it without writing an essay.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "32022", "title": "Economy of the United States", "section": "Section::::Income and wealth.:Profits and wages.\n", "start_paragraph_id": 81, "start_character": 0, "end_paragraph_id": 81, "end_character": 361, "text": "According to an October 2014 report by the Pew Research Center, real wages have been flat or falling for the last five decades for most U.S. workers, regardless of job growth. Bloomberg reported in July 2018 that real GDP per capita has grown substantially since the Great Recession, but real compensation per hour, including benefits, hasn't increased at all.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3126360", "title": "Baumol's cost disease", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 605, "text": "The rise of wages in jobs without productivity gains is from the requirement to compete for employees with jobs that have experienced gains and so can naturally pay higher salaries, just as classical economics predicts. For instance, if the retail sector pays its managers 19th-century-style salaries, the managers may decide to quit to get a job at an automobile factory, where salaries are higher because of high labor productivity. 
Thus, managers' salaries are increased not by labor productivity increases in the retail sector but by productivity and corresponding wage increases in other industries.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "730742", "title": "Principal–agent problem", "section": "Section::::Performance evaluation.:Objective performance evaluation.\n", "start_paragraph_id": 48, "start_character": 0, "end_paragraph_id": 48, "end_character": 1438, "text": "The reason that employees are often paid according to hours of work rather than by direct measurement of results is that it is often more efficient to use indirect systems of controlling the quantity and quality of effort, due to a variety of informational and other issues (e.g., turnover costs, which determine the optimal minimum length of relationship between firm and employee). This means that methods such as deferred compensation and structures such as tournaments are often more suitable to create the incentives for employees to contribute what they can to output over longer periods (years rather than hours). These represent \"pay-for-performance\" systems in a looser, more extended sense, as workers who consistently work harder and better are more likely to be promoted (and usually paid more), compared to the narrow definition of \"pay-for-performance\", such as piece rates. This discussion has been conducted almost entirely for self-interested rational individuals. In practice, however, the incentive mechanisms which successful firms use take account of the socio-cultural context they are embedded in (Fukuyama 1995, Granovetter 1985), in order not to destroy the social capital they might more constructively mobilise towards building an organic, social organization, with the attendant benefits from such things as \"worker loyalty and pride (...) 
[which] can be critical to a firm's success ...\" (Sappington 1991,63)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9525663", "title": "Overhead (business)", "section": "Section::::Administrative overheads.:Examples.:Employee salaries.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 493, "text": "This includes mainly monthly and annual salaries that are agreed upon. They are considered overheads as these costs must be paid regardless of sales and profits of the company. In addition, salary differs from wage as salary is not affected by working hours and time, therefore will remain constant. In particular, this would more commonly apply to more senior staff members as they are typically signed to longer tenure contracts, meaning that their salaries are more commonly predetermined.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24987493", "title": "Health care reforms proposed during the Obama administration", "section": "Section::::Cost overview.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 501, "text": "Increasing healthcare costs also contribute to wage stagnation, as corporations pay for benefits rather than wages. Bloomberg reported in January 2013: \"If there’s a consensus among health economists about anything, it’s that employer-provided health benefits come out of wages. If health insurance were cheaper, or the marketplace were structured so that most people bought health coverage for themselves rather than getting it with their jobs, people would be paid more and raises would be higher.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15037", "title": "Income", "section": "Section::::Income growth.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 311, "text": "Income per capita has been increasing steadily in most countries. 
Many factors contribute to people having a higher income, including education, globalisation and favorable political circumstances such as economic freedom and peace. Increases in income also tend to lead to people choosing to work fewer hours.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "245481", "title": "Job rotation", "section": "Section::::Goals.:Why is job rotation beneficial?\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 241, "text": "Some employees are paid more for they are presenting that they are worth a greater amount since they can perform more than one job function and thus makes a higher incentive for more employees to be able to perform better in the workplace. \n", "bleu_score": null, "meta": null } ] } ]
null
eudqsl
Flairs, posters, lurkers, lend me your ears! I come to praise our NEW MOD!
[ { "answer": "Welcome to ~~best-paid~~ ~~most-respected~~ ~~most pleasant~~ ~~least stressful~~ ~~highest-status~~ a job on the internet!", "provenance": null }, { "answer": "Thank you so much for the warm welcome, u/hannahstohelit and the rest of the mod team! I'm excited to lend my hand in the effort to clean up the internet and let everyone know where all the comments have gone!", "provenance": null }, { "answer": "All hail the glorious ascendant /u/SarahAGilbert! Welcome to the storied and legendary ranks of the moderators, and long may you reign over this majestic community.", "provenance": null }, { "answer": "Gloria al nostro moderatore!!! May the stroke of your comment removals be fair and the swing of your banhammer be just!", "provenance": null }, { "answer": "Добро пожаловать в команду! Welcome, welcome, welcome!", "provenance": null }, { "answer": "On behalf of the lurking majority that come to read, thank you for sharing your insightful work, and thanks to all of you moderators for your stewardship. You've made it one of the most interesting and polite corners of the internet.", "provenance": null }, { "answer": "Welcome! Hals- und Beinbruch!", "provenance": null }, { "answer": "Welcome to our newest ~~slave~~ ~~unpaid intern~~ moderator! I hope you're ready for the massive influx of ~~wealth~~ ~~karma~~ interactions with neo-Nazis you'll be receiving.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2063083", "title": "List of Teen Titans characters", "section": "Section::::Other villains.:Mad Mod.\n", "start_paragraph_id": 210, "start_character": 0, "end_paragraph_id": 210, "end_character": 387, "text": "Mad Mod is a psychedelic red-headed British villain with the mannerisms of a strict schoolmarm, whose root source of power comes from his ruby-tipped cane. It is later revealed that Mod is actually an old man who is given to the use of holograms of his younger self. 
He is also formidable for his use of hypnotic suggestion which has a stupefying and lobotomizing effect on its victims.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1628253", "title": "Mad Mod", "section": "Section::::In other media.:Television.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 3466, "text": "BULLET::::- Mad Mod appeared in the \"Teen Titans\" TV series voiced by Malcolm McDowell. Mad Mod has no superpowers of his own, but he is a master of technological trickery such as robots and holographic projectors, which he controls with a ruby-handled cane. His ability to produce illusions resulted in surreal, 1960s-styled landscapes, and allowed \"Moddie\" to appear in several different forms; even appearing stylized in the manner of God as he appeared in \"Monty Python and the Holy Grail\", or becoming a round Blue Meanie-esque figure. In one circumstance, he was able to use such a cane to drain the youth from someone, making them old and him young again. He has an English accent and is utterly anglophilic. Moddie, as he often calls himself, tends to view the Titans as rebellious \"snots\", and claims that they show no respect for their elders. He has an odd habit of calling people \"my duckies\". He first appears in \"Mad Mod\", where he kidnaps the Titans, frequently calls them, \"My Duckies\", and places them in a psychedelic \"school\". He attempts to \"teach them to behave\" through hypnosis, using methods similar to the Ludovico technique (used in \"A Clockwork Orange\", which McDowell starred in). When his initial attempts fail, he leads them on a Scooby-Doo-like chase through a Yellow Submarine-esque maze. The Titans escape when Robin realizes that Mad Mod is as fake as the rest of their surroundings; he thus abandons his attempts to capture him in favor of searching for flaws in the illusion. 
He quickly notices one — Mod's weapons have made an intriguing hole in the backdrop, which leads into the illusion's internal works. This enables him to make his way to the control room where he confronts the real Mad Mod — a sickly-looking old man using an advanced computer to control the whole school and a hologram of his younger self. Robin, of course, has no trouble defeating him. In \"Revolution\", Mad Mod crashes the Independence Day celebration claiming the American Revolution was a hoax. He then remakes the entire city in the image of Old England by using hypno-screens to control the population and giant illusions to change the look of the entire city to that of Merry Old London, claiming that \"the United States belongs to England again\". Mad Mod also kidnaps Robin and uses his cane to drain his youth, reducing Robin to a weak and helpless old man, while Mad Mod becomes his younger self again. The other Titans are initially prevented from reaching Mad Mod and freeing Robin by various large robots modeled after the Coldstream Guards and Mad Mod anticipating their plans. Later, they succeed in getting the cane from him. Robin then uses it to reverse the aging effects. He breaks the cane, thus ending Mad Mod's rule over the city by deactivating his equipment. The Titans then chase after Mad Mod. Mad Mod's aged form made a cameo appearance in \"The Lost Episode\" as one of the audience members in the orchestra and is last seen fleeing when Punk Rocket strikes. He later has several small appearances as a member of the Brotherhood of Evil where he is somehow young again. In \"Revved Up\", Mad Mod was seen taking part in Ding Dong Daddy's race. He was last seen fighting in \"Titans Together\", where he was blown off his feet by Beast Boy's \"Tyrannosaurus\" form. He was briefly possessed by Jericho and then was crushed by Overload. Jericho then left his body. 
He was flash-frozen along with the rest of the Brotherhood in the end.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1628253", "title": "Mad Mod", "section": "Section::::In other media.:Film.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 300, "text": "Mad Mod makes a silent cameo appearance in the \"Teen Titans Go!\" theatrical film \"Teen Titans Go! To the Movies\". In the film, as many villains including Control Freak, Mad Mod appears strapped to a light signal which forms Robin's name in the sky during Robin's musical number \"My Superhero Movie\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "810633", "title": "Mod (subculture)", "section": "Section::::History 1958-1969.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 534, "text": "George Melly wrote that mods were initially a small group of clothes-focused English working class young men insisting on clothes and shoes tailored to their style, who emerged during the modern jazz boom of the late 1950s. Early mods watched French and Italian art films and read Italian magazines to look for style ideas. They usually held semi-skilled manual jobs or low grade white-collar positions such as a clerk, messenger or office boy. According to Hebdige, mods created a parody of the consumer society that they lived in. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "48225201", "title": "Long War (mod)", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 621, "text": "The mod was developed by Long War Studios, a team that came to include four core members, with assistance from 29 contributors, 20 voice actors, and three members of Firaxis Games, including the developer of \"Enemy Unknown\" and \"Enemy Within\". 
According to one of the mod's core developers, Amineri, the mod started as a series of changes to the base game's configuration file, and grew more expansive as the team's capabilities grew. By the end of the mod's development, the team was working directly with the Unreal Development Kit, and had created a Java-based tool to help manage the changes that the mod was making.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26277696", "title": "Rhye's and Fall of Civilization", "section": "Section::::Reception.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 325, "text": "Due to the mod's high popularity, \"Rhye's and Fall of Civilization\" has had a number of fan-created mods that make changes to the mod itself. (\"modmods\") These serve a variety of functions, from lengthening the number of turns, to totally converting the mod, with entirely new units, buildings, civilizations, and tech tree.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3201413", "title": "Mod DB", "section": "Section::::Mod of the Year.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 409, "text": "Mod DB's Mod of the Year competition, the 'Golden Spanner' awards, aim to set the industry standard in awarding inventive and high-quality mods. Mods are chosen via a community vote and are then reviewed by staff to produce the final list of winners. The competition aims to encourage all fields of modding, with different categories such as graphics and gameplay, as well as a traditional 'best mod' winner.\n", "bleu_score": null, "meta": null } ] } ]
null
31a5rw
What exactly happens when cake batter turns into fluffy, moist cake?
[ { "answer": "Cake batter includes two chemicals, cream of tartar (or, to give it the chemist's name, tartaric acid) and baking soda (sodium bicarbonate). When mixed together dry, we call it 'baking powder', when sold mixed in with the flour, it becomes 'self-raising flour'. When mixed with water, these two chemicals react (slowly) to create salt and a gas, carbon dioxide. This causes bubbles in the batter. The first stages of baking help here, because, like most chemical reactions, it becomes faster at higher temperatures.\n\nThe batter contains some other chemicals too - mostly a protein, gluten, from the flour. As it cooks, different pieces (molecules) of gluten bind together, forming a strong framework. This traps the bubbles in place, making the light, fluffy, edible sponge that we call 'cake'.", "provenance": null }, { "answer": "There are several separate mechanisms that all produce the effect of getting bubbles of gas into the batter which the heat of baking 'sets' into a sponge-like structure. As mentioned above, yeast and baking powder both do this in different ways, yeast by the action of living organisms and baking powder from a chemical reaction to moisture.\n\nA third way to get the same result is used in sponge cake purely by the mechanical introduction of air into the mix prior to baking. Eggs and egg whites can be beaten until they are swollen with air bubbles and many times their original volume. This is then mixed with flour and sugar, and the heat from baking makes the tiny bubbles expand and sets the mixture. A similar process works by 'fluffing' a dry mixture before baking by lifting and sprinkling the mixture to separate the particles of fat and flour and introduce air. 
This alone can have quite a strong effect on how much the final baked item swells during baking, but care has to be taken whilst rolling and cutting (eg scones) to not compress the trapped air out of the dough.\n\nOther chemical mixtures are sometimes used such as soda (bicarbonate of soda) but this sometimes needs to be activated by an acidic liquid. Cream of tartar and bicarb are sometimes used on their own, but I don't fully understand how they work without the other chemical to react with. Anyone know?", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "246992", "title": "Angel food cake", "section": "Section::::Molecular and structural composition.:Sugar.\n", "start_paragraph_id": 45, "start_character": 0, "end_paragraph_id": 45, "end_character": 263, "text": "When the cake has finished baking, it should have a golden brown color on the exposed area. This is due to Maillard browning reactions. If the cake bakes for too long, more moisture will be removed and the texture will turn out dry, rough, and potentially burnt.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "57572", "title": "Cake", "section": "Section::::Cooking.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 440, "text": "A cake can fall, whereby parts of it sink or flatten, when baked at a temperature that is too low or too hot, when it has been underbaked and when placed in an oven that is too hot at the beginning of the baking process. The use of excessive amounts of sugar, flour, fat or leavening can also cause a cake to fall. 
A cake can also fall when subjected to cool air that enters an oven when the oven door is opened during the cooking process.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32839492", "title": "Donauwelle", "section": "Section::::Preparation.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 578, "text": "The batter is a pound cake, a cake made of equal amounts by weight of butter, flour, eggs and sugar, which is then divided into two parts, one of which is colored with cocoa. The two batters are spread in layers onto the baking sheet, the chocolate batter above the plain batter, before the top is strewn with sour cherries. During baking, the cherries sink to the bottom of the cake, causing the wavy pattern. After the cake has cooled it is decorated with a thick layer of buttercream and iced with a chocolate glaze which may then be ornamented in a wavy manner with a fork.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "246992", "title": "Angel food cake", "section": "Section::::Molecular and structural composition.:Flour.\n", "start_paragraph_id": 37, "start_character": 0, "end_paragraph_id": 37, "end_character": 833, "text": "The baking process causes the batter to expand and go from a liquid to a solid foam. The proteins will not start to denature until the temperature reaches around 158 °F. During this rise in temperature the air bubbles will either expand, coalesce, or break. An egg white foam will continue to expand uniformly until the internal temperature reaches 176 – 185 °F. Based on the ideal gas law, as the temperature increases, the volume of the air bubbles will expand. The temperature will continue to rise, causing the cake to expand at different rates and egg white proteins will gradually denature. Some of the egg white proteins will start to denature and coagulate at around 135 °F. This establishes the setting of the foam structure. 
By the time the temperature reaches 180 °F, all of the egg white proteins will have set in place.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34507065", "title": "Cake pop", "section": "Section::::Preparation.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 540, "text": "Once the cake has been baked, or when leftovers from an existing cake have been collected, it is crumbled into pieces. These crumbs are mixed into a bowl of frosting and the resulting mixture is shaped into balls, cubes or other shapes. Each ball is attached to a lollipop stick dipped in melted chocolate, and put in the fridge to chill. Once the mixture solidifies, it is dipped in melted chocolate to form a hard shell, and decorated with sprinkles or decorative sugars. The cake balls can be frozen to speed the solidification process.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "246992", "title": "Angel food cake", "section": "Section::::Molecular and structural composition.:Flour.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 517, "text": "The flour plays an important role in the texture, structure, and elasticity of an angel food cake. Minimal folding of the flour allows cell walls to form when it comes in contact with the egg protein foam and sugar mixture. If the batter is over-mixed, the egg white proteins may coagulate causing the bubbles to break during baking, or the cell walls may become too rigid, lacking elasticity. This would reduce the volume and result in a coarse texture. 
However, if the batter is under-mixed, a weak foam will form.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15108730", "title": "Pouding chômeur", "section": "Section::::Description.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 366, "text": "The \"pouding chômeur\" is a basic cake batter onto which a hot syrup or caramel is poured before baking. The cake then rises through the liquid which settles at the bottom of the pan, mixing with the batter and creating a distinct layer at the bottom of the dish. The syrup or caramel can be made from brown sugar, white sugar, maple syrup or a combination of these.\n", "bleu_score": null, "meta": null } ] } ]
null
3svhtz
why aren't siblings born with the same dna?
[ { "answer": "You have 23 pairs of chromosomes in your body. One of each pair comes from your mom, the other from your dad. This means there is only about a 0.5^23 chance of getting the same set of chromosomes as a sibling from each parent, and that's not even factoring in recombination (bits of DNA switching around).", "provenance": null }, { "answer": "Not all of it. A child only contains half the DNA each parent has, with two parents combining to make a complete set. The best way to describe this is in terms of meiosis, the production of gametes (sex cells) in which the chromosome number (I am assuming you know basic Biology, if not ask questions) halves.\n\nSo, most humans have 46 chromosomes. These chromosomes are in pairs, with 23 pairs, each pair containing one from the mother and another from the father. Each chromosome in these pairs controls the same traits.\n\nSo, during meiosis, the chromosome pairs split apart into different cells to form sex cells. However, the nature of this splitting is random: chromosome pair one could have the mother's chromosome go to one side, while chromosome pair two could have either the mother's or the father's chromosome go to that side. This is called independent assortment, since all the chromosomes arrange themselves independently of other chromosomes (except their pair). This splitting is random, and generates 2^23 types of combinations for a single gamete to be produced, which is about 8 million. For the production of the two cells needed to make up a child, it is 8 million squared, or about 70 trillion. Seventy trillion different combinations, and that is ignoring the other DNA randomization processes. 
Factor in the others, you get numbers in the trillions and quadrillions of combinations, and even though some are more likely to occur than others, the chance is still tiny that two siblings will be produced to be exactly the same in DNA.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "352169", "title": "Sibling", "section": "Section::::Types.:Half.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 429, "text": "Theoretically, there is a chance that they might not share genes. This is very rare and is due to there being a smaller possibility of inheriting the same chromosomes from the shared parent. However, the same is also theoretically possible for full siblings, albeit (comparatively) much less likely. Because of the formation of Chiasma in late prophase II (cross-over events), both previous statements are generally impossible. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "144757", "title": "Histocompatibility", "section": "Section::::Role in Transplantation.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 429, "text": "Due to the inherited nature of HLA genes, family members are more likely to be histocompatible. The odds of a sibling having received the same haplotypes from both parents is 25%, while there is a 50% chance that the sibling would share just one haplotype and a 25% chance they would share neither. However, variability due to crossing over, haplotypes may rearrange between generations and siblings may be intermediate matches.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "352169", "title": "Sibling", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 326, "text": "Identical twins share 100% of their DNA. 
Full siblings are first-degree relatives and, on average, share 50% of their genes out of those that vary among humans, assuming that the parents share none of those genes. Half-siblings are second-degree relatives and have, on average, a 25% overlap in their human genetic variation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6480091", "title": "X-linked hypophosphatemia", "section": "Section::::Genetics.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 478, "text": "As the X chromosome is one of the sex chromosomes (the other being the Y chromosome), X-linked inheritance is determined by the sex of the parent carrying a specific gene and can often seem complex. This is because, typically, females have two copies of the X-chromosome and males have only one copy. The difference between dominant and recessive inheritance patterns also plays a role in determining the chances of a child inheriting an X-linked disorder from their parentage.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6078840", "title": "X-linked dominant inheritance", "section": "Section::::Genetics.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 399, "text": "inheritance is determined by the sex of the parent carrying a specific gene and can often seem complex. This is due to the fact that, typically, females have two copies of the X-chromosome, while males have only one copy. 
The difference between dominant and recessive inheritance patterns also plays a role in determining the chances of a child inheriting an X-linked disorder from their parentage.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23128953", "title": "Bazex–Dupré–Christol syndrome", "section": "Section::::Genetics.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 466, "text": "inheritance is determined by the gender of the parent carrying a specific gene and can often seem complex. This is because, typically, females have two copies of the X-chromosome, while males have only one copy. The difference between dominant and recessive inheritance patterns also plays a role in determining the chances of a child inheriting an X-linked disorder from their parentage.A locus of Xq24-q27 has been described. However, no gene has been identified.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23798230", "title": "Lujan–Fryns syndrome", "section": "Section::::Pathophysiology.:Genetics.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 389, "text": "inheritance is determined by the gender of the parent carrying a specific gene and can often seem complex. This is because, typically, females have two copies of the X-chromosome, while males have only one copy. The difference between dominant and recessive inheritance patterns also plays a role in determining the chances of a child inheriting an X-linked disorder from their parentage.\n", "bleu_score": null, "meta": null } ] } ]
null
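The 50% / 25% figures for full and half siblings quoted in the answer above follow from one coin flip per parent: each parent transmits one of their two chromosome copies at random. A minimal Monte Carlo sketch of that model (a hypothetical single-locus simulation, ignoring linkage and assuming the three parents are unrelated; all function names here are made up for illustration):

```python
import random

random.seed(0)

def child(mother, father):
    """Each parent passes one of their two allele copies at random."""
    return (random.choice(mother), random.choice(father))

def mean_shared(trials, half_sibs=False):
    """Average fraction of allele copies shared identically by descent."""
    shared = 0
    for _ in range(trials):
        mom = ("m1", "m2")            # mother's two copies, labelled uniquely
        dad_a = ("f1", "f2")          # father of the first child
        dad_b = ("g1", "g2") if half_sibs else dad_a
        first = child(mom, dad_a)
        second = child(mom, dad_b)
        shared += (first[0] == second[0]) + (first[1] == second[1])
    return shared / (2 * trials)

print(mean_shared(100_000))                  # close to 0.50 (full siblings)
print(mean_shared(100_000, half_sibs=True))  # close to 0.25 (half-siblings)
```

With enough trials the full-sibling estimate settles near 0.5 and the half-sibling estimate near 0.25, matching the first-degree and second-degree overlaps stated above.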
d326cy
how do computers transmit and translate video and pictures? does each picture get boiled down to a pixel level with a binary code for each pixel? what about video? it blows my mind that computers can do this.
[ { "answer": "Every pixel has a colour represented by a binary code. Commonly a byte (8 bits) is used for each of the three primary colours of red, green, and blue, so you need three bytes (24-bits) per pixel. That gives 16 million possible different colours.\n\nPictures are then compressed which allows a big reduction in the number of bytes required. Pixels tend to be the same colour as adjoining pixels and encoding that in an image file allows it to be much smaller than if you described every pixel separately. Usually \"lossy\" compression is used which can allow the file to be much smaller, at the cost of not looking exactly the same as the original. There's a trade-off between file size and quality.\n\nVideo is just a sequence of pictures called frames. It can be compressed more because most frames are almost exactly the same as the previous frames. The compressor might only send one self-contained frame every second and the other frames are described based on how they're different from preceding and/or following frames. If one part of the frame is moving in front of the rest, the compressor can describe which parts of the frame have moved and how far, instead of having to resend the pixels.", "provenance": null }, { "answer": "Yes, in theory the computer takes each pixel, gives it a number to describe its color and then goes on to the next, so that if you have a picture a hundred pixels by a hundred pixels, you would transmit 10,000 numbers with a color for each pixel. The computer breaks up the color numbers into bytes for this. If you have only two colors like black and white you can fit 8 of those numbers into each byte, if you want 24-bit \"true color\" you end up needing 3 bytes for each pixel.\n\nThis is what happens with Bitmaps at least and some relatively rare forms of video.\n\nIn practice, pictures and videos like this can get really really big, so you use compression. 
This means that you don't transmit each single pixel, but use some math to describe where and when pixels of which color appear, in a way that avoids unnecessary repetition and shortens things a bit. The exact nature of the compression can differ from format to format.\n\nHowever it all boils down to the idea of giving each pixel a number that represents its color and transmitting a string of numbers that allows the computer to display pixels in the same (or at least a similar) way.\n\nThere are some exceptions however: vector graphics. These are not so much instructions to tell the computer what color each pixel looks like, but more instructions on how to draw the picture. It may contain an instruction like \"make a thin diagonal black line from the upper left corner to lower right corner of the picture\" and the computer has to figure out which pixels it has to make black itself.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "97923", "title": "Binary image", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 321, "text": "A binary image can be stored in memory as a bitmap, a packed array of bits. A 640×480 image requires 37.5 KiB of storage. Because of the small size of the image files, fax machine and document management solutions usually use this format. Most binary images also compress well with simple run-length compression schemes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "69770", "title": "Netpbm format", "section": "Section::::File format description.:PPM example.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 243, "text": "The P6 binary format of the same image represents each color component of each pixel with one byte (thus three bytes per pixel) in the order red, green, then blue. 
The file is smaller, but the color information is difficult to read by humans.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "920702", "title": "Binary file", "section": "Section::::Viewing.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 1216, "text": "If a binary file is opened in a text editor, each group of eight bits will typically be translated as a single character, and the user will see a (probably unintelligible) display of textual characters. If the file is opened in some other application, that application will have its own use for each byte: maybe the application will treat each byte as a number and output a stream of numbers between 0 and 255—or maybe interpret the numbers in the bytes as colors and display the corresponding picture. Other type of viewers (called 'word extractors') simply replace the unprintable characters with spaces revealing only the human-readable text. This type of view is useful for quick inspection of a binary file in order to find passwords in games, find hidden text in non-text files and recover corrupted documents. It can even be used to inspect suspicious files (software) for unwanted effects. For example, the user would see any URL/email to which the suspected software may attempt to connect in order to upload unapproved data (to steal). If the file is itself treated as an executable and run, then the operating system will attempt to interpret the file as a series of instructions in its machine language.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "68466", "title": "Gamma correction", "section": "Section::::Windows, Mac, sRGB and TV/video standard gammas.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 643, "text": "In most computer display systems, images are encoded with a gamma of about 0.45 and decoded with the reciprocal gamma of 2.2. 
A notable exception, until the release of Mac OS X 10.6 (Snow Leopard) in September 2009, were Macintosh computers, which encoded with a gamma of 0.55 and decoded with a gamma of 1.8. In any case, binary data in still image files (such as JPEG) are explicitly encoded (that is, they carry gamma-encoded values, not linear intensities), as are motion picture files (such as MPEG). The system can optionally further manage both cases, through color management, if a better match to the output device gamma is required.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3123410", "title": "JBIG2", "section": "Section::::Technical details.:Text image data.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 694, "text": "Text coding is based on the nature of human visual interpretation. A human observer cannot tell the difference between two instances of the same characters in a bi-level image even though they may not exactly match pixel by pixel. Therefore, only the bitmap of one representative character instance needs to be coded instead of coding the bitmaps of each occurrence of the same character individually. For each character instance, the coded instance of the character is then stored into a \"symbol dictionary\". There are two encoding methods for text image data: pattern matching and substitution (PM&S) and soft pattern matching (SPM). These methods are presented in the following subsections.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34467", "title": "ZX Spectrum", "section": "Section::::Hardware.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 704, "text": "Video output is through an RF modulator and was designed for use with contemporary television sets, for a simple colour graphic display. 
Text can be displayed using 32 columns × 24 rows of characters from the ZX Spectrum character set or from a set provided within an application, from a palette of 15 shades: seven colours at two levels of brightness each, plus black. The image resolution is 256×192 with the same colour limitations. To conserve memory, colour is stored separate from the pixel bitmap in a low resolution, 32×24 grid overlay, corresponding to the character cells. In practice, this means that all pixels of an 8x8 character block share one foreground colour and one background colour.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "68466", "title": "Gamma correction", "section": "Section::::Methods to perform display gamma correction in computing.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 746, "text": "BULLET::::- The rendering software writes gamma-encoded pixel binary values directly to the video memory (when highcolor/truecolor modes are used) or in the CLUT hardware registers (when indexed color modes are used) of the display adapter. They drive Digital-to-Analog Converters (DAC) which output the proportional voltages to the display. For example, when using 24-bit RGB color (8 bits per channel), writing a value of 128 (rounded midpoint of the 0–255 byte range) in video memory it outputs the proportional voltage to the display, which it is shown darker due to the monitor behavior. Alternatively, to achieve intensity, a gamma-encoded look-up table can be applied to write a value near to 187 instead of 128 by the rendering software.\n", "bleu_score": null, "meta": null } ] } ]
null
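Both answers above describe the same two-step idea: give every pixel a binary colour code (three bytes per pixel for 24-bit RGB), then compress by exploiting the fact that neighbouring pixels tend to share a colour. A minimal Python sketch of that idea, assuming a toy run-length scheme (real formats such as PNG, JPEG, or MPEG are far more elaborate):

```python
# Toy example: a 24-bit RGB row stored as 3 bytes per pixel, then
# run-length encoded. Real image/video codecs are far more advanced.

def encode_rgb(pixels):
    """Flatten (r, g, b) tuples into bytes, 3 bytes per pixel."""
    return bytes(channel for pixel in pixels for channel in pixel)

def rle_compress(data):
    """Collapse runs of identical bytes into (count, value) pairs."""
    runs = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        runs.append((j - i, data[i]))
        i = j
    return runs

# Three identical red pixels followed by one blue pixel:
row = [(255, 0, 0), (255, 0, 0), (255, 0, 0), (0, 0, 255)]
raw = encode_rgb(row)          # 12 bytes, one per colour channel
print(len(raw), rle_compress(raw))
```

Video codecs push the same idea across time: instead of resending every frame, they mostly encode how each frame differs from its neighbours.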
3ras3t
why does taking (something) to the negative power give us 1/(something)?
[ { "answer": "If you can realise that multiplication is the inverse of division, then it is pretty easy.\n\nFor positive powers:\n\n2^1 =1x2=2\n\n2^2 =1x2x2=4\n\n2^3 =1x2x2x2=8\n\nand so on. For a positive power, you multiply the number.\n\nA negative power has the same pattern, except with division, so:\n\n2^-1 =1/2\n\n2^-2 =1/2/2=1/4\n\n2^-3 =1/2/2/2=1/8\n\nand so on.", "provenance": null }, { "answer": "The fact that *a*^(-*n*) = 1/(*a*^(*n*)) is actually a matter of definition. However, any other definition would make little sense, and here's why:\n\nIf *a* is some number, the notation *a*^(2) is just a convenient way of writing *a*·*a*. In general, you have \n*a*^(2) = *a*·*a* \n*a*^(3) = *a*·*a*·*a* \n*a*^(4) = *a*·*a*·*a*·*a* \nand so on.\nThis is a definition we make: We define *a*^(*n*), where *n* is a positive whole number, to be *a*^(*n*) = *a*·*a*·*a*···*a*, where *a* occurs a total of *n* times in the product. (*a* can be any number, it doesn't have to be an integer.) In particular, this must mean that \n*a*^(1) = *a*.\nNow, consider something like *a*^(3)·*a*^(5). We know that *a*^(3) = *a*·*a*·*a* and *a*^(5) = *a*·*a*·*a*·*a*·*a*, so \n*a*^(3)·*a*^(5) = *a*·*a*·*a* · *a*·*a*·*a*·*a*·*a* = *a*^(8) (=*a*^(3+5)). \nSimilarly, \n*a*^(7)·*a*^(2) = *a*·*a*·*a*·*a*·*a*·*a*·*a* · *a*·*a* = *a*^(9) (=*a*^(7+2)). \nAs you can hopefully see, if *n* and *m* are positive whole numbers and you multiply *a*^(*n*) by *a*^(*m*), you'll end up with *n*+*m* factors *a* and so the result is *a*^(*n*+*m*). This is a rule we've thus proven (kind of proven anyway) for positive whole numbers: \n**RULE**: *a*^(*n*)·*a*^(*m*) = *a*^(*n*+*m*)\n\nNow, what should *a*^(0) be? We *could* define it to be whatever we feel like, but it's best to try to make the definition as convenient as possible, so why not try to define it to follow our rule *a*^(*n*)·*a*^(*m*) = *a*^(*n*+*m*)? Well if it is to abide by the rule, we need to have \n*a*^(*n*)·*a*^(0) = *a*^(*n*+0) = *a*^(*n*). 
\nThis actually means that we must have \n*a*^(0)=1 \nfor our rule to keep holding if *n* or *m* is zero. (Since *a*^(*n*)·*a*^(0) = *a*^(*n*) can only be true if *a*^(0)=1.)\n\nFinally, let's move on to negative whole numbers. Again, we *could* define things in a lot of ways, but again, we insist that our rule for exponents should remain valid for negative integers too. This means that we must have \n*a*^(*n*)·*a*^(-*n*) = *a*^(*n*+(-*n*)) = *a*^(0) = 1. \nAt this point you can divide through by *a*^(*n*) on both sides and you end up with \n*a*^(-*n*) = 1/(*a*^(*n*)). \n\nSo in other words: If you want the rule *a*^(*n*)·*a*^(*m*) = *a*^(*n*+*m*) to be true for all whole numbers (and not just the positive ones) you must have *a*^(0) = 1 and *a*^(-*n*) = 1/(*a*^(*n*)).\n\nA couple of remarks: \n*0*^(-*n*) is not defined (unless *n* itself is negative). This is because the equation *a*^(*n*)·*a*^(-*n*) = 1 becomes 0^(*n*)·0^(-*n*) = 0·0^(-*n*) = 1 if *a*=0, and that's absurd. There's just nothing you can multiply by zero to get one. Sure, we could define 0^(-*n*) to be something else, but doing so would then break the rule we've tried to adapt to. \nI've only discussed whole numbers. If you want to consider rational numbers (fractions of whole numbers) or real numbers (any number that can be represented with decimals) as powers, the same argument still applies; it's just a little harder to define *a*^(*x*) when *x* is no longer an integer. (For instance, what would *a*^(√2) or *a*^(π) mean?) However, as soon as you do define *a*^(*x*), you still want it to fit the rule we established and so the same argument will lead to *a*^(-*x*) = 1/(*a*^(*x*)). 
Just defining what *a*^(*x*) means for rational or real numbers *x* in general is pretty cumbersome, but the same argument goes through as soon as you do.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "22757157", "title": "French and Raven's bases of power", "section": "Section::::Bases of power.:Referent power.:Negative.\n", "start_paragraph_id": 49, "start_character": 0, "end_paragraph_id": 49, "end_character": 260, "text": "Referent power in a negative form produces actions in opposition to the intent of the influencing agent, this is the result from the agent's creation of cognitive dissonance between the referent influencing agent and the target's perception of that influence.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1426315", "title": "Absence of good", "section": "", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 505, "text": "If the metaphor can be extended, and good and evil share the same asymmetry as light and darkness, evil can have no source, cannot be projected, and, of itself, can offer no resistance to any source of good, no matter how weak or distant. Then, goodness cannot be actively opposed, and power becomes a consequence of benevolence. However, evil is the default state of the universe, and good exists only through constant effort; any lapse or redirection of good will apparently create evil out of nothing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "99491", "title": "Exponentiation", "section": "Section::::Limits of powers.\n", "start_paragraph_id": 247, "start_character": 0, "end_paragraph_id": 247, "end_character": 277, "text": "On the other hand, when is an integer, the power is already meaningful for all values of , including negative ones. 
This may make the definition obtained above for negative problematic when is odd, since in this case as tends to through positive values, but not negative ones.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3558732", "title": "Negative and positive rights", "section": "Section::::Overview.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 806, "text": "Under the theory of positive and negative rights, a negative right is a right \"not to be\" subjected to an action of another person or group—a government, for example—usually in the form of abuse or coercion. As such, negative rights exist unless someone acts to \"negate\" them. A positive right is a right \"to be\" subjected to an action of another person or group. In other words, for a positive right to be exercised, someone else's actions must be \"added\" to the equation. In theory, a negative right forbids others from acting against the right holder, while a positive right obligates others to act with respect to the right holder. In the framework of the Kantian categorical imperative, negative rights can be associated with perfect duties while positive rights can be connected to imperfect duties.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6620973", "title": "BKL singularity", "section": "Section::::Generalized homogeneous solution.:Kasner solution.\n", "start_paragraph_id": 98, "start_character": 0, "end_paragraph_id": 98, "end_character": 247, "text": "However, the presence of 1 negative power among the 3 powers \"p\", \"p\", \"p\" results in appearance of terms from \"P\" with an order greater than \"t\". 
If the negative power is \"p\" (\"p\" = \"p\" 0), then \"P\" contains the coordinate function λ and become\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "520196", "title": "Positivity effect", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 555, "text": "The positivity effect is the ability to constructively analyze a situation where the desired results are not achieved; but still obtain positive feedback that assists our future progression. When a person is considering people they like (including themselves), the person tends to make situational attributions about their negative behaviors and dispositional attributions about their positive behaviors. The reverse is true for people that the person dislikes. This is because of the dissonance between liking a person and seeing them behave negatively.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12252139", "title": "Electrostatic separator", "section": "", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 294, "text": "An electric charge can be positive or negative — objects with a positive charge repel other positively charged objects, thereby causing them to push away from each other, while a positively charged object would attract to a negatively charged object, thereby causing the two to draw together. \n", "bleu_score": null, "meta": null } ] } ]
null
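The argument in the answers above, that repeated division defines negative powers and that the rule a^n · a^m = a^(n+m) then forces a^0 = 1 and a^(-n) = 1/a^n, can be checked with a few lines of Python (an illustrative sketch built from nothing but multiplication and division):

```python
def power(a, n):
    """a**n for any integer n, built from repeated * (n > 0) or / (n < 0)."""
    result = 1.0
    for _ in range(abs(n)):
        result = result * a if n > 0 else result / a
    return result

assert power(2, 3) == 8.0                  # 1*2*2*2
assert power(2, -3) == 1 / power(2, 3)     # negative power == reciprocal
assert power(2, 0) == 1.0                  # empty product
# The rule a^n * a^m == a^(n+m) keeps holding across signs:
assert power(2, 5) * power(2, -2) == power(2, 5 + -2)
print("all identities hold")
```

The exact-equality checks work here because powers of 2 are represented exactly in floating point; for other bases the identities hold up to rounding.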
qyolp
the annoying sound in my ears when i get out of the shower.
[ { "answer": " > It's like there was a little, tiny flag in your ear which would wave very strongly with every move you make with your head.\n\nIf it's a clicking sound and seems related to swallowing, yawning, or breathing, you may want to ask a doctor about possible [Eustachian tube](_URL_1_) problems. That's a little tube inside your head that helps you \"pop\" your ears and equalize pressure.\n\nIf you mean some sort of ringing sound, then I'd suggest you look at [ELI5 : Tinnitus](_URL_0_).", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "7752055", "title": "Montana Meth Project", "section": "Section::::Television ads.:2005-2006: Directed by Tony Kaye.:Wave 1.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 427, "text": "BULLET::::- Bathtub - A teenage girl in her bathrobe talks on her cell phone while looking into her bathroom mirror. She says, \"yeah, my parents think I'm sleeping at your house\". She hangs up and gets into the shower. While showering, she looks down and sees a trickle of blood. She turns around and screams; there is a pockmarked, bleeding version of herself shivering at the bottom of the shower, who pleads, \"don't do it.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51635756", "title": "Diary of a Wimpy Kid: The Long Haul (film)", "section": "Section::::Plot.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 408, "text": "Going to sleep, Greg is annoyed by a loud noise created by the Beardo siblings, who playfully crash a cleaning cart into a wall, and storms out of the room. He confronts them but Brandi, the oldest sibling, purposely rolls the cart into their car, leaving a huge scratch. Just as Mr. Beardo comes out of his motel room, Brandi angrily blames Greg responsible and Mr. 
Beardo goes after him but he evades him.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13516328", "title": "Just Annoying!", "section": "Section::::Stories.:In the Shower with Andy.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 768, "text": "Andy wants to see what it feels like to have the shower full with water during the time Mr. and Mrs. Bainbridge are at Andy's house for dinner with his parents. With a lot of time on his hands, Andy seals up the door with a silicone gun from his Dad. Unfortunately, he accidentally breaks the hot tap, and ends up nearly drowning in cold water. The only way out is through the fan. He reaches up and pulls it, and is on the insulation patches as the water rises up the stall. With his rubber duck with him, a fiber in the vent pokes him. In temporary pain his loses the duck after being startled. He goes after the duck, but realizes quickly that the ceiling there is unsupported. The ceiling caves in and he finds himself lying legs spread on the dinner table, nude.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1001142", "title": "Repulsion (film)", "section": "Section::::Plot.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 474, "text": "One morning she runs a bath and walks away from it, causing it to overflow. After work, she returns home and notices the plate of uncooked rabbit. As she turns on a light, the wall underneath the switch cracks open. She locks herself in her room and again hears footsteps. This time, she hallucinates that a man breaks into her room and rapes her. She is awoken on the floor of the hallway by the phone ringing. She answers it, and Colin is on the other end. She hangs up. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17463777", "title": "Dusche", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 475, "text": "Dusche (German: \"Shower\") is a song by Farin Urlaub. It's the first single and fourteenth (and the last) track from his album \"Am Ende der Sonne\". It's about a paranoid man, who fears things in his house, thinking that they are conspiring to assassinate him. The man fights back and decides to burn everything down, when nothing else helps. The shower is the only one on his side. The man gradually grows more frenetic, until the end, where he is stabbed by his only friend.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6067897", "title": "Referred itch", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 912, "text": "Referred itch is the phenomenon in which a stimulus applied in one region of the body is felt as an itch or irritation in a different part of the body. The syndrome is relatively harmless, though it can be irritating, and healthy individuals can express symptoms. Stimuli range from a firm pressure applied to the skin – a scratch – to irritation or pulling on a hair follicle on the skin. The referred sensation itself should not be painful; it is more of an irritating prickle leading to the compulsion to scratch the area. The stimulus and referred itch are ipsilateral (the stimulus and the referred itch occur on the same side of the body). Also, because scratching or putting pressure on the referred itch does not cause the stimulus area to itch, the relationship between the stimulus and the referred itch is unidirectional. 
The itching sensation is spontaneous and can cease with continued stimulation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5605738", "title": "Leave 'Em Laughing", "section": "Section::::Plot.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 425, "text": "The scene opens in the duo's flat at night time. Stan complains that he has a toothache. Ollie goes to the bathroom to get him a hot-water bottle and keeps stepping on a tack which is lying around. When Stan gets the water bottle, the lid opens and the water pours out in the bed. The two of them make much noise and the landlord (Charlie Hall) comes in, telling them that they will have to leave first thing in the morning.\n", "bleu_score": null, "meta": null } ] } ]
null
52b4r1
Can a collapsing star have such great mass that the black hole formed completely absorbs the supernova and the star simply "goes dark?"
[ { "answer": "Yes, but it is not really due to the size of the star (given that the star is massive enough to undergo core collapse).\n\nOnce nucleosynthesis in the core of a star reaches iron-56, it begins to consume energy, rather than create it. If the core is large enough, electron degeneracy won't be able to support the core against the force of gravity and it will collapse catastrophically. During this process, a great deal of the gravitational potential energy is transferred to material that has \"rebounded\" from the core, which accelerates it away from the core. How exactly this happens is not understood at this time. \n\nWhat we do know is that in some cases the amount of energy transferred is enough to expel most of the outer layers of the star in a supernova. In others, so much energy is transferred that the star is completely destroyed, and no remnant black hole or white dwarf is left behind. In other cases, the energy transferred is not sufficient to push the outer layers to escape velocity and no supernova is observable. \n\n[Here](_URL_0_) under Current Models, Core Collapse is a nice chart giving a general overview of size/properties, type of supernova and remnant. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "51629229", "title": "N6946-BH1", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 474, "text": "One hypothesis is that the core of the star collapsed to form a black hole. The collapsing matter formed a burst of neutrinos that lowered the total mass of the star by a fraction of a percent. This caused a shock wave that blasted out the star's envelope to make it brighter. 
Contrary to the idea that black holes are usually formed after a supernova, N6946-BH1 has given evidence that a star may instead collapse directly into a black hole.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "212764", "title": "Superluminous supernova", "section": "Section::::Astrophysical models.:Collapsar model.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 826, "text": "A star with a core mass slightly below this level—in the range of —will undergo a supernova explosion, but so much of the ejected mass falls back onto the core remnant that it still collapses into a black hole. If such a star is rotating slowly, then it will produce a faint supernova, but if the star is rotating quickly enough, then the fallback to the black hole will produce relativistic jets. The energy that these jets transfer into the ejected shell renders the visible outburst substantially more luminous than a standard supernova. The jets also beam high energy particles and gamma rays directly outward and thereby produce x-ray or gamma-ray bursts; the jets can last for several seconds or longer and correspond to long-duration gamma-ray bursts, but they do not appear to explain short-duration gamma-ray bursts.
If the mass of the collapsing part of the star is below the Tolman–Oppenheimer–Volkoff (TOV) limit for neutron-degenerate matter, the end product is a compact star — either a white dwarf (for masses below the Chandrasekhar limit) or a neutron star or a (hypothetical) quark star. If the collapsing star has a mass exceeding the TOV limit, the crush will continue until zero volume is achieved and a black hole is formed around that point in space.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37690757", "title": "R136a2", "section": "Section::::Fate.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 727, "text": "It is thought that stars this massive can never lose enough mass to avoid a catastrophic end with the collapse of a large iron core. The result will be a supernova, hypernova, gamma-ray burst, or perhaps almost no visible explosion, and leaving behind a black hole. The exact details depend heavily on the timing and amount of mass loss, with current models not fully reproducing the distribution of stars and supernovae that we observe. The most massive stars in the local universe are expected to progress to hydrogen-free Wolf Rayet stars before their cores collapse, producing a type Ib or Ic supernova and leaving behind a black hole. Gamma ray bursts are only expected under unusual conditions, or for less massive stars\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11009033", "title": "Type II supernova", "section": "Section::::Core collapse.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 546, "text": "When the progenitor star is below about – depending on the strength of the explosion and the amount of material that falls back – the degenerate remnant of a core collapse is a neutron star. Above this mass, the remnant collapses to form a black hole. The theoretical limiting mass for this type of core collapse scenario is about . 
Above that mass, a star is believed to collapse directly into a black hole without forming a supernova explosion, although uncertainties in models of supernova collapse make calculation of these limits uncertain.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4650", "title": "Black hole", "section": "Section::::Formation and evolution.:Gravitational collapse.\n", "start_paragraph_id": 63, "start_character": 0, "end_paragraph_id": 63, "end_character": 468, "text": "If the mass of the remnant exceeds about (the Tolman–Oppenheimer–Volkoff limit), either because the original star was very heavy or because the remnant collected additional mass through accretion of matter, even the degeneracy pressure of neutrons is insufficient to stop the collapse. No known mechanism (except possibly quark degeneracy pressure, see quark star) is powerful enough to stop the implosion and the object will inevitably collapse to form a black hole.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "45260574", "title": "V4998 Sagittarii", "section": "Section::::Evolution.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 536, "text": "The star's high mass loss rate combined with its eruptions will strip off its hydrogen layers and expose a hot helium core. It will proceed to the Wolf–Rayet sequence. It will eventually start fusing heavy elements in its core, and when it develops a large iron core the star will collapse in on itself and explode as a type Ib or Ic supernovae. Depending on the amount of mass lost before the supernova explosion, the remnant will be a neutron star or black hole. A black hole is predicted for the most massive stars such as this one.\n", "bleu_score": null, "meta": null } ] } ]
null
ax54wr
How do white bloodcells know what to attack?
[ { "answer": "This is a fantastic question! Though, I will preface this answer with 1) it is extraordinarily complex and we are still learning so very much about the intricacies and signaling pathways the many different types of white blood cells use to recognize \"self\" from \"not self\" and how they attack, 2) I am just a lowly surgeon, not an immunologist so I may make some small mistakes that I would hope my Allergy and Immunology colleagues would expand upon, and 3) I'm going to try to make this as simple as possible so please ask more questions if you'd like clarification or more information on anything else I touch on!\n\nFirst, let's look into what is a white blood cell. In the bone marrow, pluripotent stem cells are constantly replicating and differentiating. What this means is a constant population of stem cells that have the ability to become a whole host of different types of cells in the blood are living in the bone marrow waiting for specific signals that tell them the body needs a certain type of cell now be it red blood cells, platelets (in the form of megakaryocytes), B-lymphocytes, neutrophils, eosinophils, etc. I won't go into the signaling pathways for the differentiation of the stem cells but there are some factors that act on the bone marrow, that we have been able to synthetically create too, like G-CSF (granulocyte colony stimulating factor) that we can give to patients with cancer or other types of diseases that reduce the number of granulocytes (a specific type of immune cell) that help increase their white blood cell count!\n\nWhen it comes to immunity and infection, there are two different pathways by which the body fights infection. The first is the innate immune system. This system consists of endogenous, non-cellular mechanisms that help fight infections. 
This includes things called \"complement\" which is a series of proteins that basically cover an antigen (something that activates immune cells) and can start to destroy it directly, and things called immunoglobulins which can cover and mark an antigen to be recognized by immune cells to be destroyed. The innate immune system also encompasses other functions but that is a little too specific for your question so I'll not go into detail unless asked later.\n\n & #x200B;\n\nYou also have the cellular immune system which encompasses all the different types of immune cells. These include, but are not limited to, lymphocytes (\"T\" and \"B\" cells), leukocytes (macrophages, neutrophils, eosinophils, basophils, etc.), APCs (antigen presenting cells such as dendritic cells), plasma cells (activated B cells), natural killer cells ... the list goes on. This part of the immune system is the cellular arm and each population of cells within this system has specific jobs. Neutrophils can be considered the workhorse and first line of defense against pathogens. Let's say you get a splinter in your finger and some bacteria is introduced into the dermis and starts a local infection. The cells where this infection is occurring will upregulate an enzyme known as cyclooxygenase and start to produce a type of molecule called prostaglandins. These start the inflammatory process. Inflammation is defined as the process by which rubor, tumor, calor, dolor, and loss of function occur -- redness, swelling, warmth, pain, and loss of function. These prostaglandins, as well as other inflammatory signals, are released into the dermis and cause upregulation (increased production) of other proteins; for this example, they upregulate little proteins that line the inside of the vasculature and help neutrophils grab, roll, and eventually stop on the vascular wall. There are a whole bunch of them and some are called selectins, integrins, and other CAMs. 
When these proteins are upregulated, the chances of passing neutrophils coming into contact with them increase. This process is called margination, rolling, adherence, and diapedesis. Once a neutrophil is stopped on the vascular wall surface in the area of inflammation, neutrophils rely on chemotaxis (sensing of inflammatory mediators) and follow this \"trail\" of inflammation to the source -- the infection. Once there, they are activated and essentially kamikaze themselves against the bacteria. We know this process colloquially as \"pus.\" Pus is just a giant mess of activated, dead neutrophils. This further increases the inflammatory process and more and more white blood cells are recruited.\n\nIn addition to neutrophils, macrophages undergo diapedesis through a similar method as neutrophils into the tissue where inflammation is occurring, and utilizing chemotaxis, can encompass a bacterium and engulf it. Once engulfed, the macrophage will release harmful enzymes that help \"kill\" bacteria or whatever pathogen is causing the infection. They also release a whole mess of inflammatory mediators called interleukins. There are many interleukins and they activate other responses in the body in a very sophisticated but complex way that I do not trust my own knowledge of for fear of making some grave mistakes. I also think that this is way above the level of detail you would like but I'll say it's extremely interesting but incredibly complex with active research in this area still occurring!\n\n & #x200B;\n\nThis would not be a good answer if I did not talk about lymphocytes. These are my personal favorite population of immune cells. These come in two flavors, \"T\" and \"B\" cells. Before my immunologist friends get their pitchforks out, I know NK cells are considered lymphocytes but I do not feel comfortable enough discussing their role in depth as it's been many years since I took I & I. First, let's talk about T cells cause I think they are very cool. 
\"T\" stands for \"thymocytes\" because they grow and mature in the thymus. When we are developing, T cells migrate from the bone marrow to the thymus where they undergo a very rigorous process called tolerance. Tolerance is a system of checkpoints that all T cells must pass that ensures each cell understands what \"self\" is and what \"not-self\" is. I believe up to 98% of T cells fail tolerance and are destroyed once they fail. Tolerance is a two-step process where first positive and then negative selection occurs. Positive selection is a check to make sure T cells can interact with a very important cell surface protein called MHC (major histocompatibility complex). Almost all cells in the body express MHC however there are two classes of them, MHC-I and MHC-II. All T cells must be able to interact with MHC and if one cannot, it is signaled for apoptosis (signaled, controlled cell death). If the T cell interacts with MHC-I then the T cell is further differentiated to a CD8+ (\"Cytotoxic\") T cell and if the T cell interacts with MHC-II it differentiates to a CD4+ (\"Helper\") T cell. Next is negative selection where the T cell must be able to recognize \"self\" and to not activate to \"self.\" If a T cell strongly interacts with its corresponding MHC protein (CD8+ with MHC-I and CD4+ with MHC-II) then the T cell is marked for apoptosis because it is reacting too strongly to \"self\" and can cause unchecked damage to our own body. Negative selection wants T cells that weakly interact with MHC so that a cell will check the MHCs it comes into contact with but will only strongly interact with MHCs from other organisms, not \"self\" MHCs. Then these cells are released into the blood stream and start their life checking MHC throughout the body looking for \"not self\" to become activated and signal whatever it is recognizing as \"not self\" for destruction.\n\n & #x200B;\n\nB cells mature in the bone marrow, which is why they are called \"B\" cells. 
These cells have a really cool receptor called the \"B cell receptor\" or BCR that rearranges its gene so many times that it can produce 3x10^(11) different combinations! That is an insane number of different, specific combinations that the BCR can recognize and cause B cell activation! It boggles my mind how incredibly adapted at recognizing ANYTHING the B cells are! Anyways, once the BCR is developed and the B cell enters circulation, it will become activated by either another activated B cell or by a CD4+ \"Helper\" T cell. Once activated within a lymphoid tissue (spleen or lymph node) the activated B cell turns into either a plasma cell or a memory B cell and starts producing tons and tons of immunoglobulins (IgM and IgG) that stick to the antigen (bacteria, fungus, virus, whatever) that initially activated the B cell and mark it for destruction.\n\n & #x200B;\n\nI hope this was informative and answered all questions you had. If you'd like clarification or more information please don't hesitate to ask! I wanted to touch on a lot of things but the simple answer to your question is these cells \"sense\" chemical signals produced by the antigen (\"not self\") thing in the body and either directly kill it or tag it with chemicals that allow other cells to notice it and kill it.\n\n & #x200B;\n\nAlso I just noticed what you said about platelets. Platelets are NOT immune cells. In fact, platelets come from very large cells called megakaryocytes and deal with coagulation (the body's ability to make blood change from a liquid to a solid). If you'd like more information about that let me know and I'd be happy to give you a crash course in coagulation!", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "334392", "title": "Horned lizard", "section": "Section::::Defenses.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 674, "text": "At least eight species (\"P. asio\", \"P. cornutum\", \"P. coronatum\", \"P. 
ditmarsi\", \"P. hernandesi\", \"P. orbiculare\", \"P. solare\", and \"P. taurus\") are also able to squirt an aimed stream of blood from the corners of the eyes for a distance of up to . They do this by restricting the blood flow leaving the head, thereby increasing blood pressure and rupturing tiny vessels around the eyelids. The blood not only confuses predators, but also tastes foul to canine and feline predators. It appears to have no effect against predatory birds. Only three closely related species (\"P. mcallii\", \"P. modestum\", and \"P. platyrhinos\") are certainly known to be unable to squirt blood.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8063189", "title": "Yellow boxfish", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 259, "text": "When stressed or injured it releases the neurotoxin tetrodoxin (TTX) from its skin that may prove lethal to the fish in the surrounding waters. The bright yellow color and black spots are a form of warning coloration (Aposematism) to any potential predators.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24950378", "title": "Cell–cell interaction", "section": "Section::::Transient interactions.:Immune system.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 610, "text": "Leukocytes or white blood cells destroy abnormal cells and also provide protection against bacteria and other foreign matter. These interactions are transitory in nature but are crucial as an immediate immune response. To fight infection, leukocytes must move from the blood into the affected tissues. This movement into tissues is called extravasation. It requires successive forming and breaking of cell-cell interactions between the leukocytes and the endothelial cells that line blood vessels. 
These cell-cell interactions are mediated mainly by a group of Cell Adhesion Molecules (CAMs) called selectins.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6458374", "title": "Bloodfin tetra", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 325, "text": "The bloodfin tetra (\"Aphyocharax anisitsi\") is a species of characin from the Paraná River basin in South America. The bloodfin is a relatively large tetra, growing to 5.5 cm. Its notable feature (as the name suggest) is the blood-red colouration of the tail, dorsal, anal and adipose fin, while the body is silver in color.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3997", "title": "Blood", "section": "Section::::Color.:Hemoglobin.\n", "start_paragraph_id": 80, "start_character": 0, "end_paragraph_id": 80, "end_character": 610, "text": "Hemoglobin is the principal determinant of the color of blood in vertebrates. Each molecule has four heme groups, and their interaction with various molecules alters the exact color. In vertebrates and other hemoglobin-using creatures, arterial blood and capillary blood are bright red, as oxygen imparts a strong red color to the heme group. Deoxygenated blood is a darker shade of red; this is present in veins, and can be seen during blood donation and when venous blood samples are taken. This is because the spectrum of light absorbed by hemoglobin differs between the oxygenated and deoxygenated states.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "684347", "title": "Amido black 10B", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 484, "text": "Amido black 10B is an amino acid staining azo dye used in biochemical research to stain for total protein on transferred membrane blots, such as the western blot. It is also used in criminal investigations to detect blood present with latent fingerprints. 
It stains the proteins in blood a blue-black color. Amido Black can be either methanol or water based as it readily dissolves in both. With picric acid, in a van Gieson procedure, it can be used to stain collagen and reticulin.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35792737", "title": "Urchin (Dungeons & Dragons)", "section": "Section::::Description.:Types.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 254, "text": "The yellow urchin is colored a very pale yellow with light green tips on its spines; like the green urchin, it is very difficult to see when in water. It can fire four spines at a time, which carry venom which will paralyze the victim for a few minutes.\n", "bleu_score": null, "meta": null } ] } ]
null
3kh1hm
the difference between deductive and abductive reasoning.
[ { "answer": "It simply means coming up with ideas to explain things we see. Those ideas then are put to the test and discarded or validated.\n\n- Deduction: Winter is cold. Winter starts next month. So it will be cold next month. \n- Induction: Last winter was cold, the one before was cold, and so on. So next winter is probably going to be cold. \n- Abduction: Why is it so cold? Well, if it were winter, then of course it would be cold. So it's probably winter.", "provenance": null }, { "answer": "Deductive reasoning is deducing or \"arriving at\" a conclusion based on clear observations that can be connected and make logical sense. (More so evidence based)\n\n(Ex: If it's raining outside and I don't bring an umbrella, then I will get wet)\n\nAbductive reasoning bases a conclusion off a set of incomplete observations. The conclusion is reached when a decision is made on the data that you currently have. (More so judgement based)\n\n(Ex: It may or may not be raining ... I'm not sure. So I will take my umbrella. If it then rains I will not get wet)", "provenance": null }, { "answer": "* Deductive reasoning is all about if A is true, then B must be true. It is often less useful in the real world, as things are seldom that certain.\n* Inductive reasoning is the voice of experience ... yesterday it was cloudy, then it rained ... today is cloudy, so it will likely rain again. Induction is more about probability than certainty.\n* Abduction is about observing anomalies, and finding a common explanation for those anomalies. That is what Holmes is doing when he determines the killer is a left-handed smoker who spent time in the tropics. 
He is picking out details and finding the simplest explanation that ties them together.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "61093", "title": "Deductive reasoning", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 282, "text": "Deductive reasoning differs from abductive reasoning by the direction of the reasoning relative to the conditionals. Deductive reasoning goes in the \"same direction as that of the conditionals, whereas abductive reasoning goes in the opposite direction to that of the conditionals.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "50123232", "title": "Logic and rationality", "section": "Section::::Forms of reasoning.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 368, "text": "Abductive reasoning is a form of inference which goes from an observation to a theory which accounts for the observation, ideally seeking to find the simplest and most likely explanation. In abductive reasoning, unlike in deductive reasoning, the premises do not guarantee the conclusion. One can understand abductive reasoning as \"inference to the best explanation\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "42446", "title": "Reason", "section": "Section::::Reason compared to related concepts.:Logical reasoning methods and argumentation.:Abductive reasoning.\n", "start_paragraph_id": 97, "start_character": 0, "end_paragraph_id": 97, "end_character": 818, "text": "Abductive reasoning, or argument to the best explanation, is a form of reasoning that doesn't fit in deductive or inductive, since it starts with incomplete set of observations and proceeds with likely possible explanations so the conclusion in an abductive argument does not follow with certainty from its premises and concerns something unobserved. 
What distinguishes abduction from the other forms of reasoning is an attempt to favour one conclusion above others, by subjective judgement or attempting to falsify alternative explanations or by demonstrating the likelihood of the favoured conclusion, given a set of more or less disputable assumptions. For example, when a patient displays certain symptoms, there might be various possible causes, but one of these is preferred above others as being more probable.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19442735", "title": "Psychology of reasoning", "section": "Section::::Different sorts of reasoning.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 950, "text": "In opposition, deductive reasoning is a basic form of valid reasoning. In this reasoning process a person starts with a known claim or a general belief and from there asks what follows from these foundations or how will these premises influence other beliefs. In other words, deduction starts with a hypothesis and examines the possibilities to reach a conclusion. Deduction helps people understand why their predictions are wrong and indicates that their prior knowledge or beliefs are off track. An example of deduction can be seen in the scientific method when testing hypotheses and theories. Although the conclusion usually corresponds and therefore proves the hypothesis, there are some cases where the conclusion is logical, but the generalization is not. For example, the argument, “All young girls wear skirts. Julie is a young girl. 
Therefore, Julie wears skirts,” is valid logically, but is not sound because the first premise isn't true.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "341086", "title": "Non-monotonic logic", "section": "Section::::Abductive reasoning.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 584, "text": "Abductive reasoning is the process of deriving the most likely explanations of the known facts. An abductive logic should not be monotonic because the most likely explanations are not necessarily correct. For example, the most likely explanation for seeing wet grass is that it rained; however, this explanation has to be retracted when learning that the real cause of the grass being wet was a sprinkler. Since the old explanation (it rained) is retracted because of the addition of a piece of knowledge (a sprinkler was active), any logic that models explanations is non-monotonic.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "393736", "title": "Inductive reasoning", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 529, "text": "Inductive reasoning is a method of reasoning in which the premises are viewed as supplying \"some\" evidence for the truth of the conclusion; this is in contrast to \"deductive\" reasoning. While the conclusion of a deductive argument is certain, the truth of the conclusion of an inductive argument may be \"probable\", based upon the evidence given. 
Many dictionaries define inductive reasoning as the derivation of general principles from specific observations, though there are many inductive arguments that do not have that form.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "393736", "title": "Inductive reasoning", "section": "Section::::Comparison with deductive reasoning.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 506, "text": "Inductive reasoning is a form of argument that—in contrast to deductive reasoning—allows for the possibility that a conclusion can be false, even if all of the premises are true. Instead of being valid or invalid, inductive arguments are either \"strong\" or \"weak\", according to how \"probable\" it is that the conclusion is true. We may call an inductive argument plausible, probable, reasonable, justified or strong, but never certain or necessary. Logic affords no bridge from the probable to the certain.\n", "bleu_score": null, "meta": null } ] } ]
null
7uhefy
How realistic is the cancer "vaccine" talked about recently?
[ { "answer": "Paper is available in its entirety here: \n[**Eradication of spontaneous malignancy by local immunotherapy**\n Idit Sagiv-Barfi, Debra K. Czerwinski, Shoshana Levy, Israt S. Alam, Aaron T. Mayer, Sanjiv S. Gambhir, Ronald Levy - *Science Translational Medicine*](_URL_0_)\n", "provenance": null }, { "answer": "So, immunotherapy has long been seen as a holy grail for cancer treatment. The immune system is naturally programmed to attack cells that have gone a bit weird (to use the scientific term). The problem tends to be that the cancer cells can also alter themselves so that they are disguised from the immune system, or in fact inhibit any immune cells that come into contact with them. This stops the immune system from seeing them as dangerous, allowing the cancer to grow. So the balance of the immune response is in favour of leaving the cancer alone.\n\nWhat this treatment does is inject the tumour with molecules that tell the immune cells in the vicinity of the tumour to wake up and start doing their job, overcoming the inhibition that the cancer cells have put in place. This means that the balance of the immune response is now to attack the tumour, which seems to work very well.\n\nThe really cool thing is that now that the immune system is trained to see the tumour as bad, and will attack similar cells in different sites. This is why it behaves in some fashion like a vaccine.\n\nIt's perfectly viable, and very exciting. As always, there is always the question of how well it translates into human biology but it is still very promising. I think one problem is going to be how specific the immune response is. In the paper, they see the immune cells are trained to attack cells with protein markers unique to the tumour cells, which is a good sign. One concern might be that if you accidentally trigger the immune response to normal cell markers, it could cause your immune system to attack healthy cells which would obviously be a very bad consequence. 
Another would be how readily a tumour can evolve to overcome the immune system attack. If the immune system only ends up going for certain markers, it could miss tumour cells that don't have the same ones. These could then continue to grow and cause the cancer to return.\n\nETA: thank you kind, golden stranger!\n\n...strangers!", "provenance": null }, { "answer": "It's actually a really cool concept that Frankensteins together mechanisms of action that we already use.\n\n1. Making cancer cells visible to the immune system: PD-1 and PD-L1 inhibitors\n\n2. Siccing T-cells on cancer: CAR-T cells, allogeneic stem cell transplantation, BiTE immunotherapy\n\nThey then do this only locally, circumventing the problems that would arise if we did it systemically, i.e. death from immune system overdrive.\n\nI'm interested in seeing how well it works in humans and reading about it made me slightly nervous about my job to be honest", "provenance": null }, { "answer": "Veterinarians have been using cancer vaccines for a while. The melanoma vaccine has about a 50% success rate, which is pretty good considering how difficult melanoma is to treat. There is also a fibrosarcoma and a B cell lymphoma vaccine. As far as I know those vaccines are less effective. Fibrosarcoma in cats is extremely aggressive while lymphoma is more manageable with chemo. The vaccines are all made by Merial. UPenn Vet and The Children’s Hospital in Philadelphia have had some instances of curing children’s leukemia with immunotherapy but those cases were experimental. ", "provenance": null }, { "answer": "There is a treatment for stage 4 brain cancer that does this indirectly. Duke has modified the polio virus to attack cancer cells, and they inject it into the center of a brain tumor. While the polio virus is attacking the cells, the immune system starts going in to clean up the mess. In turn, the immune system learns to recognize the cancer cells. 
The whole process seems to last for months after just the one single injection.\n\nYou can google PVS-RIPO for further information. The first few patients were treated in 2012-2013 and some are still alive without any further treatment. Those patients had recurrent glioblastoma multiforme, which is perhaps the deadliest cancer situation that one can imagine.\n\nWhile this would technically be considered virotherapy, the immune system plays a huge role in the outcome.\n\nThey are now in phase 2 trials, and still only for brain tumors, but I believe the assumption is that it could be widely used among many different solid tumors in the future.", "provenance": null }, { "answer": "I work in this field. \n\nThis type of cancer vaccine is kinda old news. What's really cutting edge are the personalized cancer vaccines based on mRNA that are in clinical trials in humans **already**. Basically, these vaccines are tailored to an individual's cancer. You see, your cancer and my cancer may have the same name, lymphoma for example, but they may be very different in terms of their susceptibility to drugs and especially to vaccines. Using personal genomics to tailor mRNA to program your own cells to make a vaccine against your very unique form of cancer is the real key to all of this. Adding on top of this is that mRNA is just a superior way to produce vaccines. As it turns out, when your body makes the vaccine inside its own cells, it stimulates your immune system more effectively than a vaccine made in a factory and injected (this has to do with MHC-I and MHC-II activation). \n\nThese clinical trials are looking promising. If they work, this can be combined with checkpoint inhibitors and other mRNA therapies (that also boost the immune response against cancer) and it's bye-bye cancer. \n\nLet's just say that it's the most promising thing I've seen in cancer therapy ever. ", "provenance": null }, { "answer": "To make a broad analogy:\n\nPicture your car for a moment. 
Immuno-oncology, broadly speaking, unlocks parts of the immune system that keep the body from attacking itself. In our analogy immuno-oncology released the parking brake and the car is in neutral. Certain combination treatments are the equivalent of giving a push.\n\nWhat happens next depends on how your car is parked. \"Curing\" murine models of cancer is the relative equivalent of parking on a steep hill and popping bottles of champagne when the car rolls downhill. It's dishonest at best to promote the results that way and a bit laughable, but academia does it periodically to stir public interest (and by proxy send public funding their way).\n\nTo continue the analogy, most human cancers are the equivalent of parking on a level surface at the base of a hill. Unlocking the parking brake is important, but it's not usually going to send you uphill. Sometimes you get lucky and the metaphorical car wants to move, meaning you only need to give a small push to get the car unstuck and it climbs under its own power (a so-called \"hot\" tumor) but more often than not you're talking about pushing a car uphill by hand, which isn't terribly effective.", "provenance": null }, { "answer": "Having read a bit of the responses, I have to ask: Given the cancer cells' ability to \"blend in\", would this treatment possibly cause the body to attack its healthy cells? If so, how would it be remedied and how often would this occur?", "provenance": null }, { "answer": "Have we seen similar successful results in tests on mice etc. that sadly haven't panned out successfully for humans in the end in the past? \nJust wondering if this is the first time we've seen such a positive success, or if we should still be somewhat apprehensive? 
", "provenance": null }, { "answer": "So I worked on a project and helped publish a paper that also showed this exact effect (even showing it attacked tumors in other locations) several years ago (PMID: 25179345).\n\nOne of the biggest issues with directly translating this to therapy is that directly injecting a vaccine into a tumor cell is not necessarily a wise choice and not commonly done (I think, my clinical exposure to oncology isn't large). The main fear is something called \"tumor seeding\", which can also happen when they aspirate or biopsy, or do something otherwise invasive with a tumor that could cause cancer cells to dislodge and make it to the blood or lymphatic systems. If the tumor is well-demarcated (ie. enclosed), you don't want to open up a channel to your blood supply for cancer cells to spread throughout your body (hematogenous spread). Clinically the chance of this is probably very low and this is a consideration when physicians consider how to biopsy a tumor, but if you start making it standard procedure, who knows what could happen. \n\nEven if the cancer is metastatic, it is possible that it's in a compartment that is very well isolated from the rest of your body. For example when testicular cancer is suspected, a biopsy is almost never performed and the suspected testicle is straight up removed because it's in a very well-enclosed portion of the body (referring not to the skin scrotum, but another membrane separating it from the inside of your body). ", "provenance": null }, { "answer": "As scientists we're equally sceptical of new wondrous drugs, but I can tell you that immunotherapy is the most exciting development in cancer therapy in decades. Maybe the one thing to keep in mind is that it tends to work much better in certain cancers (e. g. melanoma) than others. Source: PhD in Molecular Biology working in cancer research", "provenance": null }, { "answer": "Can I ask a follow up question? 
How long will it be until we get first results from the clinical trials?\n\nThis seems so promising, that I'd love to stay up to date on current progress, can anyone tell me where I should check?", "provenance": null }, { "answer": "There are many very promising developments, but one must remember that cancer isn't a single disease, it's a whole group of related diseases, which are very different. So, it might work on some types, but not others. Also, from doing it in a lab until its a useable tool in clinical use might be decades.\n\nAs for the vaccine effect, don't forget that cancer is based on random mutations, so there is no guarantee that \"defense knowledge\" from one immune system is transferable to another patient, with a slightly different \"enemy\".\n\nDon't expect a silver bullet for all cancers. I think a more reasonable expectation is that it (or something like it) will become another tool in the existing toolbox for fighting cancer, and used in combination with other tools, much like how we today mix surgery, different kinds of chemo and radiation therapy in different ways, depending on the exact nature of each case.\n\nAs much as I would like to see a silver bullet, I don't think it's realistic to expect. On the bright side, though, we are getting much better at treating, and the advances are done at an amazing pace.", "provenance": null }, { "answer": "As a general rule, one should be very cautious about translating results from mice into humans.\n\nA great example is avastin/bevacizumab (an anti-angiogenesis antibody). When early studies of endostatin (an anti-angiogenesis molecule) performed very well, Time magazine printed a cover along the lines of \"A Cure for Cancer\". 
\n\nIn retrospect, people in the field admit that it should have read \"A Cure for Cancer in Mice.\"\n\nThere are many issues with mouse models, not least of which is that cell lines are often the source of the \"tumours\" (such that they differ dramatically from tumours that arise spontaneously in humans).\n\nThis study accounted for that somewhat. They did some of their experiments with cancer cell lines injected into mice, and some with mice that spontaneously develop tumours. Even the latter have to be taken with caution, though, as these are still derived from modified cells that express a specific tumour-inducing viral protein. (All the cells in these mice have the gene for this protein and they develop breast tumours.)\n\nFinally, immunotherapy has been an amazing step forward and, along with targeted therapies like monoclonal antibodies (which are used in immunotherapy) and small molecule kinase inhibitors, represents the biggest advance in the past 20 years. \n\nSo the strategy is promising and it is aligned with what has been working over the past few years. Too early to say how it will work in humans.\n\nHere's an [article](_URL_0_) on some previous stumbles.", "provenance": null }, { "answer": "OK I've read the paper and it's very promising, but here are some caveats:\n\n-nearly all of the tumors were treated at a very small size (a completely standard practice in the field). It's unclear how well this would work in a patient with heavy tumor burden.\n\n-all of the injected tumors were at the surface and readily \"injectable\". It's not clear if this would translate well in a situation where all tumors were internal.\n\n-all the recent successful immunotherapies don't work as well in tumors with a low mutation burden (which is most solid tumors), because fewer mutant proteins = fewer immune targets = lower immune stimulation. The models used here are fairly immune-stimulating, so the results may be a bit of an overestimate. 
That being said, the broad results are still unprecedented and exciting.", "provenance": null }, { "answer": "TL;DR. Immunotherapy uses monoclonal antibodies that serve to block inhibitory pathways tumour cells might utilize to halt the immune response. This works best on tumours with high mutation rates, \"hot\" tumours, essentially making them appear foreign to our immune system and eliciting a response. Other tumours have low mutation rates, \"cold\" tumours, which appear almost identical to our own cells and therefore wouldn't traditionally trigger the immune response. Using antibodies inhibits the inhibitory pathway, effectively activating the immune response. This therapy, although promising, is very toxic and expensive. Altogether, this is a very promising avenue in cancer treatments moving forward.\n\n\nCancer vaccines, a.k.a. immunotherapy, are a remarkably vast and exciting field of oncology. In general, immunotherapy serves to educate our immune cells, namely our B and T adaptive lymphocytes, to recognize and kill tumour cells. Tumour cells are smart and will develop mutations to prevent them from being recognized and consequently killed by our immune cells. Tumour cells have learned to up-regulate specific proteins, called checkpoint proteins. Checkpoint proteins are normally expressed on immune regulatory cells, which, when bound to ligands on activated CD4 and CD8 cells, halt their activation and \"shut\" them down. Tumour cells expressing these checkpoint proteins, namely PD-L1, can effectively inhibit the attack from CD8 cells that may recognize the tumour as foreign. 
Tumour cells also can secrete various cytokines in the microenvironment, such as TGF-B and IL-10, which can also inhibit the activation of CD4 and CD8 cells, as well as boosting the activity of regulatory T cells, which can exert similar effects.\n\nImmunotherapy works by using humanized monoclonal antibodies that, when bound to checkpoint proteins, prevent their binding to their complementary ligands. This means you are inhibiting the inhibitory pathway, which ultimately leads to an activation of CD4 and CD8 T cells that recognize the tumour as foreign. Now, that is the major caveat of immunotherapy. The CD4 and CD8 T cells MUST recognize the cancer as foreign. \"Hot\" tumours are tumours with high mutation rates and therefore generate neo-antigens quite frequently. A model \"hot\" tumour is melanoma, whose high mutation rate is a result of constant direct exposure to harmful UV radiation from the sun. Tumours that generate new antigens frequently increase the likelihood that our CD4 and CD8 cells will have receptors that recognize them as foreign. \"Cold\" tumours, on the other hand, have low somatic mutation rates and do not generate neo-antigens as frequently as their \"hot\" counterparts. Therefore, our CD4 and CD8 cells are much less likely to recognize the tumour cells as foreign and the tumour persists. Keep in mind, tumour cells are just really messed up versions of our own cells. We have mechanisms to prevent our immune cells from attacking our own cells (to prevent autoimmunity). Cancer cells with low mutation rates look identical to our own cells, which would not typically elicit an immune response.\n\nAnother avenue recently taken is the use of oncolytic viruses coupled with monoclonal antibody therapy. 
Oncolytic viruses prefer to infect tumour cells as tumour cells represent the perfect host; they do not undergo programmed cell death, they do not alert the immune system and, most importantly, are always dividing and replicating their DNA, which the virus needs to survive. Oncolytic viruses, even with broad tropisms, will preferentially infect cancer cells over normal healthy cells. When the virus completes a round of replication, it will lyse the cancer cells, releasing internal antigens that can stimulate the adaptive immune response, as well as danger molecules such as ATP. Danger molecules like ATP are powerful inducers of the immune response and can very quickly establish local inflammation in the tumour microenvironment, which is necessary to allow trafficking of the appropriate immune cells. Danger molecules essentially are the ticket for the immune cells to get into the tumour site. Monoclonal antibodies can remove any checkpoint inhibition the activated lymphocytes may encounter to kill the tumour cells. This has been shown to work extremely well and, even better, offers excellent chances for long-term remission (memory effect of the immune system).\n\nUnfortunately, monoclonal antibody therapies act non-specifically and therefore can be incredibly toxic. Almost everyone enrolled on a monoclonal antibody regimen for immunotherapy experiences some adverse side effects. Unfortunately, many must come off treatment altogether. Also, there is a major cost issue, especially in countries with private health care. Ipilimumab, a monoclonal antibody that blocks the CTLA-4/B-7 interaction, costs around $700,000 annually. Unfortunately, most people just cannot afford this treatment even though it works.\n\nIn my opinion, this is the poster child of current cancer treatments. 
It is one of the only treatment regimens that attempts to \"personalize\" the treatment as much as possible, and its ability to offer long-term remission is extremely promising.", "provenance": null }, { "answer": "Cancer vaccines are totally a real thing. Well, they're actually a couple of different real things.\n\nThe first is vaccines like Gardasil (human papillomavirus), where the vaccine is **actually against a virus that causes cancer**. These are pretty normal vaccines. \n\nHowever, there is also a concept of cancer vaccines against \"neo-antigens.\" As cancer cells accumulate mutations, they begin to display mutant proteins on their cell surface. If we can identify these proteins, and create a platform to command cells to create antibody responses against them, we can program the immune system to seek out cancer and destroy it. This idea has worked in a variety of different animal models of cancer, from a variety of different groups, in a variety of different ways, which convinces me that the underlying technology is valid, and it's a question of when, not if, we see clinical advancement. \n\nChimeric antigen receptor T cell therapy (CAR-T cell therapy) is a closely related technology where the T-cell programming is done outside the body rather than with a vaccine. It's very promising and even has an FDA approval at this point (Gilead/Kite's ~~Keytruda~~ Yescarta). We're talking cancer *cures* for cancers that were previously universally near-fatal. Side effects, and especially runaway T-cell proliferation, can still be a problem. \n\nWatch this space. Scientists are seriously starting to crack this nut wide open, between new immunotherapy regimes (targeting PD-L1 and/or CTLA-4), programmed CAR-T cell therapy, and cancer vaccines. We're starting to understand how traditional cytotoxic chemotherapy drugs may work in conjunction with immunotherapies, and we're starting to understand the interplay between these systems previously thought to be disconnected. 
We really are on the cusp of a real revolution, and I'm usually an intractable cynic about most cancer therapies. \n\nEdit: got a brand name wrong. It's Yescarta. Keytruda is a monoclonal antibody immunotherapy targeting PD-1. Five years out, 8x more patients are alive and cancer-free (40%) than with the previous treatment (5%). ", "provenance": null }, { "answer": "I'm going to re-post my comment from r/worldnews, and make minor edits for spelling and clarity. It seems I have a more cynical perspective than some here. My background is that I am an immunologist working for a biotech company developing cancer immunotherapy drugs. \n\nMy previous post:\n\n“I work in this field and although this is a nice approach (CpG will do a lot of good stuff beyond just driving OX40 and I think it’s a nice combo) there are big limitations here. The real issue is that this relies on pre-existing T cell infiltration. This is one of the big challenges for immunotherapy as a whole: in the clinic so far, it has mostly been effective in patients where there is already an immune response against the tumor and where there are already infiltrating T cells specific for a tumor antigen (this mostly pertains to solid tumors; liquid tumors are another matter and are looking very promising with CAR-T therapy). Patients that do not have that infiltrate tend not to respond to these therapies, which is why current approved immunotherapies like PD-1 blockade only work in about 10-20% of patients (although those numbers do get better when you combine it with other agents, like IDO inhibitors or CTLA-4 blockade), and only in tumors like melanoma or MSI-high colorectal where there is a high mutational load in the cancer, meaning that there are a lot of tumor neoantigens that can serve as targets for a productive immune response. 
Essentially, for the approach described to work, you need the patient’s tumor to have a good antigen that the immune system has recognized, and that limits you to a subset of cancers, within which the drug will still probably be only partially effective, because some of those patients may need other combination therapies to reduce immunosuppressive factors that would shut down that T cell response (although hopefully the CpG will help with that by aggravating the immune system in general).\n\nAlso, even in a scenario where you have an antigen, it may still not work. The reason for this is that although you may get a T cell that can recognize that antigen, if that antigen is a self antigen (for example, something normally expressed during pre-natal development that is now being overexpressed by the tumor) there is a good chance that the T cell which recognizes it will not have a high-affinity TCR, unlike T cells which recognize completely foreign antigen in the context of an immune response to a pathogen, for example. That means that you’ll get a weak T cell response, so you’ll get the infiltration and some level of pharmacodynamic response, but weak efficacy. The fact that this works in mice is nice, but the problem is that a lot of these mouse tumor models have good antigens in them that allow for these responses. So you may be modeling under ideal circumstances that don’t reflect the reality of the majority of human cancer patients, in terms of antigen exposure. \n\nThis has been a fundamental limitation of cancer vaccines as a class of therapeutic. There are A LOT of examples of cancer vaccines that went to the clinic, went into patients, resulted in a T cell response against the particular tumor antigen, and ultimately provided very little benefit to patients because those T cells weren’t effective at killing the tumor. 
So if this approach is successful at taking a T cell specific for self antigen being expressed abnormally in the cancer (like most prostate cancers, for example) or for a neoantigen resulting from a mutation that causes a new epitope to form, and then driving that T cell to a more activated and effective state via OX40, that could be effective at solving the cancer vaccine challenge, but we’ll have to wait and see.”", "provenance": null }, { "answer": "A lot of my published research was on cancer therapeutics, so this topic is something I know quite a lot about. \"Killing cancer cells\" is not a hard process. We have been able to do this for a long time. The issue is that we want therapeutics to in some way kill the cancer cells, keep killing them, all while not hurting the person. The last part is the hard part, because we really haven't found a good way to do that yet. Most cancer treatments end up doing more harm to the person. I don't want to dampen any hopes on this, but we hear about \"the cancer vaccine\" or that we can finally \"kill cancer cells with a drug\" and then we hear nothing about it in the future. That is because we can't find a way to safely administer these treatments, since they always end up doing damage to the point of not being useful. Also, even if we do find something that works in the lab, it may very well not work as well in humans. Going from successful lab research to a treatment that can be used by the public can take an extremely long time, and for good reason. They want to make sure they aren't shotgunning out a therapeutic that will end up harming most who take it. With all of that said, I am very optimistic about where we are regarding cancer research. While we aren't quite there yet, contrary to what these recent articles may say, we are still somewhat close. The problem is that we have been somewhat close for a while, but getting us over that last little edge is the hardest part. 
", "provenance": null }, { "answer": "Immune therapy isn't new. It works wonders in theory and for limited times has near-miraculous results in practice. Similar clinical trials have already been done, achieving similar results for a limited period of time. The problem is that the mouse model in the paper is very simplistic -- a flaw for most (all?) tumor mouse models. Real stage IV nasty cancers that we have a hard time curing usually have genetic instability and *extreme population variance* that is somewhere between extremely difficult and impossible to simulate in a laboratory. Real advanced cancerous tumor cells tend to adapt to cancer \"vaccines\" and come roaring back in a few months, almost without fail, because of that population variance. And the next time they are completely unresponsive to the vaccine. The population variance tends to overlap extensively with the variance of the normal body cells, which is why curing cancer *in general* seems to be nearly or actually impossible.\n\nCalling them \"vaccines\" I think is a bit of marketing. They are not vaccines, since they are not meant to prevent cancer, unlike the HPV vaccine for example. It's marketing to get people to trust it a bit more, I think. I would call them immunotherapy, just because that's really what it is. But, in my cynical opinion, the reason why immunotherapies are getting hyped so much is because there's TONS of money in it. It's not creative, risky work so much anymore. Corporations or investigators just follow a known and predictable process to generate these antibodies. Build a custom antibody, patent it, and sell it. And it's all patentable; every new target, every new cancer is a unique invention. And treatments can cost hundreds of thousands of dollars for these cancer vaccines -- treatments that, in practice, are almost certainly doomed to ultimately fail (they won't cure the cancer). 
However, they are also very likely to reliably prolong survival, especially in cancers where there was nothing else you could do a few years ago to even budge survival a little bit. So, want to make a few billion dollars? Find a cancer for which no current practical therapies exist -- a nasty, lethal cancer that kills people all the time with few options -- and make an immunotherapy for it. Show any kind of efficacy, and bam: instant money. Is that the way we should be running medical research and making healthcare decisions? It doesn't matter; that's just how it's done. But, like I said, I'm probably cynical. \n\nSome people believe in this a lot and think it's the future. They know the problems but think if you just tweak it, it will work like gangbusters. That kind of tweaking is not what this article did, which would be a lot to ask. That's why it's buried in the middle of a pretty prestigious journal instead of being the lead story in the most prestigious journals. Maybe the tweaks are possible someday and the therapy will work super consistently. Who knows, that's science. My personal and professional take is that the problems with immunotherapy are *probably* mostly intractable, that they've essentially already made almost all of the progress they are going to make, and that further advancements might help a few edge cases but will fall far short of a general cure for cancer.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "3304705", "title": "HPV vaccine", "section": "Section::::Medical uses.:Public health.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 502, "text": "The National Cancer Institute states \"Widespread vaccination has the potential to reduce cervical cancer deaths around the world by as much as two-thirds if all women were to take the vaccine and if protection turns out to be long-term. 
In addition, the vaccines can reduce the need for medical care, biopsies, and invasive procedures associated with the follow-up from abnormal Pap tests, thus helping to reduce health care costs and anxieties related to abnormal Pap tests and follow-up procedures.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5482977", "title": "Gardasil", "section": "Section::::Public health.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 464, "text": "Widespread vaccination has the potential to reduce cervical cancer deaths around the world by as much as two-thirds, if all women were to take the vaccine and if protection turns out to be long-term. In addition, the vaccines can reduce the need for medical care, biopsies, and invasive procedures associated with the follow-up from abnormal Pap tests, thus helping to reduce health care costs and anxieties related to abnormal Pap tests and follow-up procedures.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36648613", "title": "University of Louisville School of Medicine", "section": "Section::::Innovations.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 290, "text": "BULLET::::- Drs. A. Bennett Jenson and Shin-je Ghim, innovators of the world's first 100% effective cancer vaccine have begun work to develop a less expensive vaccine with an increased spectrum of activity. This vaccine will be produced in tobacco plants, one of Kentucky's abundant crops.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34952645", "title": "HybriCell", "section": "Section::::Procedure.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 315, "text": "Each vaccine is specific to that patient. Though not a preventative measure, the vaccine's creator, Dr. Barbuto, predicted that the vaccine would be even more effective in patients in earlier stages of cancer. 
The vaccine is administered in conjunction with other cancer-preventative measures such as chemotherapy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4191", "title": "BCG vaccine", "section": "Section::::Medical uses.:Cancer.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 309, "text": "BCG has been one of the most successful immunotherapies. BCG vaccine has been the \"standard of care for patients with bladder cancer (NMIBC)\" since 1977. By 2014 there were more than eight different considered biosimilar agents or strains used for the treatment of non–muscle-invasive bladder cancer (NMIBC).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15025638", "title": "Epithelioid sarcoma", "section": "Section::::New therapeutic strategies.:Immunotherapies.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 557, "text": "Vaccine therapy is perhaps the immunotherapeutic strategy with the most ongoing exploration in sarcomas at the current time, although, thus far at least, little evidence has emerged indicating that active vaccination alone can lead to tumor regression. Multiple techniques and treatment strategies are currently being studied in an effort to improve the objective response rate of vaccine therapy. Vaccines can deliver various tumor-associated factors (tumor antigens) to the immune system, resulting in a natural antibody and T-cell response to the tumor.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "339633", "title": "Vaccinia", "section": "Section::::Use as a vaccine.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 491, "text": "Currently, the vaccine is only administered to health care workers or research personnel who have a high risk of contracting the variola virus, and to the military personnel of the United States. 
Due to the threat of smallpox bioterrorism, there is a possibility the vaccine may have to be widely administered again in the future. Therefore, scientists are currently developing novel vaccine strategies against smallpox which are safer and much faster to deploy during a bioterrorism event.\n", "bleu_score": null, "meta": null } ] } ]
null
34x4wg
Why do radar dishes need to spin and scan the area progressively? Can't they design a radar which scans the whole area?
[ { "answer": "It's due to the antenna design, as a directional antenna can have a bigger range and better signal than an omnidirectional one.", "provenance": null }, { "answer": "I guess you could think of it by analogy with our eyes. We turn our heads to look at things because we have a narrow field of view. Radar is similar - it only gets information from in front of it, and so needs to be turned to look at everything around it. \n\nYou could, however, build an omnidirectional radar. It would put out a pulse in all directions and then listen for reflected radio signals from all directions. But there would be two big issues:\n\n* You'd not get, in a normal radar design, any information about what direction the detected object was in. You could tell how far away it was and how fast it was moving towards you, but not at what angle it was. You could add this information by having a group of antennae that effectively triangulate.\n\n* You'd need a lot of power, as unlike a normal radar, you'd be sending the radio power in all directions. If you didn't give more power, you'd have a much weaker return signal and thus a harder time detecting small or distant objects.\n\nEdit: bullet points ftw", "provenance": null }, { "answer": "They can make them so they don't spin, but it's cheaper to make them spin. A spinning radar sends out pulses in a narrow direction, and listens in that direction for reflections. The location on the screen is just the direction the antenna is pointing on the angular axis, the distance is just the time for the signal to come back, and the brightness is the intensity of the signal.\n\nNow you can build a phased array, and use modern electronics to electronically spin the antenna, but that's a whole lot of money on electronics when a motor would work just fine. 
The other option is to use a phased array to implement an omnidirectional antenna that works like an antenna pointing in all directions at once, but that requires MUCH more power to get the same range, though it would scan the whole area in one go. It's also worth noting that you need fairly modern electronics to do that type of signal processing, and they didn't really have it more than 15 years ago.\n\nIt's also worth pointing out tracking radars: they don't spin; they are basically a radar on a satellite dish, and you point the dish at your target. That lets them pick one object and get very up-to-date information on that one thing.\n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "13557101", "title": "Blip-to-scan ratio", "section": "Section::::Radar basics.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 614, "text": "Classic radars measure range by timing the delay between sending and receiving pulses of radio signals, and determine the angular location by the mechanical position of the antenna at the instant the signal is received. To scan the entire sky, the antenna is rotated around its vertical axis. The returned signal is displayed on a circular cathode ray tube that produces dots at the same angle as the antenna and displaced from the center by the time delay. The result is a two-dimensional re-creation of the airspace around the antenna. Such a display is called a Plan Position Indicator, usually simply a \"PPI\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15383119", "title": "Swathi Weapon Locating Radar", "section": "Section::::Operation.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 355, "text": "At a given position, the radar can scan for targets in one quadrant, encompassing a 90° sector. The array can electronically scan up to +/-45° from its mean bearing. 
Additionally, for 360° coverage from a given position, the whole array can be rotated by 135° on either side within 30 seconds to quickly change the scanning sector in response to threats.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43764346", "title": "AI Mk. VIII radar", "section": "Section::::Development.:Scanning.\n", "start_paragraph_id": 53, "start_character": 0, "end_paragraph_id": 53, "end_character": 1090, "text": "The team first considered spinning the radar dish around a vertical axis and then angling the dish up and down a few degrees with each complete circuit. The vertical motion could be smoothed out by moving continually rather than in steps, producing a helix pattern. However, this helical-scan solution had two disadvantages; one was that the dish spent half of its time pointed backwards, limiting the amount of energy broadcast forward, and the other was that it required the microwave energy to somehow be sent to the antenna through a rotating feed. At a 25 October all-hands meeting attended by Dee, Hodgkin and members of the GEC group at GEC's labs, the decision was made to proceed with the helical-scan solution in spite of these issues. GEC solved the problem of having the signal turned off half the time by using two dishes mounted back-to-back and switching the output of the magnetron to the one facing forward at that instant. They initially suggested that the system would be available by December 1940, but as work progressed it became clear that it would take much longer.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1382559", "title": "Low-probability-of-intercept radar", "section": "Section::::Methods.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 593, "text": "Constructing a radar so as to emit minimal side and back lobes may also reduce the probability of interception when it is not pointing at the radar warning receiver. 
However, when the radar is sweeping a large volume of space for targets, it is likely that the main lobe will repeatedly be pointing at the RWR. Modern phased-array radars not only control their side lobes, they also use very thin, fast-moving beams of energy in complicated search patterns. This technique may be enough to confuse the RWR so it does not recognize the radar as a threat, even if the signal itself is detected.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17395977", "title": "Track while scan", "section": "Section::::Track while scan.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 784, "text": "With the location of targets known even when the radar antenna is not pointed at them, TWS radars can return to the same area of sky on their next scan and beam additional energy toward the target. So in spite of the radar not constantly painting the target as it would in a traditional lock-on, enough energy is sent in that direction to allow a missile to track. A phased array antenna helps here, by allowing the signal to be focused on the target when the antenna is in that direction, without it having to be pointed directly at the target. This means that the target can be painted for a longer period of time, whenever the antenna is in the same general direction. Advanced phased array radars make this even easier, allowing a signal to be continually directed at the target.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1376411", "title": "Electronic counter-countermeasure", "section": "Section::::Specific ECCM techniques.:Sidelobe blanking.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 668, "text": "Radar jamming can be effective from directions other than the direction the radar antenna is currently aimed. When jamming is strong enough, the radar receiver can detect it from a relatively low gain sidelobe. 
The radar, however, will process signals as if they were received in the main lobe. Therefore, jamming can be seen in directions other than where the jammer is located. To combat this, an omnidirectional antenna is used for a comparison signal. By comparing the signal strength as received by both the omnidirectional and the (directional) main antenna, signals can be identified that are not from the direction of interest. These signals are then ignored.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "675776", "title": "Weather radar", "section": "Section::::How a weather radar works.:Determining height.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 613, "text": "A weather radar network uses a series of typical angles that will be set according to the needs. After each scanning rotation, the antenna elevation is changed for the next sounding. This scenario will be repeated on many angles to scan all the volume of air around the radar within the maximum range. Usually, this scanning strategy is completed within 5 to 10 minutes to have data within 15 km above ground and 250 km distance of the radar. For instance in Canada, the 5 cm weather radars use angles ranging from 0.3 to 25 degrees. The image to the right shows the volume scanned when multiple angles are used.\n", "bleu_score": null, "meta": null } ] } ]
null
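The PPI geometry described in the radar answers above (bearing taken from the antenna angle, range from the echo's round-trip delay) can be sketched in a few lines. This is my own illustration under those stated assumptions, not code from the thread:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def echo_to_position(delay_s: float, antenna_bearing_deg: float):
    """Map one echo to a point on a PPI-style display.

    Range is c * t / 2 (the pulse travels out and back, so halve the
    path), and the bearing is simply wherever the antenna was pointing
    when the echo arrived.
    """
    rng_m = C * delay_s / 2.0
    theta = math.radians(antenna_bearing_deg)  # 0 deg = north, 90 deg = east
    return rng_m * math.sin(theta), rng_m * math.cos(theta)  # (east, north)

# An echo arriving 1 ms after the pulse, with the antenna pointing due
# east, places the target roughly 150 km east of the radar.
east, north = echo_to_position(1e-3, 90.0)
print(round(east / 1000), round(north / 1000))  # ~150 km east, ~0 km north
```

This is also why a single omnidirectional pulse loses the bearing, as one answer notes: the delay alone fixes only the range ring, and without a known antenna direction (or triangulation across several antennas) there is no angle to plot.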
1kotmw
What were some tools and technology utilized in the Golden Age of Arctic Exploration?
[ { "answer": "Neat idea. Good luck!", "provenance": null }, { "answer": "I only know of Amundsen's expeditions in passing (at the \"Wikipedia\" level), but I've read books by and about pre-flight-era polar explorers. \n\nIn slightly earlier days, Norwegians like Amundsen had been noted for using more indigenous technologies and survival techniques than, say, the British (notably Scott). By the time he disappeared in 1928, technology had evolved so quickly and significantly (e.g. airplanes), it's likely that those national differences no longer existed, but I bring this up because it's possible that Amundsen's crew had different technology than the searchers. \n\nWhat I'd suggest is to find his most recent expedition memoir, which I gather is *Our Polar Flight: The Amundsen-Ellsworth Polar Flight* aka *My Polar Flight* (1925) which will hopefully give a good description of the preparation, equipment, clothing, safety considerations, etc his crew were using around that time. Also look for books by any of the other parties involved, or books describing the search efforts. I usually find that the explorers themselves go into more technical details and practicalities, rather than filling pages with biographical information, politics, legacy, etc. 
If your library doesn't have what you want, ask if they can bring them in on an inter-library loan.\n\nBTW, while trying to find his book online, I found a couple of film shorts [1](_URL_1_) and [2](_URL_0_) which you've probably already found, but in any case are great for visualization", "provenance": null }, { "answer": "Depending on the snow (incline, thickness) short and long skis (used by Amundsen on his South Pole expedition and Nansen's Farthest North expedition).\n\nSnowshoes (Fridtjof Nansen's Crossing of Greenland)\n\nSledges to haul provisions - dragged by either men or sled dogs (again, Amundsen South Pole and Fridtjof Nansen North Pole expeditions).\n\nRobert Falcon Scott used ponies to haul the sleds instead of dogs and that didn't work out so well. I'd presume the same could be said of using ponies in the North.\n\nPemmican -- something akin to beef jerky -- would be eaten by humans or dogs. It's high in fat content so it provides many calories. Dried meats, because water is heavy and a burden to haul around. Biscuits. Tobacco, etc. This, again, makes sled dogs superior to ponies, since the food stores could be used by both humans and dogs. Ponies need hay, and that would be extra weight on the sledges that would be somewhat useless to humans. Other tinned goods would be added such as jams to provide some variety to the diet.\n\nFor the sleds, you'd need leather lashings of some sort to tie everything down, and then extra lashings in case the first ones broke. You would also need extra runners (skis on the bottom) for the sledges in case they break (which they will, it's the rough icy arctic).\n\nTents are a must. Sleeping bags are a must. Snow goggles to prevent snow blindness. Cocaine was sometimes used to help with the pain of snow blindness (staring at bright snow too long).\n\nBoots and pant linings were often made out of reindeer, sometimes out of specific parts such as the inside of the leg of the baby reindeer. 
\n\n*\"Reindeer-skin is, in comparison with its weight, the warmest of all similar materials known to me, and the skin of the calf, in its winter-coat especially, combines the qualities of warmth and lightness in quite an unusual degree.\"* -Nansen (Greenland)\n\nFor a certain type of boot called 'finnesko', Nansen claimed the best were made out of reindeer leg skin from a buck.\n\nAdditionally, heavy woolen shirts and trousers were worn with layers of socks and gloves. To soak up moisture buildup in the boots and the gloves, Nansen would stuff 'sennegrass' near the feet and hands. [Sennegrass, according to the Merriam-Webster Dictionary: a widely distributed sedge (Carex vesicaria) with grasslike leaves that is used by arctic and antarctic explorers as insulating material ](_URL_0_)\n\nIn rain and blizzards, they would also wear a thin canvas overcoat to keep dry.\n\nA primus stove/cooker that runs on paraffin and/or alcohol is a must for heating up water and other food. It's been a while since I have read books on the matter, but I believe Nansen used seal blubber in place of oil/paraffin when the latter ran low. The primus would work, but would spew dirtier smoke since the blubber would be relatively unrefined.\n\nSpeaking of blubber. Harpoons. You're going to want some harpoons to hunt seals. Rifles as well, but harpoons will be good if you need to conserve ammo. You'll need some sort of knives to butcher the seals as well.\n\nSeals (and polar bears in the north) are another reason you'll want dogs. You can't feed seal/bear meat to ponies. And when you run out of food for the dogs, you can eat those as well (Nansen did just this in his Farthest North expedition)\n\nMotors aren't the best. At least not in the south pole expeditions. 
Too many pieces to break down and the gas/oil/other liquids would gum up at those low temperatures (Robert Falcon Scott brought a motor on his expedition -- it didn't work very long)\n\nFor shipboard entertainment, you'll want a library filled with books, and maybe a small piano/organ (Nansen expedition. Amundsen might have brought one South, too.) A small printing press to make a short \"newspaper\" is another way Polar expeditions have killed time in the long, dark winter. Cards. Cigars. Some alcohol. You have to keep spirits up. \n\nUsually on polar expeditions, they will bring scientific instruments for posterity. They'd bring *long* lengths of cable to make depth measurements around the shorelines of Antarctica. \n\nStraight out of Farthest North (1897):\n\n*\"The instruments of scientific observations of course formed an important part of our equipment, and special care was bestowed upon them. In addition to the collection of instruments I had used on my Greenland expedition, a great many new ones were provided, and no pains were spared to get them as good and complete as possible. For meteorological observations, in addition to the ordinary thermometers, barometers, aneroids, psychrometers, hygrometers, anemometers, etc., etc., self-registering instruments were also taken. Of special importance were a self-registering aneroid barometer (barograph) and a pair of self-registering thermometers (thermographs). For astronomical observations we had a large theodolite and two smaller ones, intended for use on our sledge expeditions, together with several sextants of different sizes. We had, moreover, four ship's chronometers, and several pocket-chronometers. For magnetic observations for taking the declination, inclination, and intensity (both horizontal and total intensity) we had a complete set of instruments. 
Among others may be mentioned a spectroscope especially adapted for the northern lights, an electroscope for determining the amount of electricity in the air, photographic apparatuses, of which we had seven, large and small, and a photographometer for making charts. For hydrographic observations we took a full equipment of water-samplers, deep-water thermometers, etc. To ascertain the saltiness of the water, we had, in addition to the ordinary areometers, an electric apparatus specially constructed by Mr. Thornoe. Altogether, our scientific equipment was especially excellent, thanks in great measure to the obliging assistance rendered me by many men of science.\"* \n\nSources: \"Farthest North (1897)\" by Fridtjof Nansen. \"South Pole (1912)\" by Roald Amundsen. \"First Crossing of Greenland (1880)\" by Fridtjof Nansen.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "196075", "title": "Indigenous peoples in Canada", "section": "Section::::History.:Archaic period.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 736, "text": "The Arctic small tool tradition is a broad cultural entity that developed along the Alaska Peninsula, around Bristol Bay, and on the eastern shores of the Bering Strait around 2,500 BCE (4,500 years ago). These Paleo-Arctic peoples had a highly distinctive toolkit of small blades (microblades) that were pointed at both ends and used as side- or end-barbs on arrows or spears made of other materials, such as bone or antler. Scrapers, engraving tools and adze blades were also included in their toolkits. The Arctic small tool tradition branches off into two cultural variants, including the Pre-Dorset, and the Independence traditions. 
These two groups, ancestors of Thule people, were displaced by the Inuit by 1000 Common Era (CE).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2621171", "title": "Arctic small tool tradition", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 894, "text": "The Arctic Small Tool tradition (ASTt) was a broad cultural entity that developed along the Alaska Peninsula, around Bristol Bay, and on the eastern shores of the Bering Strait around 2500 BC. ASTt groups were the first human occupants of Arctic Canada and Greenland. This was a terrestrial entity that had a highly distinctive toolkit based on microblade technology. Typically tool types include scrapers, burins and side and end blades used in composite arrows or spears made of other materials, such as bone or antler. Many researchers also assume that it was Arctic Small Tool populations who first introduced the bow and arrow to the Arctic. ASTt camps are often found along coasts and streams, to take advantage of seal or salmon populations. While some of the groups were fairly nomadic, more permanent, sod-roofed homes have also been identified from Arctic Small Tool tradition sites.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24162943", "title": "Chertov Ovrag", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 329, "text": "Various stone and ivory tools were found, including a toggle harpoon, used for hunting sea mammals. The emergence of hunting sea mammals was a significant innovation for the Arctic culture, believed to have started from 2000 to 1 BCE. 
The Chertov Ovrag site contributed to the understanding of the development of Arctic culture.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19166484", "title": "Inuit culture", "section": "Section::::Overview of cultural history.:Period IV (1000 BCE-1000 CE).\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 463, "text": "Their hunting methods were greatly improved over previous Arctic cultures. They probably invented the igloo, which is difficult to determine because such ephemeral structures leave no archaeological evidence. They spent the winters in relatively permanent dwellings constructed of stone and pieces of grass; these were the precursors of the later qarmaqs. They were also the first culture to carve seal-oil lamps (\"qulliq\", also spelled \"kudlik\") from soapstone.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11180149", "title": "Arctic ecology", "section": "Section::::Human ecology in the Arctic.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 813, "text": "The Aurignacoid (upper Paleolithic tool-making) tradition of the modern people is most associated with a feature called blade-and-core technology. According to Quaternary scientist C.V. Haynes, Arctic cave art also dates back to the Aurignacoid phase and climaxes during the end of the Pleistocene, which encompasses subjects such as hunting and spirituality. People stemming from the Clovis culture populated northern regions of Canada and formed what led to the Northern Archaic and Maritime Archaic traditions at the end of the Late Glacial period. Recently, small flint tools and artifacts from about 5,000 years ago were discovered that belonged to a culture now generally called the Arctic Small Tool tradition. 
The ASTt people are believed to be the physical and cultural ancestors of modern arctic Inuit.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25003074", "title": "List of Russian explorers", "section": "Section::::History of Russian exploration.:1900s.:Polar exploration.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 1116, "text": "The late 19th century and the early 20th century was marked by a renewed interest in Arctic exploration. Many expeditions of that era met a tragic fate, like the voyages of Eduard Toll, Georgy Brusilov, Vladimir Rusanov and Georgy Sedov, yet brought some valuable geographic results. Modern era polar icebreakers, dating from Stepan Makarov's \"Yermak\", made Arctic voyages safer and led to new attempts to explore the Northern Sea Route. The last major unknown archipelago on Earth, Severnaya Zemlya, was discovered by Boris Vilkitsky during his expedition on the icebreakers \"Taymyr\" and \"Vaygach\" and later explored and mapped by Nikolay Urvantsev and Georgy Ushakov. The Soviet Chief Directorate of the Northern Sea Route under Otto Shmidt completed the exploration of the Russian Arctic and established regular marine communications alongside Russia's northern shores in the 1930s. \"North Pole-1\", the drifting ice station populated by the team led by Ivan Papanin, became the first expedition of its kind in 1937–38, and inaugurated a succession of drifting polar research stations which continues to this day.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21189", "title": "Neolithic", "section": "Section::::Cultural characteristics.:Lithic technology.\n", "start_paragraph_id": 73, "start_character": 0, "end_paragraph_id": 73, "end_character": 212, "text": "The peoples of the Americas and the Pacific mostly retained the Neolithic level of tool technology until the time of European contact. 
Exceptions include copper hatchets and spearheads in the Great Lakes region.\n", "bleu_score": null, "meta": null } ] } ]
null
bwwd2v
what are apertures, f-stops, how does depth of field work, and how does lens measurement factor into the equation?
[ { "answer": "An ideal lens focuses light from a single plane (called the focal plane) onto its sensor. However, that's not super useful, as we often want to take pictures of things that are thick. As it turns out, there is a region around the focal plane where the image is still well focused. This is called the \"field\" of the photo, and the \"depth of field\" (DOF) measures the thickness of this region from the point nearest the camera that is well focused to the farthest point that is well focused.\n\nAs it turns out, actual lenses are not ideal lenses. This matters when it comes to DOF. At small apertures, much less light enters the lens, and it all enters through the middle part of the lens. The result is a larger DOF. In fact, you can make pictures with no lens at all using a pinhole camera. The aperture is so small that the DOF is essentially infinite. Since the amount of light that comes through is similarly small, you need a very bright scene.\n\nSince aperture affects both amount of light and DOF, it's not exactly a DOF control. As less light comes through, more integration time (or exposure time if you're still thinking of a film camera) is required to get an image.\n\nf-number (or f-stop) is the ratio of focal length to aperture diameter. This is a camera-specific idea, but the exposure time for similar f-stops is similar. This was a more interesting parameter when light meters were separate from cameras. Almost all modern cameras use through-the-lens metering and automatic (or at least semi-automatic programs) to select appropriate f-stops and exposure times.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "18427873", "title": "Focus recovery based on the linear canonical transform", "section": "Section::::Depth of field and perceptual focus.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 320, "text": "In photography, depth of field (DOF) means an effective focal length. 
It is usually used for stressing an object and deemphasizing the background (and/or the foreground). The important measure related to DOF is the lens aperture. Decreasing the diameter of aperture increases focus and lowers resolution and vice versa.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13208662", "title": "Lenses for SLR and DSLR cameras", "section": "Section::::Aperture and depth of field.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 311, "text": "The aperture affects not only the amount of light that passes through the lens, but also the depth of field of the resulting image: a larger aperture (a smaller f-number, e.g. f/2.0) will have a shallow depth of field, while a smaller aperture (a larger f-number, e.g. f/11) will have a greater depth of field.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52648", "title": "Camera", "section": "Section::::Physics.:Exposure control.:Aperture.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 391, "text": "Adjustment of the lens opening measured as f-number, which controls the amount of light passing through the lens. Aperture also has an effect on depth of field and diffraction – the higher the f-number, the smaller the opening, the less light, the greater the depth of field, and the more the diffraction blur. The focal length divided by the f-number gives the effective aperture diameter.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37583418", "title": "Glossary of video terms", "section": "Section::::D.:Depth of Field.\n", "start_paragraph_id": 151, "start_character": 0, "end_paragraph_id": 151, "end_character": 355, "text": "The in-focus range of a lens or optical system around an item of interest. It is measured from the distance behind an object of interest, to the distance in front of the object of interest, when the viewing lens is specifically focused on the object of interest. 
Depth of field depends on subject-to-camera distance, focal length of the lens, and f-stop.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10750774", "title": "Tilted plane focus", "section": "Section::::Depth of field.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 404, "text": "Depth of field is an effect that permits bringing objects into focus at varying distances from the camera, and at varying depth between each other, into the field of view. A short lens, as explained above, will bring objects into focus that are relatively close to the camera, but it will also keep focus at greater distances between objects. A telephoto lens will be very shallow in its gamut of focus.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2165927", "title": "Depth of focus", "section": "Section::::\"Depth of focus\" versus \"depth of field\".\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 506, "text": "The same factors that determine depth of field also determine depth of focus, but these factors can have different effects than they have in depth of field. Both depth of field and depth of focus increase with smaller apertures. For distant subjects (beyond macro range), depth of focus is relatively insensitive to focal length and subject distance, for a fixed \"f\"-number. In the macro region, depth of focus increases with longer focal length or closer subject distance, while depth of field decreases.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47474", "title": "Aperture", "section": "Section::::In photography.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 567, "text": "A device called a diaphragm usually serves as the aperture stop, and controls the aperture. The diaphragm functions much like the iris of the eye – it controls the effective diameter of the lens opening. 
Reducing the aperture size increases the depth of field, which describes the extent to which subject matter lying closer than or farther from the actual plane of focus appears to be in focus. In general, the smaller the aperture (the larger the f-number), the greater the distance from the plane of focus the subject matter may be while still appearing in focus.\n", "bleu_score": null, "meta": null } ] } ]
null
34oh2g
why do i have to wait 30 seconds after i unplug my modem to plug it back in? once it's off, isn't it just...off?
[ { "answer": "there are various components that still hold a slight electric charge for a few seconds after you disconnect the power cord. leaving it unplugged for 30 seconds or so allows these components to completely drain making the power cycle completely effective. ", "provenance": null }, { "answer": "Electronics contain capacitors and will maintain a capacitive charge for some length of time. The 30 second window is intended to be enough to let that capacitive charge drain. Not starting discharged can lead to some sequencing problems as the modem powers back up. In a perfect world they'd be designed such that it wouldn't matter, and it usually probably really doesn't.\n\nYou can see it one some electronics when you unplug them and the LED slowly fades out rather than going blank entirely.", "provenance": null }, { "answer": "It has to do with provisioning at the ISP side. Generally they ask for power off for a few minutes, this causes their equipment to \"release\" your modem's information, so when you turn it back on, it acquires new information, be it IP address, updated firmware, whatever.\n\nIf they do tell you \"30 seconds\" - it's either their equipment can be told to manually drop the stored information, or, they want to be SURE you actually powered it off, and will ask you \"what lights are blinking?\" - this proves you didn't just blow the tech off and not power cycle it. ", "provenance": null }, { "answer": "I work for an ISP. We tell you 30 seconds, because it is long enough to make sure that everything fully powers off. 
If you pull the plug and put it back, sometimes it won't reset.\n\nAs for the reset button, if you don't hold that down for 30 seconds it won't actually wipe the config file on our modems.", "provenance": null }, { "answer": "Most of these answers are right to some extent but aren't explaining the whole / true reasoning.\n\nYour modem automatically sends information to your ISP (pings) every so often in order to let them know that your modem is still online. These pings contain your modem's MAC address, which is a serial number unique to your modem. When you unplug your modem and instantly plug it back in, the pings resume without a large delta (time gap from the last ping). Once a ping hasn't been received after a certain amount of time (between 5 - 15 seconds) the ISP's devices can determine that your modem is offline and then do things like releasing your stored MAC-address-to-IP allocation. By waiting a full 30 seconds, both you and the ISP support technician can ensure that the ISP's devices recognize that your modem is offline / has restarted.\n\nSrc: Network Engineer / Computer Scientist", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "28151337", "title": "Dazer Laser", "section": "Section::::Technology.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 350, "text": "The products have built-in security codes for controlled activation. Once the code has been input into the device, it is activated for a time period of 8, 12, or 24 hours. Upon expiration of the time clock, the device shuts off and cannot be used again until the code is re-entered. This prevents unauthorized use in the event it is lost or stolen. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5772470", "title": "Digital Serial Interface", "section": "Section::::Advantages.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 225, "text": "BULLET::::- Because each device has its own wire to the controller (rather than being part of a network) it has no need of an address to be set, so can be replaced simply by unplugging the faulty one and plugging in the new.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3345112", "title": "The Addams Family (pinball)", "section": "Section::::Hidden game codes.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 231, "text": "The codes may also temporarily stop working if they are done too many times in a row. Allowing the Attract mode display screens to cycle all the way through (at least 1 or 2 minutes) before trying a code again should rectify this.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11773491", "title": "LV2", "section": "Section::::Concepts.:Worker Thread.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 612, "text": "One capability that a host can provide to a plugin is a \"worker thread\". In programming terms, this means that the plugin can offload some work to be done in another thread that the host provides. This is generally useful because a plugin is usually run in the real-time audio thread of an application, and hence cannot do any non-real-time safe operations (disk-accesses, system calls, etc.). To make it easy for the plugin to achieve its goals (e.g.: load a file from disk), the host can provide a worker thread. 
The host provides the LV2_Extension for the worker thread and the plugin is then able to use it.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29748150", "title": "DebugWIRE", "section": "Section::::debugWIRE specifications.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 252, "text": "debugWIRE can be disabled with e.g. JTAGICE mkII by sending a special reset command that disables temporarily the debugWIRE function and reenables /RESET and also ISP until next power down cycle. debugWIRE is not able to program the fuses of a device.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "64817", "title": "600 series connector", "section": "Section::::Varieties.:611 socket.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 283, "text": "This is particularly designed for mode 3 connection. The incoming line to the mode 3 device is connected using pair one, and pair two is used as the outgoing line to other devices. If the mode 3 device is unplugged, the switch contacts maintain line connection to the other devices.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13491002", "title": "Huawei E220", "section": "Section::::Features.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 367, "text": "Updating the modem's Dashboard does not remove or affect the network-lock (that may be in effect with modems purchased subsidized from a service provider) that prevents you from using the modem with any service provider. However updating the modem's firmware may remove this network-unlock or even the opposite, turn a network-unlock free modem into an unlocked one.\n", "bleu_score": null, "meta": null } ] } ]
null
1t4uaz
What advice can askhistorians give me on becoming a professional historian.
[ { "answer": "Not to dissuade any new advice, but we've collected past posts in this topic under the FAQ section [History Careers and Education](_URL_0_).", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "43216976", "title": "The Historian's Craft", "section": "Section::::Content.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 684, "text": "The work explores the craft of the historian from a number of different angles and discusses what constitutes history and how it should be configured and created in literary form by the historian. The scope of the work is broad across space and time: in one chapter, for instance, he cites a number of examples of erroneous history-writing and forgeries, citing sources as wide-ranging as the \"Commentaries\" of Julius Caesar and the \"Protocols of the Elders of Zion\". His approach is one that is configured not for those who are necessarily professional historians themselves (members of what he referred to as \"the guild\") but instead for all interested readers and non-specialists.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39195435", "title": "Association of Personal Historians", "section": "Section::::Concept.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 503, "text": "Personal historians have been described as comprising \"journalists, psychotherapists, social workers, nurses, videographers, gerontologists, and people from other helping or writing professions\", as \"retired teachers, journalists, genealogists, and therapists...\" and as \"social workers, journalists and others involved in communications... 
retirees who want to embark on a second career.\" In each case they form \" [g]enerally a one-person conglomerate of ghostwriter, editor, and publishing house...\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19188466", "title": "Helen Gregory", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 734, "text": "In the mid 1970s she was a consultant historian to the private and government sectors, and is believed to be the first graduate historian in Queensland to use her training in this way, demonstrating that privately commissioned histories could be undertaken without sacrificing academic standards or ethical integrity. She was the founder of the Brisbane History Group and the Professional Historians' Association (Queensland), the professional association which promotes the interests of consulting historians in Queensland, and maintains standards of practice. As well as commissioned history, Ms Gregory is the author or co-author of many academic articles and studies and several entries in the Australian Dictionary of Biography.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4150331", "title": "Institute of Historical Research", "section": "Section::::Role.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 270, "text": "To provide a welcoming environment where historians at all stages in their careers and from all parts of the world can meet formally and informally to exchange ideas and information, and to bring themselves up to date with current developments in historical scholarship\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52265494", "title": "Sabine von Heusinger", "section": "Section::::Career.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 282, "text": "Her main fields of expertise are the history of history, the history of the church, the history of religions and confessions, the 
history of everyday life, the family, life forms, women and gender, regional, urban and local history as well as social, political and cultural orders.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2561766", "title": "Gilder Lehrman Institute of American History", "section": "Section::::Website.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 495, "text": "The institute maintains a non-free Web site to serve as a portal for American history on the Web, to offer educational material for teachers, students, historians, and the public, and to provide up-to-the-minute information about the institute's programs and activities. The Web site offers learning modules on major topics in American history, podcasts from noted historians discussing their work, online exhibitions of primary source documents, and information about the institute's programs.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9355174", "title": "Georgia Historical Society", "section": "Section::::Vincent J. Dooley Distinguished Fellows Program.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 525, "text": "The Dooley Distinguished Research Fellows Program will also mentor the next generation of historians by giving younger scholars the opportunity to conduct research for a specific period of time in the vast collection of primary sources at the Georgia Historical Society Research Center. The research is expected to lead to a major piece of scholarly work such as: a dissertation, a book, an article in a refereed scholarly journal, a chapter in an edited collection, or an academic paper presented at a scholarly conference.\n", "bleu_score": null, "meta": null } ] } ]
null
9rproh
Why so much variation in the spelling of Irish surnames?
[ { "answer": "It's a byproduct of British colonialism. During their process of colonisation, the English settlers often took Irish names - of people and places - and Anglicised them. For example, the capital of the Republic is Dublin, based on the Viking settlement that used to be there called Dubhlinn (Blackpool if translated literally). More than that, though, while Irish was never prohibited in general (contrary to many popular myths), English was the language which dominated the Irish education system and civil service, and English was taught *exclusively* in the education system until 1871. There was major social pressure from the Catholic Church as well to discontinue the usage of Irish and they advocated against people speaking it until around the 1890s. \n\nThat attitude continued among a huge section of Ireland in spite of the Cultural Revival at the turn of the 20th century because employment opportunities were to be found in the Anglosphere - the United Kingdom and the United States, so Irish people were encouraged to learn English for when they would \"inevitably\" emigrate.\n\nNow to the thrust of your question: Irish people's names are spelled with such variety because they weren't originally in English. They had to be Anglicised at some point, and the method by which that was done wasn't consistent. O'Neill vs O'Neal vs O'Neil, for example, would translate roughly back to Ó'Néill (grandson/descendant of Néill). Another example could be Piers v Pierce v Pearse v Pearson, which would go back to Mac Phearais (or Nic Phearais if they were a woman).\n\nThat's not to say Irish names in the original format are extremely consistent either. 
Ó'Néill could be Ua Néill, or Uí Néill, or Ní Néill but that's defined by rules - depending on stuff such as a person's gender.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "27092361", "title": "Poland (surname)", "section": "Section::::Etymology.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 628, "text": "A further reason for the variety in anglicised forms of the surname can be explained, as Irish Catholic priests, whilst literate, were only required to record surname spellings phonetically on birth certificates. This led to individuals sometimes having their surname recorded with a different spelling from their father, and indeed there are many examples of individuals with their surname spelt differently from their birth and death certificates even in the 20th century. Whilst literacy rates amongst the general Irish population were low in the 17th, 18th and 19th centuries, such variant spellings were rarely questioned.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2475455", "title": "Carey (surname)", "section": "Section::::From the Irish records.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 643, "text": "Throughout this period and the following centuries, as noted by the Registrar General, R. E. Matheson in his report of 1901, surnames in Ireland had become altered in form by regional dialects and pronunciation, the anomalies of anglicisation and the effects of illiteracy, so as to occur in a bewildering variety of forms, even within the same families. Alongside this is the process of simplification already mentioned, reinforced by the mutation of the Irish form into English letters, e.g. the 'y' ending in English replacing 'aigh', 'aidh', 'dha' and even 'n' endings, 'áin, ín' etc. 
\"cf.\" 'Tipperary' and Irish original 'Tiobrad Árann'.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12339240", "title": "O'Hare (surname)", "section": "", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 938, "text": "Down from the Anglo-Norman invasion, the names in use in Ireland were almost purely Gaelic, however the English forced the Irish to adopt English surnames. Accordingly it was enacted by the statute of Edward IV (1465), that every Irishman dwelling within the Pale, which then comprised the counties of Dublin, Meath, Louth and Kildare, should take an English surname. The Irish people were forced into adopting an English surname, or at least an English version of their Irish surname, therefore many removed the 'Mac' or 'O' from their surname. However The O'Hehir and O'Hare families did not drop the 'O', nor did they adopt an English version of their surnames. As a result, they would have had to endure extreme hardship and suffering because of such opposition. (The creation of societies such as the Gaelic League in the late 19th Century resulted in the widespread resumption of the 'Mac' and 'O' prefixes to many Irish surnames.)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "57767309", "title": "Keillor (surname)", "section": "Section::::Origin and variants.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 757, "text": "Until the gradual standardization of English spelling in the last few centuries, English lacked any comprehensive system of spelling. Medieval Scottish names, particularly as they were anglicized from the original Gaelic, historically displayed wide variations in recorded spellings as scribes of the era spelled words according to how they sounded rather than any set of rules. This means that a person's name was often spelled several different ways over a lifetime. 
As such, different variations of the Keillor surname usually have the same origin. Aside from the United Kingdom, variants of the surname can be found today across the English-speaking world, particularly in Michigan, Wisconsin, Minnesota, and Ontario in North America, and in Australia.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "57729647", "title": "Glady (surname)", "section": "Section::::Origins and variants.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 619, "text": "Until the gradual standardization of German spelling with the introduction of compulsory education in late 18th and early 19th centuries, many names displayed wide variations in spelling. In addition, as was quite common in earlier eras of mass migration, many immigrants either changed their surnames completely, or decided to use one of a number of various anglicized spellings of their original surnames, leading to wide variation in the spelling and pronunciation of what was originally the same name. As such, different variations of surnames such as Glady, from the original Glöde, usually have the same origin. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "57831165", "title": "Teschow (surname)", "section": "Section::::Origin and Variants.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 605, "text": "Until the gradual standardization of German spelling with the introduction of compulsory education in late 18th and early 19th centuries, many names displayed wide variations in spelling. In addition, as was quite common in earlier eras of mass migration, many immigrants either changed their surnames completely, or decided to use one of a number of various anglicized spellings of their original surnames, which further led to wide variations in the spelling and pronunciation of what was originally the same name. 
As such, different variations of surnames such as Teschow usually have the same origin.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24318510", "title": "O'Neill (surname)", "section": "Section::::Origins.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 711, "text": "It is due to the Anglicization of the original Irish that the several spelling variations have emerged, during the transcribing of the name into English. As well, all variations upon the O'Neill spelling are incorrect. This is mainly due to the lack of literacy and ability to spell (common at the time), and people wishing to associate themselves with the O'Neill royalty. Irish and Scottish variants also exist and include MacNeal, MacNiel and MacNeill, which arose when the \"ua\" element in the name was replaced with \"mac\", meaning \"son of.\" Ó has replaced Ua since the end of a standard Irish and its gradual evolution into Scottish, Manx and Irish. O'Neill is also occasionally found used as a given name.\n", "bleu_score": null, "meta": null } ] } ]
null
qkbsv
Would it be possible to create a stable, artificial ring around our planet (any celestial body, really)?
[ { "answer": "The issue is the amount of material you're talking about. Realistically you'd want to push asteroids around to form the belt. Then your main issue is what this is going to do to the gravity of the earth/moon system.\n\nLet's throw some numbers around for fun. Average asteroid density is about 2 g/cm^3, and a megastructure at geosync orbit with a 10m x 10m cross-section gets us 6.1 * 10^13 kg. The gravitational pull on the surface would be ... OK, my math broke down since it's a toroid and computing the gravity is problematic. Either way we're talking a huge amount of mass, but honestly, with careful calculation, some good heavy lift rockets and some extra-planetary tugs I don't see why it can't be done. It could even serve a purpose in that it's a great anchor for large space manufacturing, and its minimal gravitational pull would clean out a lot of space junk in geosync.\n\nIf we could build one in LEO then it would be a lot smaller, but it would need some kind of structural integrity to keep itself up, as LEO still experiences some drag from the upper atmosphere. We don't want to be parking large asteroids there, as without maintenance they'll fall on us fairly fast. Geosync has much less of that problem. Edit to add: 8.5 * 10^12 kg of mass there. That's only a cubic mile of material - not something we can launch, but we could get a series of asteroids into place fairly easily. It's just then the question of how to keep them in orbit.\n\nedit: forgot to add earth to the radius of geosync", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1029423", "title": "Megastructure", "section": "Section::::Theoretical.:Stellar scale.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 330, "text": "BULLET::::- A Ringworld (or Niven Ring) is an artificial ring encircling a star, rotating faster than orbital velocity to create artificial gravity on its inner surface. 
A non-rotating variant is a transparent ring of breathable gas, creating a continuous microgravity environment around the star, as in the eponymous Smoke Ring.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17037094", "title": "Non-rocket spacelaunch", "section": "Section::::Dynamic structures.:Orbital ring.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 206, "text": "An orbital ring is a concept for a giant artificially constructed ring hanging at low Earth orbit that would rotate at slightly above orbital speed that would have fixed tethers hanging down to the ground.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1945739", "title": "Orbital ring", "section": "Section::::Birch's model.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 283, "text": "A simple unsupported hoop about a planet is unstable: it would crash into the Earth if left unattended. The orbital ring concept requires cables to the surface to stabilize it, with the outward centrifugal force providing tension on the cables, and the tethers stabilizing the ring.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30643750", "title": "Astronomical engineering", "section": "Section::::Examples.:Science fiction.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 654, "text": "BULLET::::- In the Ringworld series by Larry Niven, a ring a million miles wide is built and spun (to simulate gravity) around a star roughly one astronomical unit away. The ring can be viewed as a functional version of a Dyson sphere with the interior surface area of 3 million Earth-sized planets. Because it is only a partial Dyson sphere, it can be viewed as a construction of a civilization intermediary between Type I and Type II. 
Both Dyson spheres and the Ringworld suffer from gravitational instability, however—a major focus of the Ringworld series is coping with this instability in the face of partial collapse of the Ringworld civilization.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4243069", "title": "10199 Chariklo", "section": "Section::::Rings.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 785, "text": "The existence of a ring system around a minor planet was unexpected because it had been thought that rings could only be stable around much more massive bodies. Ring systems around minor bodies had not previously been discovered despite the search for them through direct imaging and stellar occultation techniques. Chariklo's rings should disperse over a period of at most a few million years, so either they are very young, or they are actively contained by shepherd moons with a mass comparable to that of the rings. However, other research suggests that Chariklo's elongated shape combined with its fast rotation can clear material in an equatorial disk through Lindblad resonances and explain the survival and location of the rings, a mechanism valid also for the ring of Haumea.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "42321551", "title": "Rings of Chariklo", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 777, "text": "The existence of a ring system around a minor planet was unexpected because it had been thought that rings could only be stable around much more massive bodies. Ring systems around minor bodies had not previously been discovered despite the search for them through direct imaging and stellar occultation techniques. Chariklo's rings should disperse over a period of at most a few million years, so either they are very young, or they are actively contained by shepherd moons with a mass comparable to that of the rings. 
The team nicknamed the rings Oiapoque (the inner, more substantial ring) and Chuí (the outer ring), after the two rivers that form the northern and southern coastal borders of Brazil. A request for formal names will be submitted to the IAU at a later date.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26052", "title": "Ringworld", "section": "Section::::Plot summary.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 970, "text": "They first go to the puppeteer home world, where they learn that the expedition's goal is to investigate the Ringworld, a gigantic artificial ring, to see if it poses any threat. The Ringworld is about one million miles (1.6 million km) wide and approximately the diameter of Earth's orbit (which makes it about 600 million miles or 950 million km in circumference), encircling a sunlike star. It rotates to provide artificial gravity 99.2% as strong as Earth's from centrifugal force. The Ringworld has a habitable, flat inner surface (equivalent in area to approximately three million Earths), a breathable atmosphere and a temperature optimal for humans. Night is provided by an inner ring of shadow squares which are connected to each other by thin, ultra-strong wire. When the crew completes their mission, they will be given the starship in which they travelled to the puppeteer home world; it is orders of magnitude faster than any possessed by humans or Kzinti.\n", "bleu_score": null, "meta": null } ] } ]
null
6pkxoc
why do brass instruments only emit a sound when pursing your lips? why can't you just blow into them and make sound?
[ { "answer": "There needs to be some kind of vibration. Your lips vibrate in the mouthpiece and the instrument basically amplifies that vibration. If you just blow, all you do is move air through a bunch of tubes. A saxophone is brass but is considered a woodwind instrument because it has a wooden reed that creates the vibration.", "provenance": null }, { "answer": "Sound is really just the air vibrating. In order for anything to make sound, it must make the air vibrate. A piano makes sound because a hammer hits a string, and the string vibrates, and then that causes the air to vibrate too. This is basically the way all 'string' instruments (like the guitar, violin or cello) work.\n\nThe other large class of musical instruments are the wind instruments. Some of these wind instruments work when you just blow into them (like a recorder), and some don't, but all of them must make vibrations. The instruments that work when you just blow into them work in two steps: first your breath passes through some device so that it makes a \"whushing\" sound. This sound contains a very large range of musical notes, all sitting on top of each other. Then this \"whushing\" sound enters a tube. The tube will only allow a particular musical note to come out: the longer the tube, the lower the note. Here, the air inside the tube is like the string in string instruments; it vibrates at a particular frequency, and this is the sound you hear.\n\nLots of instruments that you blow into, however, do not work this way. Many of them require you to make a particular note first, like a clarinet or a trumpet. Here, you make a note with your lips (or with the reed in a clarinet) and then this note moves into the tubes of the instrument, which, again, will only vibrate at a given note, depending on the length of the pipe.\n\nSo why can't you make a \"whushing\" noise and have a trumpet or clarinet work? You probably could (especially in a clarinet), it just wouldn't be very loud. 
Because of their different materials and construction, different wind instruments are better at filtering out all the unwanted notes in the \"whushing\" noise, leaving (and making louder) the note you want.\n\nThis all comes down to an idea in physics called resonance, which you should probably look up!", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "18502271", "title": "Embouchure collapse", "section": "Section::::Mouthpiece pressure.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 599, "text": "Many brass instrumentalists argue that excessive mouthpiece pressure is a major cause of embouchure problems and can be a factor in causing embouchure collapse. However, the pressure of the mouthpiece is not static during playing: it increases the higher in the register a player plays and the louder volume level. Also, a little mouthpiece pressure is essential to provide a seal between the player's embouchure and the instrument; without this, all the air would escape before entering the instrument and no sound would be emitted (brass instruments are dependent on an airflow to produce sound).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4940", "title": "Brass instrument", "section": "Section::::Sound production in brass instruments.\n", "start_paragraph_id": 84, "start_character": 0, "end_paragraph_id": 84, "end_character": 436, "text": "Because the player of a brass instrument has direct control of the prime vibrator (the lips), brass instruments exploit the player's ability to select the harmonic at which the instrument's column of air vibrates. 
By making the instrument about twice as long as the equivalent woodwind instrument and starting with the second harmonic, players can get a good range of notes simply by varying the tension of their lips (see embouchure).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7100", "title": "Cornet", "section": "Section::::Playing technique.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 608, "text": "Like the trumpet and all other modern brass wind instruments, the cornet makes a sound when the player vibrates (\"buzzes\") the lips in the mouthpiece, creating a vibrating column of air in the tubing. The frequency of the air column's vibration can be modified by changing the lip tension and aperture or \"embouchure\", and by altering the tongue position to change the shape of the oral cavity, thereby increasing or decreasing the speed of the airstream. In addition, the column of air can be lengthened by engaging one or more valves, thus lowering the pitch. Double and triple tonguing are also possible.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10365", "title": "Embouchure", "section": "Section::::Brass embouchure.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 358, "text": "While performing on a brass instrument, the sound is produced by the player buzzing his or her lips into a mouthpiece. Pitches are changed in part through altering the amount of muscular contraction in the lip formation. 
The performer's use of the air, tightening of cheek and jaw muscles, as well as tongue manipulation can affect how the embouchure works.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3247450", "title": "Horn (acoustic)", "section": "Section::::Applications.:Horn-loaded musical instruments.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 540, "text": "This has the effect of providing both the \"brassy\" sound of horn instruments versus woodwinds or even metal instruments which lack a flare, and also of increasing the perceived loudness of the instrument, as harmonics in the range to which the ear is most sensitive are now delivered more efficiently. However, this enhanced radiation in the higher frequencies means by definition less energy imparted to the standing waves, and thus less stable and well-defined notes in the higher registers, making the instrument more difficult to play.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4940", "title": "Brass instrument", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 265, "text": "A brass instrument is a musical instrument that produces sound by sympathetic vibration of air in a tubular resonator in sympathy with the vibration of the player's lips. Brass instruments are also called \"labrosones\", literally meaning \"lip-vibrated instruments\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10455311", "title": "History of primitive, ancient Western and non-Western trumpets", "section": "Section::::Etruria and Ancient Rome.\n", "start_paragraph_id": 67, "start_character": 0, "end_paragraph_id": 67, "end_character": 511, "text": "Like the Greek \"salpinx\" the Roman trumpets were not regarded as musical instruments. 
Among the tems used to describe the tone of the \"tuba\", for instance, were \"horribilis\" (“horrible”), \"terribilis\" (“terrible”), \"raucus\" (“raucous”), \"rudis\" (“coarse”), \"strepens\" (“noisy”) and \"stridulus\" (“shrieking”). When sounding their instruments, the \"tubicines\" sometimes girded their cheeks with the \"capistrum\" (“muzzle”) which \"aulos\" (“flute”) players used to prevent their cheeks from being puffed out unduly.\n", "bleu_score": null, "meta": null } ] } ]
null
2khmos
What type of wood was the medieval trebuchet made of?
[ { "answer": "In all likelihood most siege engines would have been a melange of cut and scavenged woods; some chronicles testify to ships' hulls and masts, and houses, torn apart. However, oak and beech are the most common references in chronicles from Charlemagne (8th c CE) to Froissart (14th c CE), but that would be in areas where it was plentiful from forests in France, England, Germany. Fir was a good replacement: a strong tree with height and stoutness. Ash as well would have been a good substitute for some parts under some stress where flex was acceptable, and it was common for wheels, although it did not grow as big. Hornbeam was used for axles where available; it is pretty much the hardest wood in Europe, although it did not grow in size like other trees. \n\nOnce you get to the Levant there are stories where crusaders needed to travel miles around to find suitable woods, although the woods are not named. The plentiful pines would have been liable to snapping, although Arabic siege engines were made of cedar according to one chronicle of the 8th c CE.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "7063", "title": "Catapult", "section": "Section::::Medieval catapults.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 763, "text": "BULLET::::- Trebuchet: Trebuchets were probably the most powerful catapult employed in the Middle Ages. The most commonly used ammunition were stones, but \"darts and sharp wooden poles\" could be substituted if necessary. The most effective kind of ammunition though involved fire, such as \"firebrands, and deadly Greek Fire\". Trebuchets came in two different designs: Traction, which were powered by people, or Counterpoise, where the people were replaced with \"a weight on the short end\". 
The most famous historical account of trebuchet use dates back to the siege of Stirling Castle in 1304, when the army of Edward I constructed a giant trebuchet known as Warwolf, which then proceeded to \"level a section of [castle] wall, successfully concluding the siege\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4839138", "title": "Warwolf", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 254, "text": "The Warwolf, or War Wolf or Ludgar (\"Loup de Guerre\"), is believed to be the largest trebuchet ever made. It was created in Scotland by order of King Edward I of England, during the siege of Stirling Castle, as part of the Scottish Wars of Independence.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23289836", "title": "Claymore", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 313, "text": "A claymore (; from , \"great sword\") is either the Scottish variant of the late medieval two-handed sword or the Scottish variant of the basket-hilted sword. The former is characterised as having a cross hilt of forward-sloping quillons with quatrefoil terminations and was in use from the 15th to 17th centuries.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3380077", "title": "Ribauldequin", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 520, "text": "A ribauldequin, also known as a rabauld, ribault, ribaudkin, infernal machine or organ gun, was a late medieval volley gun with many small-caliber iron barrels set up parallel on a platform, in use during the 14th and 15th centuries. When the gun was fired in a volley, it created a shower of iron shot. They were employed, specifically, during the early fifteenth century, and continued serving, mostly, as an anti-personnel gun. 
The name \"organ gun\" comes from the resemblance of the multiple barrels to a pipe organ.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23289836", "title": "Claymore", "section": "Section::::Two-handed (Highland) claymore.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 1094, "text": "The two-handed claymore was a large sword used in the late Medieval and early modern periods. It was used in the constant clan warfare and border fights with the English from circa 1400 to 1700. Although claymores existed as far back as the Wars of Scottish Independence they were smaller and few had the typical quatrefoil design (as can be seen on the Great Seal of John Balliol King of Scots). The last known battle in which it is considered to have been used in a significant number was the Battle of Killiecrankie in 1689. It was somewhat longer than other two-handed swords of the era. Though the English did use swords similar to the Claymore during the renaissance called a greatsword. The two-handed claymore seems to be an offshoot of early Scottish medieval longswords (similar to the espee de guerre or grete war sword) which had developed a distinctive style of a cross-hilt with forward-angled arms that ended in spatulate swellings. The lobed pommels on earlier swords were inspired by the Viking style. The spatulate swellings were later frequently made in a quatrefoil design.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21012786", "title": "Torsion siege engine", "section": "Section::::History.:Medieval.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 989, "text": "Jacques de Vitry mentions \"cum cornu\" (\"with horns\") in 1143 whilst referencing siege engines, which could indicate double arms made of horn required by a torsion machine (though it could just as likely be a tension device). The best medieval source is a 12th-century treatise by Mardi ibn Ali al-Tarsusi. 
The account is highly detailed, if incredibly dense. It describes a single-armed torsion machine on a triangular frame that could hurl 50 lbs. stones. Additionally, Persian double-armed devices similar to ancient Greek design are also described. The major problem with this source, however, is that most of the illustrations show trebuchets, not onagers or other torsion machines. Also by the 12th century, siege engines were used in batteries, often consisting of large numbers of torsion devices, as in Philip Augustus’ siege of Chinon in 1205 during which he collected 400 cords for petrariae. These batteries were gradually replaced with trebuchets and early gunpowder machines.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1489027", "title": "Volley gun", "section": "Section::::15th-century volley guns.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 256, "text": "The Ribauldequin was a medieval version of the volley gun. It had its barrels set up in parallel. This early version was first employed during the Hundred Years' War by the army of Edward III of England, in 1339. Later on, the late Swiss army employed it.\n", "bleu_score": null, "meta": null } ] } ]
null
uufd1
Are quarks affected by magnetic fields?
[ { "answer": "Yes, though quarks are never found alone.\n\nAnything with electric charge can be affected by a magnetic field.\n\nA proton is made of two up quarks and a down quark; it has a charge of +1.\n\nThe up, charm, and top quark have a charge of +2/3. The down, strange, and bottom quark have a charge of -1/3.", "provenance": null }, { "answer": "Yes. Consider neutrons for example, that interact magnetically despite having no net charge.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "27582895", "title": "Magnetic catalysis", "section": "Section::::Applications.:Chiral symmetry breaking in quantum chromodynamics.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 595, "text": "In the theory of quantum chromodynamics, magnetic catalysis can be applied when quark matter is subject to extremely strong magnetic fields. Such strong magnetic fields can lead to more pronounced effects of chiral symmetry breaking, e.g., lead to (i) a larger value of the chiral condensate, (ii) a larger dynamical (constituent) mass of quarks, (iii) larger baryon masses, (iv) modified pion decay constant, etc. Recently, there was an increased activity to cross-check the effects of magnetic catalysis in the limit of a large number of colors, using the technique of AdS/CFT correspondence.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47641", "title": "Standard Model", "section": "Section::::Particle content.:Fermions.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 938, "text": "The defining property of the quarks is that they carry color charge, and hence interact via the strong interaction. A phenomenon called color confinement results in quarks being very strongly bound to one another, forming color-neutral composite particles (hadrons) containing either a quark and an antiquark (mesons) or three quarks (baryons). 
The familiar proton and neutron are the two baryons having the smallest mass. Quarks also carry electric charge and weak isospin. Hence they interact with other fermions both electromagnetically and via the weak interaction. The remaining six fermions do not carry color charge and are called leptons. The three neutrinos do not carry electric charge either, so their motion is directly influenced only by the weak nuclear force, which makes them notoriously difficult to detect. However, by virtue of carrying an electric charge, the electron, muon, and tau all interact electromagnetically.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25179", "title": "Quark", "section": "Section::::Interacting quarks.\n", "start_paragraph_id": 52, "start_character": 0, "end_paragraph_id": 52, "end_character": 916, "text": "Since gluons carry color charge, they themselves are able to emit and absorb other gluons. This causes \"asymptotic freedom\": as quarks come closer to each other, the chromodynamic binding force between them weakens. Conversely, as the distance between quarks increases, the binding force strengthens. The color field becomes stressed, much as an elastic band is stressed when stretched, and more gluons of appropriate color are spontaneously created to strengthen the field. Above a certain energy threshold, pairs of quarks and antiquarks are created. These pairs bind with the quarks being separated, causing new hadrons to form. This phenomenon is known as \"color confinement\": quarks never appear in isolation. This process of hadronization occurs before quarks, formed in a high energy collision, are able to interact in any other way. 
The only exception is the top quark, which may decay before it hadronizes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26998617", "title": "Field (physics)", "section": "Section::::Quantum fields.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 471, "text": "In quantum chromodynamics, the color field lines are coupled at short distances by gluons, which are polarized by the field and line up with it. This effect increases within a short distance (around 1 fm from the vicinity of the quarks) making the color force increase within a short distance, confining the quarks within hadrons. As the field lines are pulled together tightly by gluons, they do not \"bow\" outwards as much as an electric field between electric charges.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11274", "title": "Elementary particle", "section": "Section::::Standard Model.:Fundamental fermions.:Quarks.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 590, "text": "Isolated quarks and antiquarks have never been detected, a fact explained by confinement. Every quark carries one of three color charges of the strong interaction; antiquarks similarly carry anticolor. Color-charged particles interact via gluon exchange in the same way that charged particles interact via photon exchange. However, gluons are themselves color-charged, resulting in an amplification of the strong force as color-charged particles are separated. 
Unlike the electromagnetic force, which diminishes as charged particles separate, color-charged particles feel increasing force.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "48400112", "title": "Accretion disk", "section": "Section::::Accretion disk physics.:Magnetorotational instability.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 256, "text": "Most astrophysical disks do not meet this criterion and are therefore prone to this magnetorotational instability. The magnetic fields present in astrophysical objects (required for the instability to occur) are believed to be generated via dynamo action.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36313540", "title": "Proton spin crisis", "section": "Section::::The experiment.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 473, "text": "In this EMC experiment, a quark of a polarized proton target was hit by a polarized muon beam, and the quark's instantaneous spin was measured. In a polarized proton target, all the protons' spin take the same direction, and therefore it was expected that the spin of two out of the three quarks cancels out and the spin of the third quark is polarized in the direction of the proton's spin. Thus, the sum of the quarks' spin was expected to be equal to the proton's spin.\n", "bleu_score": null, "meta": null } ] } ]
null
5fut2f
how does my printer know how much ink is in the cartridge?
[ { "answer": "the inkjet cartridge has a electronic chip inside that counts how many times its asked to jet ink of each color. reach the upper end of that count and you have a good idea when it's going to run out. \n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "11789067", "title": "Compatible ink", "section": "Section::::Comparison of performance, quality and reliability.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 228, "text": "All types of compatible ink cartridges are different and vary from supplier to supplier. This is due to the type of ink in the printer, the chips (or no chip) on the cartridge and the actual manufacture of the cartridge itself.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7808413", "title": "Inkjet refill kit", "section": "Section::::Refilling process.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 608, "text": "BULLET::::- \"Injecting ink\": Depending on the type of cartridge being refilled, ink can either be injected through a hole on top of the cartridge, or directly into the ink chambers after the top has been popped off. The ink can be injected directly from a bottle (with a needle tip on it) or from a needle filled with ink. The ink must be slowly injected into the cartridge so as not to cause damage, or overfilling, or overflow to other-color ink reservoirs. (For colors, a label on the cartridge might have three ordered color-dots to indicate the corresponding three ink colors of the reservoir chambers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11490208", "title": "ISO/IEC 19752", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 280, "text": "Traditionally, printer manufacturers did not employ a standard, well-defined methodology for measuring toner cartridge yield. 
The most widely used description of cartridge capacity was \"number of printed pages at 5% coverage\", with final results depending on a number of factors.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3787783", "title": "Ink cartridge", "section": "Section::::Refills and third party replacements.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 425, "text": "Another option is for the consumer to purchase \"bulk ink\" (in pints, quarts, or gallons) and refill the cartridges themselves. This can be extremely cost-effective if the consumer is a heavy user of cartridges, although care is required while refilling to avoid ink stains on hands, clothes, or surroundings. One US pint (473 ml) is sufficient to fill about 15 to 17 large-capacity cartridges (or 34 to 39 per liter of ink).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7808413", "title": "Inkjet refill kit", "section": "Section::::Refilling process.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 1341, "text": "BULLET::::- \"Installing and running\": Once the cartridge is filled, the top is placed back on (if necessary) and the cartridge can be reinstalled in the printer. Extra ink flowing from the cartridge print-head can be wiped or blotted (for a few minutes). On some cartridges, the ink has a problem getting to the bottom of the cartridge (especially the colored cartridges), it must be forced to the bottom either by suction through the jet plate or by putting pressure from the top with a syringe to purge the ink through the jet plate very gently. It might be necessary to run the printer cleaning utilities on the refilled cartridge, in case any excess ink is left over from the refilling process. 
A note to the unfamiliar: the capacity of the cartridge of some brands is much much more than the cartridge comes with when new (especially the colored ones), there may be room for 2 or 3 times the ink sold in some \"kits\" (this can be learned by doing an autopsy on a non-functioning cartridge, also why the ink does not reach the bottom in the case of some colored inks). In those cases, the needle must be able to reach to within 0.375\" from the bottom or closer to be sure that the ink can reach the jets and not just saturate the sponge. The sponge can in some cases take two or more charges of ink and still not reach the bottom (jets).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25033702", "title": "Toner cartridge", "section": "Section::::Yield.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 391, "text": "Page yield is the number of pages that can be printed with a cartridge. Estimated yield is a number published by manufacturers to provide consumers with an idea of how many pages they can expect from a cartridge. For many years, manufacturers developed their own methods for testing and reporting the yields of their toner cartridges, making it difficult for customers to compare products. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1403168", "title": "Bendix G-15", "section": "Section::::Peripherals.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 505, "text": "The high-speed photoelectric paper tape reader (250 hexadecimal digits per second on five-channel paper tape for the PR-1; 400 characters from 5-8 channel tape for the PR-2) read programs (and occasionally saved data) from tapes that were often mounted in cartridges for easy loading and unloading. Not unlike magnetic tape, the paper tape data are blocked into runs of 108 words or less since that is the maximum read size. 
A cartridge can contain many multiple blocks, up to 2500 words (~10 kilobytes).\n", "bleu_score": null, "meta": null } ] } ]
null
jvhga
I have a question for you /r/askscience. Is this some kind of hoax, or can this really work? I'm looking forward to your downvote if it is a duplicate. Reposted from /r/physics
[ { "answer": "Can you summarize the video for those of us at work?", "provenance": null }, { "answer": "A simple search in askscience for your post turned up no results (it took a lil bit of digging to find this, and I only kept looking because I knew it was there) so you haven't done anything wrong but it has already been asked:\n\n_URL_0_\n\nEDIT: grammar", "provenance": null }, { "answer": "It appears to be fairly simple magnetic propulsion. Keep in mind that the magnets are slowly losing energy each time he is doing this.\n\nI'd also note he hasn't demonstrated the ability to take a curve nor any significant inclination over time (which would be a requirement for his idea to loop back and 'drop' it back on the starting point.\n\nHe kind of loses it about 1/2 way through where he halves the track and then does it the opposite direction - because he forgot to remove the 'raise' he put under the leg (so now it is actually going a bit downhill) - but whatever.", "provenance": null }, { "answer": "A static magnetic field (and gravity) is conservative, so you're not gaining or losing energy, just converting it between potential and kinetic. Placing the magnets in a particular arrangement takes some amount of energy. Some of that energy can be converted to kinetic energy, but you're not going to get kinetic energy without losing potential. \n\nWhen he says:\n\n > It possibly could run continually by itself\n\nHe's wrong. It can't. 
He puts potential energy into it by placing it at one end of the track and that energy is converted into kinetic energy.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2282077", "title": "Misappropriation", "section": "Section::::Scientific research.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 451, "text": "In scientific research, misappropriation is a type of research misconduct. An investigator, scholar or reviewer can obtain novel ideas during the process of the exchange of ideas amongst colleagues and peers. However, improper use of such information could constitute fraud. This can include plagiarism of work or to make use of any information in breach of any duty of confidentiality associated with the review of manuscripts or grant applications.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7015856", "title": "World Trade Center controlled demolition conspiracy theories", "section": "Section::::Criticism.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 416, "text": "Thomas Eagar, a professor of materials science and engineering at the Massachusetts Institute of Technology, also dismissed the controlled-demolition conspiracy theory. Eagar remarked, \"These people (in the 9/11 truth movement) use the 'reverse scientific method.' They determine what happened, throw out all the data that doesn't fit their conclusion, and then hail their findings as the only possible conclusion.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "229723", "title": "Lie", "section": "Section::::Types.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 356, "text": "A big lie is one which attempts to trick the victim into believing something major which will likely be contradicted by some information the victim already possesses, or by their common sense. 
When the lie is of sufficient magnitude it may succeed, due to the victim's reluctance to believe that an untruth on such a grand scale would indeed be concocted.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7904935", "title": "The Killing Joke (novel)", "section": "Section::::Plot summary.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 541, "text": "He goes back to Sally, believing her to be the last chance he has of finding out what was going on. However, when he gets there, her house is blown up. Sally herself is not in, but her mother, who has elephantiasis, is. Sally decides to go with Guy to track down the joke. His only lead is a company called Sphinx, that apparently create vacuum cleaners, as that was where he ended one of the trails of the joke. He tries to track down Sphinx, but cannot, and when he rings their number is left holding for an hour, before being redirected.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3218480", "title": "Katt Williams", "section": "Section::::Career.:Stand-up career.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 936, "text": "The conspiracy conversation is a conversation that we are all familiar with. We know that there are conspiracies out there, but this is a conversation that encompasses a lot of things that aren't being discussed other places. That's the basis for all conspiracy theories: the fact that there is hidden information out there, and how our process changes about things that we thought we used to know. We all, at some point, if we're are at a certain age, we grew up thinking Pluto was a planet. This is probably going to go down as one of my finest works, just because it's a collection of forbidden topics that we can't seem to get answered. I am one of the rare urban public officials. 
Part of my guarantee in my ticket price is that I'm going to be talking about what we are talking about now, and discussing from now to the next time we see [me] again. This is the open discussion that we've had since 2003. This is what it is about.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "16255445", "title": "Arnab Goswami", "section": "Section::::Reception.:Praise.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 340, "text": "The \"Financial Times\" quotes his conviction to be \"I don't believe in creating an artificial consensus, which some people are comfortable with. So if there is something wrong, you can ask yourself two questions: Why did it happen? Will the people who did it go unpunished?\". This was published as the \"Quote of the Week\" by the World Bank.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36667", "title": "Counterfactual definiteness", "section": "Section::::Overview.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 207, "text": "If physics gives up the \"no conspiracy\" condition, it becomes possible for \"nature to force experimenters to measure what she wants, and when she wants, hiding whatever she does not like physicists to see.\"\n", "bleu_score": null, "meta": null } ] } ]
null
2z5s23
Was former L.A. Mayor John Porter a member of the Ku Klux Klan?
[ { "answer": "Not a historian, but you might look at the [San Diego History Center](_URL_3_) for answers on this. They appear to have some primary sources in their collection, but specifically on this they site Kevin Starr's Material Dreams: Southern California through the 1920's (New York: Oxford University Press, 1990). \n\nI also found a [1931 news article](_URL_2_) that mentions it, charging him with \"bias to Jews, Blacks and Negroes.\"\n\nMichael Newton's [White Robes and Burning Crosses: A History of the Ku Klux Klan from 1866](_URL_1_) mentions him.\n\nYou should also seek out the book [Chronological Record of Los Angeles City Officials: 1850—1938](_URL_0_) in a library. You might also look in LA newspaper archives - he was accused of being endorsed by the Klan and admitted his prior membership, so there should be something in the archives from 1928. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "24791745", "title": "1865 in the United States", "section": "Section::::Events.:October–December.\n", "start_paragraph_id": 73, "start_character": 0, "end_paragraph_id": 73, "end_character": 261, "text": "BULLET::::- December 24 – The Ku Klux Klan is formed by six Confederate Army veterans, with support of the Democratic Party, in Pulaski, Tennessee, to resist Reconstruction and intimidate \"carpetbaggers\" and \"scalawags\", as well as to repress the freed slaves.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "16939254", "title": "Race in the United States criminal justice system", "section": "Section::::Historical timeline.:Reconstruction Period (1865–1877).\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 402, "text": "The Ku Klux Klan, was founded in 1865 in Pulaski, Tennessee as a vigilante organization whose goal was to keep control over freed slaves; It performed acts of lawlessness against negroes and other minorities. 
This included taking negro prisoners from the custody of officers or breaking into jails to put them to death. Few efforts were made by civil authorities in the South against the Ku Klux Klan.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14453579", "title": "John Brown Anti-Klan Committee", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 517, "text": "The John Brown Anti-Klan Committee (JBAKC) was an anti-racist organization based in the United States. The group protested against the Ku Klux Klan (KKK) and other white supremacist organizations and published anti-racist literature. Members of the JBAKC were involved in a string of bombings of military, government, and corporate targets in the 1980s. The JBAKC viewed themselves as anti-imperialists and considered African Americans, Native Americans, Puerto Ricans, and Mexicans to be oppressed colonial peoples.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24300885", "title": "Virgil Effinger", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 556, "text": "Virgil H. \"Bert\" Effinger (1873 – 15 December 1955) was a renegade member of the Ku Klux Klan who became the self-proclaimed leader of the Black Legion in the United States, active mostly in Ohio and Michigan. The secret, white vigilante group was made up of native-born Protestant men, many from the South, who were threatened by immigration and contemporary industrial society during the struggles of the Great Depression. One-third of the members in Michigan lived in Detroit. 
Effinger advocated a fascist revolution in the US with himself as dictator.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "48885448", "title": "Harold Preece", "section": "Section::::Biography.:Writing career.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 340, "text": "Preece corresponded with Roy Wilkins and W. E. B. Du Bois as a fighter for civil rights. He continued his support for civil rights in \"New Masses\" as well. He took on the Ku Klux Klan in the October 16, 1945 issue, with the result that \"[i]n 1946 the Ku Klux Klan chased him and his family out of the state, and they moved on to New York.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "57884327", "title": "Brown Harwood", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 522, "text": "Brown Harwood (March 8, 1872 – June 26, 1963) was an American realtor and prominent leader of the Ku Klux Klan. A resident of Fort Worth, Texas, Harwood was a charter member of the Klan in that city; he eventually became Grand Dragon of the Texas Ku Klux Klan. In 1922, Harwood became imperial (national) klazik (vice-president) of the Ku Klux Klan. He stayed in that position until April 14, 1925; the arrest of Klan leader D. C. Stephenson for rape and murder in Indiana turned public opinion against the organization. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44306744", "title": "The Traitor (Dixon novel)", "section": "Section::::Plot summary.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 480, "text": "John Graham, a Confederate veteran and dispossessed planter, serves as the Grand Dragon of the Ku Klux Klan in North Carolina. As black power has been curtailed, the Grand Wizard orders Graham to have one last march through town and finally discontinue their activities. 
(This was ordered by the Klan's first Grand Wizard, Nathan Bedford Forrest.) The Klan members burn their robes and bury them in a grave. Two weeks later Graham's rival, Steve Hoyle, starts a new Ku Klux Klan.\n", "bleu_score": null, "meta": null } ] } ]
null
wgfqv
How come we can see distant galaxies but just recently discovered Pluto's fifth moon?
[ { "answer": "Galaxies and stars are very bright, so you can see them from farther away. Pluto and its moon do not emit light and all we see from them is reflected sunlight off their surface. \n\nIt's kindof like how you can see a streetlight from miles away at night, while you can't see the rock 10 feet away. ", "provenance": null }, { "answer": "Galaxies emit light, and lots of it. Spectacularly large amounts - a single galaxy can contain hundreds of billions of stars. Pluto's moon, on the other hand, is very small and emits light only by reflection from the Sun. What this means is that the amount of light received by a telescope on or near Earth from Pluto's moon is actually less than that received from many distant galaxies, making it harder to spot. There's the added complication that it moves around, since it's orbiting Pluto, whereas distant galaxies are effectively stationary in the sky, meaning a long exposure image won't necessarily detect it.", "provenance": null }, { "answer": "While galaxies are much, much further away from us than pluto is, they are also much, much larger than pluto.\n\nCheck [this pic](_URL_0_) of the relative sizes in the night sky of the andromeda galaxy and the moon.\n\nAs you can see, on a clear night far from city lights you could probably perceive the galaxy with your naked eye simply because it's so huge in the night sky.\n\nPluto, by comparison, would be impossible to spot with the naked eye because, despite its proximity to us, it is extremely small.\n\nIt's like being on top of a mountain and wondering why you can see the small patch of forest 60 miles away, but cannot see the mite of dust 5 feet away from you.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "23140545", "title": "Mordor", "section": "Section::::Allusions in other works.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 257, "text": "In July 2015 NASA published photographs taken as the 
New Horizons space probe passed within 7000 miles of Pluto. A photo of Pluto's largest moon, Charon, shows a large dark area near its north pole. The dark area has been unofficially called Mordor Macula.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52838", "title": "Charon (moon)", "section": "Section::::Observation and exploration.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 640, "text": "Since the first blurred images of the moon , images showing Pluto and Charon resolved into separate disks were taken for the first time by the Hubble Space Telescope in the 1990s . The telescope was responsible for the best, yet low quality images of the moon. In 1994, the clearest picture of the Pluto-Charon system showed two distinct and well defined circles . The image was taken by Hubble's Faint Object Camera (FOC) when the system was 4.4 billion kilometers (2.6 billion miles) away from Earth Later, the development of adaptive optics made it possible to resolve Pluto and Charon into separate disks using ground-based telescopes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "584499", "title": "James W. 
Christy", "section": "", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 233, "text": "In more modern telescopes, such as the Hubble or ground-based telescopes using adaptive optics, separate images of Pluto and Charon can be resolved, and the \"New Horizons\" probe took images showing some of Charon's surface features.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19363373", "title": "Moons of Haumea", "section": "Section::::Surface properties.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 290, "text": "The moons of Haumea are too faint to detect with telescopes smaller than about 2 metres in aperture, though Haumea itself has a visual magnitude of 17.5, making it the third-brightest object in the Kuiper belt after Pluto and Makemake, and easily observable with a large amateur telescope.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2276960", "title": "Tucana Dwarf", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 340, "text": "The Tucana Dwarf Galaxy is a dwarf galaxy in the constellation Tucana. It was discovered in 1990 by R.J. Lavery of Mount Stromlo Observatory. It is composed of very old stars and is very isolated from other galaxies. Its location on the opposite side of the Milky Way from other Local Group galaxies makes it an important object for study.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "390905", "title": "New Horizons", "section": "Section::::Journey to Pluto.:Pluto approach.\n", "start_paragraph_id": 100, "start_character": 0, "end_paragraph_id": 100, "end_character": 318, "text": "On February 12, 2015, NASA released new images of Pluto (taken from January 25 to 31) from the approaching probe. \"New Horizons\" was more than away from Pluto when it began taking the photos, which showed Pluto and its largest moon, Charon. 
The exposure time was too short to see Pluto's smaller, much fainter, moons.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3048284", "title": "Moons of Pluto", "section": "Section::::Characteristics.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 210, "text": "An intense search conducted by \"New Horizons\" confirmed that no moons larger than 4.5 km in diameter exist at the distances up to 180,000 km from Pluto (for smaller distances, this threshold is still smaller).\n", "bleu_score": null, "meta": null } ] } ]
null
1q2vdg
how much money (usd) would need to be "destroyed" in order to see a significant rise in the value of the dollar?
[ { "answer": "_URL_0_\n\nThe monetary base went from ~800 billion USD in 2008 to 3.6 trillion 2013 and you had ~10% cumulative inflation over 5 years.\n\nSo if you want to roll back that inflation you'd need to eliminate at least 80% of the monetary base (note that much of the monetary base is electronic), and that might get you back to where the US dollar was in 2008.\n\nIt's a non trivial question honestly, causing deflation - even a trivial amount would cause huge damage to the US economy, which would have effects of the US dollar. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1200154", "title": "Columbus Day Storm of 1962", "section": "Section::::Impact.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 415, "text": "Estimates put the dollar damage at approximately $230 million to $280 million for California, Oregon and Washington combined. Those figures in 1962 dollars translate to $1.8 Billion to $2.2 Billion in 2014 Dollars. Oregon's share exceeded $200 million in 1962 dollars. This is comparable to land-falling hurricanes that occurred within the same time frame (for example, Audrey, Donna, and Carla from 1957 to 1961).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "180847", "title": "United States Note", "section": "Section::::Public debt of the United States.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 412, "text": ", the U.S. Treasury calculates that $239 million in United States notes are in circulation and, in accordance with debt ceiling legislation, excludes this amount from the statutory debt limit of the United States. The $239 million excludes $25 million in United States Notes issued prior to July 1, 1929, determined pursuant to Act of June 30, 1961, 31 U.S.C. 
5119, to have been destroyed or irretrievably lost.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18604409", "title": "Allegations of misappropriations related to the Iraq War", "section": "Section::::Media investigations.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 271, "text": "$12 billion in U.S. currency was transported from the Federal Reserve to Baghdad in April 2003 and June 2004, where it was dispensed by the Coalition Provisional Authority. A Vanity Fair magazine report concluded that of this sum, \"at least $9 billion has gone missing\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3757812", "title": "Victory Tour (The Jacksons tour)", "section": "Section::::Aftermath.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 409, "text": "Estimates of SMC's losses have ranged from $13 million to $22 million ($ million to $ million in modern dollars). Sullivan and his father quietly put the word out around the NFL that the Patriots and their stadium were for sale. Their $100 million asking price for the combined package made more sense when the Patriots qualified for Super Bowl XX after the next season, the first time they had ever done so.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55157486", "title": "Polish material losses during World War II", "section": "Section::::Material losses under German occupation.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 328, "text": "Conclusively, the material losses and destruction was valued at 258 billion prewar Złoty, which amount to 50 billion U.S. Dollars (1939 rate). In relation to the 2017, the aforesaid U.S. Dollar transfers to $850–920 billion U.S. Dollars. As such, Poland's capital city of Warsaw suffered $60 billion U.S. 
Dollars in war losses.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22099091", "title": "Subprime mortgage crisis solutions debate", "section": "Section::::Solvency.:\"Toxic\" or \"Legacy\" asset purchases.:Arguments against toxic asset purchases.\n", "start_paragraph_id": 79, "start_character": 0, "end_paragraph_id": 79, "end_character": 435, "text": "Research by JP Morgan and Wachovia indicates that the value of toxic assets (technically CDO's of ABS) issued during late 2005 to mid-2007 are worth between 5 cents and 32 cents on the dollar. Approximately $305 billion of the $450 billion of such assets created during the period are in default. By another indicator (the ABX), toxic assets are worth about 40 cents on the dollar, depending on the precise vintage (period of origin).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31461654", "title": "United States debt ceiling", "section": "Section::::Debt not covered by ceiling.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 399, "text": "In December 2012, Treasury calculated that $239 million in United States Notes were in circulation. These Notes, in accordance the debt ceiling legislation, are excluded from the statutory debt limit. The $239 million excludes $25 million in United States Notes issued prior to July 1, 1929, determined pursuant to Act of June 30, 1961, 31 U.S.C. 5119, to have been destroyed or irretrievably lost.\n", "bleu_score": null, "meta": null } ] } ]
null
bgn603
what is the purpose of that transparent blue strip on the top of the windshield glass of almost every car?
[ { "answer": "As the sun starts to set it can be shining directly into the drivers eyes. The shade strip lets you block some of that without having to tint the whole window which would make it harder to see out at night.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "22390444", "title": "Glass coloring and color marking", "section": "Section::::Dichroic glass.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 244, "text": "Dichroic glass has one or several coatings in the nanometer-range (for example metals, metal oxides, or nitrides) which give the glass dichroic optical properties. Also the blue appearance of some automobile windshields is caused by dichroism.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1864909", "title": "Window film", "section": "Section::::Primary properties.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 436, "text": "Privacy films reduce visibility through the glass. Privacy film for flat-glass commercial and residential applications may be silvered, offering an unimpeded view from the low-light side but virtually no view from the high-light side. It may also be frosted, rendering the window translucent but not transparent. Privacy films for automobiles are available in gradients of darkness, with the darker tints commonly known as \"limo tint.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "254664", "title": "Windshield", "section": "Section::::Other aspects.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 246, "text": "In many places, laws restrict the use of heavily tinted glass in vehicle windshields; generally, laws specify the maximum level of tint permitted. 
Some vehicles have noticeably more tint in the uppermost part of the windshield to block sunglare.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34957798", "title": "Ford F-Series (seventh generation)", "section": "Section::::Special Order Equipment.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 347, "text": "BULLET::::- Windshield, Tinted: With this Special Order option, tinted glass can be ordered for the windshield only, instead of all the vehicles windows. The tinted windshield offers the benefit of reduced glare for the driver on bright, sunny days, plus it costs less than Tinted Glass, Complete option-especially on crew cab or SuperCab models.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8656637", "title": "Vicke Lindstrand", "section": "Section::::Monumental glass sculptures.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 260, "text": "Constructed with thousands of 8mm thick glass window panes, glued together using invisible 2 part epoxy glue (Ciba-Geigy). The flat-drawn glass used (Emmaboda, Sweden) is deep green when seen from the edge of the pane (modern float glass would be much paler).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "254664", "title": "Windshield", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 452, "text": "The windshield (North American English) or windscreen (Commonwealth English) of an aircraft, car, bus, motorbike or tram is the front window, which provides visibility whilst protecting occupants from the elements. 
Modern windshields are generally made of laminated safety glass, a type of treated glass, which consists of, typically, two curved sheets of glass with a plastic layer laminated between them for safety, and bonded into the window frame.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47202079", "title": "Windshield sun shades", "section": "Section::::Use.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 529, "text": "The windshield glass itself blocks most of the UV light and some of the infrared radiation. But it can't protect from the visible light that mostly penetrates through it and gets absorbed by the objects inside the car. The visible light that passes into the interior through the windshield is converted into the infrared light which, in its turn, is blocked by the glass and gets trapped inside, heating up the interior. Windshield sun shades have a reflective surface to bounce the light back, reducing the interior temperature\n", "bleu_score": null, "meta": null } ] } ]
null
5uldrh
why primary education is disproportionately a female institution?
[ { "answer": "That's a pretty tough question to answer, and I think it also depends on the country you live in. \n\nA lot of people think that the reason there are a majority of female teachers is because society puts pressure on girls to go into fields that have a more nurturing nature like teaching, child care, and nursing/medical fields. \n\nOr it could be that, in general, women are more likely to go into fields like that because of their biology, as women are more genetically programmed for these types of things. Or maybe they just enjoy it more. \n\nIt's really more of an open-ended discussion than a cut-and-dry answer. ", "provenance": null }, { "answer": "It's work that the average woman would find more suitable as there is less physical labor and more interpersonal skill necessary. Plus women are generally seen as being more trustworthy to be around kids, especially with all the pedophile hysteria in recent years.", "provenance": null }, { "answer": "This might be somewhat off topic but I work somewhat with nurses by delivering them their patient's medication. It is surprising how many men are nurses. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2313092", "title": "Education in Africa", "section": "Section::::Women's education.:Disparity in Education.\n", "start_paragraph_id": 77, "start_character": 0, "end_paragraph_id": 77, "end_character": 1063, "text": "The foremost factor limiting female education is poverty. Economic poverty plays a key role when it comes to coping with direct costs such as tuition fees, cost of textbooks, uniforms, transportation and other expenses. Wherever, especially in families with many children, these costs exceed the income of the family, girls are the first to be denied schooling. This gender bias decision in sending females to school is also based on gender roles dictated by culture. 
Girls usually are required to complete household chores or take care of their younger siblings when they reach home. This limits their time to study and in many cases, may even have to miss school to complete their duties. It is common for girls to be taken out of school at this point. Boys however, may be given more time to study if their parents believe that the education will allow them to earn more in the future. Expectations, attitudes and biases in communities and families, economic costs, social traditions, and religious and cultural beliefs limit girls’ educational opportunities.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2313092", "title": "Education in Africa", "section": "Section::::Women's education.:Significance.\n", "start_paragraph_id": 82, "start_character": 0, "end_paragraph_id": 82, "end_character": 548, "text": "Education, especially for girls, has social and economic benefits for society as a whole. Women earn only one tenth of the world’s income and own less than one percent of property, so households without a male head are at special risk of impoverishment. These women will also be less likely to immunize their children and know how to help them survive. Women who are educated tend to have fewer and healthier children, and these children are more likely to attend school. Higher female education makes women better-informed mothers and hence could\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39685204", "title": "Women in South Sudan", "section": "Section::::Gender equality.:Gender roles.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 1071, "text": "Gender inequality and strict gender roles are maintained by the strict patriarchal social system. This generally marginalizes women out of roles of power and productive wage-paying jobs. Women are expected to take on the role as the caretaker of the household. 
They bear the role of providing food and sanitized water for the household. They are expected to care for children, the elderly, and the sick. Consequently, the second leading reason girls do not go to school is due to increased care work. Due to the increased displacement of people being affected by the conflict, women are taking on extra care-taking responsibilities since the members of their households have increased. Accordingly, the conflict has increased their reproductive responsibilities which has further limited their access to education, political participation, and other activities. In addition, women are more susceptible to food insecurity and malnutrition owing to the fact that they are culturally and socially expected to refuse food in order to provide more for the rest of the family.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19595245", "title": "Women in Mali", "section": "Section::::Education.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 393, "text": "Education is compulsory from ages six to 15. However, many children do not attend school, and girls' enrollment is lower than that of boys at all levels due to factors such as poverty, societal preference to educate boys, child marriage and sexual harassment. Women's literacy rate (aged 15 and over) is significantly lower than that of men: female 22.2%, compared to male 45.1% (2015 est.). 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6070524", "title": "Jules Ferry laws", "section": "Section::::Laws of 1882.:Article 4.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 261, "text": "Primary education is compulsory for children of both sexes between the ages of six to thirteen years and may be given either in institutions of primary or secondary schools, in public or free schools, or in the home, by the father himself or anyone he chooses.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "42632230", "title": "Girls' School Committee of 1866", "section": "Section::::Background and context.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 1114, "text": "Since the introduction of a public compulsory school system for children of both sexes in 1842, education for females had been a constant question of debate for politicians as well as in intellectual circles: while the new school system allowed every male the opportunity to go from compulsory education to secondary education and finally university, the public school system was closed to females after 5th grade. Except for private teachers, only two educational institutions were open to females after puberty: the free pauper schools, which taught poor girls professions, and the girls' schools for students from the middle and upper classes. These existing girls schools were normally more or less equivalent to finishing schools, with the goal of making the student a \"lady\", and they were forcefully criticized for their shallow and \"useless\" education. 
In 1842, only five girls' schools offered a more serious academic secondary education: Wallinska skolan in Stockholm, Askersunds flickskola in Askersund, and Fruntimmersföreningens flickskola, Kjellbergska flickskolan and Societetsskolan in Gothenburg.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9384153", "title": "Female education", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 830, "text": "Female education is a catch-all term of a complex set of issues and debates surrounding education (primary education, secondary education, tertiary education, and health education in particular) for girls and women. It includes areas of gender equality and access to education, and its connection to the alleviation of poverty. Also involved are the issues of single-sex education and religious education, in that the division of education along gender lines as well as religious teachings on education have been traditionally dominant and are still highly relevant in contemporary discussions of educating females as a global consideration. In the field of female education in STEM, it has been shown that girls’ and women under-representation in science, technology, engineering and mathematics (STEM) education is deep rooted.\n", "bleu_score": null, "meta": null } ] } ]
null
11j430
When you lose your memory, or if you have a hard time remembering things, is that because your brain can't "store" the memories properly or because it can't "retrieve" them properly?
[ { "answer": "The answer really is \"it depends.\" Disruption in both storage (called encoding) and retrieval can both disrupt your ability to recall memories. There is quite a bit of debate about exactly what goes on when you forget something, with some people arguing that the memory trace (typically referred to as an association) simply decays due to time, while others believe that you build a new association and that this disrupts the original.\n\nThere's a lot of interesting research in the area of directed forgetting that tackles this exact problem. They have shown that intentional forgetting is much more effective if you use a replacement for the association (like if you are trying to forget the word \"bed\" when you originally associated it with \"queen,\" you would then try to memorize the word \"crown\" being associated with \"queen\" and you would be more likely to forget the word \"bed.\"\n\nMost of the evidence points towards both decay and new associations being responsible for forgetting.\n\nIf you spontaneously recall the memory later, then that would be a pretty strong indication that it was an issue with recall and not encoding.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "31217535", "title": "Memory", "section": "", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 648, "text": "Memory is not a perfect processor, and is affected by many factors. The ways by which information is encoded, stored, and retrieved can all be corrupted. The amount of attention given new stimuli can diminish the amount of information that becomes encoded for storage. Also, the storage process can become corrupted by physical damage to areas of the brain that are associated with memory storage, such as the hippocampus. Finally, the retrieval of information from long-term memory can be disrupted because of decay within long-term memory. 
Normal functioning, decay over time, and brain damage all affect the accuracy and capacity of the memory.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5506325", "title": "Memory inhibition", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 566, "text": "Memory inhibition is a critical component of an effective memory system. While some memories are retained for a lifetime, most memories are forgotten. According to evolutionary psychologists, forgetting is adaptive because it facilitates selectivity of rapid, efficient recollection. For example, a person trying to remember where they parked their car would not want to remember every place they have ever parked. In order to remember something, therefore, it is essential not only to activate the relevant information, but also to inhibit irrelevant information. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4402098", "title": "Memory and aging", "section": "Section::::Causes.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 631, "text": "Memory lapses can be both aggravating and frustrating but they are due to the overwhelming amount of information that is being taken in by the brain. Issues in memory can also be linked to several common physical and psychological causes, such as: anxiety, dehydration, depression, infections, medication side effects, poor nutrition, vitamin B12 deficiency, psychological stress, substance abuse, chronic alcoholism, thyroid imbalances, and blood clots in the brain. 
Taking care of your body and mind with appropriate medication, doctor check-ups, and daily mental and physical exercise can prevent some of these memory issues.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1042932", "title": "The Seven Sins of Memory", "section": "Section::::Types of memory failure.:Absent-mindedness.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 300, "text": "This form of memory breakdown involves problems at the point where attention and memory interface. Common errors of this type include misplacing keys or eyeglasses, or forgetting appointments, because at the time of encoding sufficient attention was not paid to what would later need to be recalled.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31217535", "title": "Memory", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 389, "text": "Memory is the faculty of the brain by which data or information is encoded, stored, and retrieved when needed. It is the retention of information over time for the purpose of influencing future action. If past events could not be remembered, it would be impossible for language, relationships, or personal identity to develop. Memory loss is usually described as forgetfulness or amnesia.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "208865", "title": "False memory syndrome", "section": "Section::::Recovered memory therapy.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 387, "text": "Memory consolidation becomes a critical element of false memory and recovered memory syndromes. Once stored in the hippocampus, the memory may last for years or even for life, regardless that the memorized event never actually took place. 
Obsession to a particular false memory, planted memory, or indoctrinated memory can shape a person's actions or even result in delusional disorder.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26685627", "title": "Motivated forgetting", "section": "Section::::Theories.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 488, "text": "The main theory, the \"motivated forgetting theory\", suggests that people forget things because they either do not want to remember them or for another particular reason. Painful and disturbing memories are made unconscious and very difficult to retrieve, but still remain in storage. Retrieval Suppression is one way in which we are able to stop the retrieval of unpleasant memories using cognitive control. This theory was tested by Anderson and Green using the Think/No-Think paradigm.\n", "bleu_score": null, "meta": null } ] } ]
null
3okz11
How often do neutrinos interact with us? What happens when they do?
[ { "answer": " > How often do neutrinos interact with us? \n\nA quick *literal* rule of thumb for neutrinos: 10^11 neutrinos pass through your thumbnail *every second*. It doesn't matter if it's day or night - they interact so rarely that using the earth as shielding won't make a difference. \n\nSo how many of them interact? Well, [your lifetime odds for a neutrino interaction in your body are about 25%](_URL_0_). This means the odds of a neutrino interacting are about 1 in 10^25. For perspective, there are about 10^21 grains of sand on earth, so if one neutrino passed through your body for every grain of sand on earth you could *literally bet your life on nothing happening* and you'd be pretty safe. \n\n > What happens when they do?\n\n[Depends on the energy and flavor of the neutrino.](_URL_2_) They could just bounce off an electron or neutron, imparting some energy in a collision, or they could be absorbed by a neutron and make a proton and electron. There's lots of fun possibilities.\n\n > And, lastly, is the Sun the only source from which the Earth gets neutrinos?\n\n\nTwo more rules I know for neutrinos: The sun emits about 2% of it's energy in neutrinos and about 98% as photons. A supernova, in contrast, releases 99% of it's energy as neutrinos, and only 1% as photons (imagine how much brighter a supernova would be if you could see the neutrinos :D). \n\nThere's a huge number of sources of neutinos, all with different energies and abundances. [Check this plot.](_URL_1_) Nuclear reactors make fucktons of them (among other terrestrial sources), and there's even more that form a sort of 'cosmic neutrino background' dating to the same time as the cosmic microwave background. Supernova and stars are another major source. \n\n\n------\n\nAnd my last favorite fun fact - [look at this picture.](_URL_3_) That is a picture of the sun, but it was *taken at night.* The camera is a neutrino detector under a mountain in Japan. 
*They took a picture of the sun, from underground, at night.* That's the power of neutrinos - they pass right through the world. This picture was taken with the SuperKamiokande detector in Japan, whose neutrino experiments earned the Nobel Prize last week for Takaaki Kajita, which he shared with Canadian astrophysicist Arthur McDonald. ", "provenance": null }, { "answer": "Neutrinos pass through us millions of times per second and almost never interact with us. \n\nWe are exposed to neutrinos from other stars, not just our own, and get an especially high dose when they go super nova.\n\nI heard that if you were at the orbit of mars when a super nova went off at the position of our sun, you could be killed by the neutrinos. Thats how powerful a supernova can be\n", "provenance": null }, { "answer": "I helped build a neutrino detector as a research assistant back in college. The detector is about 2.5 stories tall and wide, and as long as half a football field. It is located in northern Minnesota, and the beam that it is detecting is located near Chicago. The beam goes through the curvature of the earth practically unimpeded. Even then, most of the neutrinos pass right through the detector too without getting picked up. Only a very small number actually get detected. Check out the project though, if you are interested. It's called NOvA.", "provenance": null }, { "answer": "Randall Munroe (of xkcd fame) addressed a variant of this in his \"what if\" column. 
The question he answered was [How close would you have to be to a supernova to get a lethal dose of neutrino radiation?](_URL_1_) \n\nIn Munroe's investigation he poses a rhetorical question:\n\n > Which of the following would be brighter, in terms of the amount of energy delivered to your retina?\n\n * A supernova, seen from as far away as the Sun is from the Earth, or\n * The detonation of a hydrogen bomb pressed against your eyeball?\n\nIt makes for an amusing but informative read which augments the the [excellent answer written by VeryLittle](_URL_0_) above.\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "28930", "title": "SN 1987A", "section": "Section::::Neutrino emissions.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 409, "text": "The Kamiokande II detection, which at 12 neutrinos had the largest sample population, showed the neutrinos arriving in two distinct pulses. The first pulse started at 07:35:35 and comprised 9 neutrinos, all of which arrived over a period of 1.915 seconds. A second pulse of three neutrinos arrived between 9.219 and 12.439 seconds after the first neutrino was detected, for a pulse duration of 3.220 seconds.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3876984", "title": "Journey to Where", "section": "Section::::Story.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 463, "text": "After this reality check, the senior executives review the facts. A neutrino transmission is an advanced method of communication which can cover billions of miles in an instant. A neutrino beam from Earth focused on the Moon \"could\" provide this miraculous two-way contact. Theoretical experiments had just begun in 1999. 
As their years in space would translate into the passage of decades on Earth, there would have been ample time for research and development.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "633325", "title": "Neutrino astronomy", "section": "Section::::Detection methods.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 373, "text": "Since neutrinos interact only very rarely with matter, the enormous flux of solar neutrinos racing through the Earth is sufficient to produce only 1 interaction for 10 target atoms, and each interaction produces only a few photons or one transmuted atom. The observation of neutrino interactions requires a large detector mass, along with a sensitive amplification system.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3952417", "title": "Neutrino detector", "section": "Section::::Theory.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 526, "text": "Neutrinos are omnipresent in nature such that every second, tens of billions of them \"pass through every square centimetre of our bodies without us ever noticing.\" Many were created during the big bang and others are generated by nuclear reactions inside stars, planets, and other interstellar processes. Some may also originate from events in the universe such as \"colliding black holes, gamma ray bursts from exploding stars, and/or violent events at the cores of distant galaxies,\" according to speculation by scientists. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "671090", "title": "Cowan–Reines neutrino experiment", "section": "Section::::Setup.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 755, "text": "Given the small chance of interaction of a single neutrino with a proton, neutrinos could only be observed using a huge neutrino flux. 
Beginning in 1951, Cowan and Reines, both then scientists at Los Alamos, New Mexico, initially thought that neutrino bursts from the atomic weapons tests that were then occurring could provide the required flux. They eventually used a nuclear reactor as a source of neutrinos, as advised by Los Alamos physics division leader J.M.B. Kellogg. The reactor had a neutrino flux of neutrinos per second per square centimeter, far higher than any flux attainable from other radioactive sources. A detector consisting of two tanks of water was employed, offering a huge number of potential targets in the protons of the water.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35182609", "title": "Measurements of neutrino speed", "section": "Section::::Fermilab (1970s).\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 343, "text": "Since the protons are transferred in bunches of one nanosecond duration at an interval of 18.73 ns, the speed of muons and neutrinos could be determined. A speed difference would lead to an elongation of the neutrino bunches and to a displacement of the whole neutrino time spectrum. At first, the speeds of muons and neutrinos were compared.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33338392", "title": "Faster-than-light neutrino anomaly", "section": "Section::::The measurement.:Overview.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 1032, "text": "Since neutrinos could not be accurately tracked to the specific protons producing them, an averaging method had to be used. The researchers added up the measured proton pulses to get an average distribution in time of the individual protons in a pulse. The time at which neutrinos were detected at Gran Sasso was plotted to produce another distribution. 
The two distributions were expected to have similar shapes, but be separated by 2.4 milliseconds, the time it takes to travel the distance at light speed. The experimenters used an algorithm, maximum likelihood, to search for the time shift that best made the two distributions to coincide. The shift so calculated, the statistically measured neutrino arrival time, was approximately 60 nanoseconds shorter than the 2.4 milliseconds neutrinos would have taken if they traveled just at light speed. In a later experiment, the proton pulse width was shortened to 3 nanoseconds, and this helped the scientists to narrow the generation time of each detected neutrino to that range.\n", "bleu_score": null, "meta": null } ] } ]
null
3694vk
Was Joseph Smith sincere?
[ { "answer": "In short, we can show that Joseph and/or his compatriots were involved in intentional deception, explicit plagiarism, and attempts to bury evidence of misdeeds. I don't think we can ever completely rule out psychosis or an epic level of self-dillusion, but I think it highly unlikely considering what we know about his actions and behaviors. There's too much to write to go through it all, but I'll touch on some of the evidence below. \n\n1. As you mentioned, Joseph had a long history of cons and frauds. This started young as a failed seer or glass looker. He was involved in at least one expedition for buried treasure, along with his father. This produced no results. Every time they started to dig, Joseph would say evil spirits took the gold away. He was involved in a fraudulent banking scheme, which he fled from. He was involved in several secret and illegal marriages across multiple states, which he publicly and repeatedly lied about. There are others, but I mention these because they required an active attempt at deception. See [more information here](_URL_4_), and a copy of his youth arrest for [scrying here](_URL_1_). \n\n2. Joseph's origin claims are demonstrably fraudulent. For example, the Book of Mormon claims to have been completed in 600 BC and translated solely from that record in 1829; however, several verses were copied verbatim from the 1611 KJV + Apocrypha (his family bible), including [translation errors of the KJV translators](_URL_3_) (Joseph's family bible). There are many other examples of plagiarism, but this one is the most blatant. Again, something that required an intent to deceive. \n\n3. Joseph had a history of hiding evidence of his misdeeds. 
For example, we have one letter he wrote where he told a potential lover to [burn the letter to protect him](_URL_5_), one of his former wives [accused him of ordering abortions after impregnating a girl](_URL_2_), and ordering his secretary to [burn the minutes of his infamous council of 50](_URL_0_) (his attempt at the presidency) when arrested for treason. \n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "307496", "title": "First Vision", "section": "Section::::Interpretations and responses to the vision.:Criticism and response.\n", "start_paragraph_id": 92, "start_character": 0, "end_paragraph_id": 92, "end_character": 561, "text": "My instinct is to attribute a sincerity to Joseph Smith. And yet at the same time, as an evangelical Christian, I do not believe that the members of the godhead really appeared to him and told him that he should start on a mission of, among other things, denouncing the kinds of things that I believe as a Presbyterian. I can't believe that. And yet at the same time, I really don't believe that he was simply making up a story that he knew to be false in order to manipulate people and to gain power over a religious movement. And so I live with the mystery. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "61364186", "title": "Messiah ben Joseph (LDS Church)", "section": "Section::::\"Mashiaḥ\" Joseph: true prophet, priest, and king.:Renewal & rebirth in the 'City of Joseph'.\n", "start_paragraph_id": 147, "start_character": 0, "end_paragraph_id": 147, "end_character": 1304, "text": "Joseph Smith — a self-professed \"lawful\" heir (an heirship long hidden \"from the world with Christ in God\") to the princely thrones of the house of Judah and the house of Joseph — believing that he had been sent to the earth in its final dispensation, or in \"the fulness of times,\" discovered (as he records in his history) the earth's peoples living in a degenerate, apostate world over which the power of the great Enemy (Satan/Leviathan) had grown monstrously strong. Having in early spring 1820 been given his prophetic calling in vision by God the Father's personal appearance with His Messiah Son (JSH–1), the boy-prophet Joseph (later a recipient of both the lesser and higher priesthoods under the hands of the resurrected John the Baptist and Christ's chief apostles Peter, James and John) went forth into the world, endowed with great authority and power from on high. 
His divinely decreed mission was to \"restore\" the fullness of God's kingdom with its saving ordinances and primordial doctrines that would bring \"efficacy\" to the Davidic Messiah's redeeming atonement made at the meridian of time, and thus consummate salvation for God's chosen people and for all the world, if they would believe, repent, accept His eternal covenant and everlasting gospel, and endure faithfully to the end.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3113151", "title": "Early life of Joseph Smith", "section": "Section::::Religious background.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 639, "text": "Joseph Smith's ancestors had an eclectic variety of religious views and affiliations. For instance, Joseph Smith's paternal grandfather, Asael, was a Universalist who opposed evangelical religion. According to Lucy Smith, Asael once came to Joseph Smith Sr.'s door after he had attended a Methodist meeting with Lucy and \"threw Tom Paine's \"Age of Reason\" into the house and angrily bade him read that until he believed it.\" Conversely, in 1811 Smith's maternal grandfather, Solomon Mack, self-published a book describing a series of heavenly visions and voices he said had led to his conversion to Christianity at the age of seventy-six.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "636946", "title": "Joseph Smith III", "section": "Section::::Teachings on plural marriage.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 890, "text": "Joseph Smith III was an ardent opponent of the practice of plural marriage throughout his life. For most of his career, Smith denied that his father had been involved in the practice and insisted that it had originated with Brigham Young. 
Smith served many missions to the western United States where he met with and interviewed associates and women claiming to be widows of his father, who attempted to present him with evidence to the contrary. In the end, Smith concluded that he was \"not positive nor sure that [his father] was innocent\" and that if, indeed, the elder Smith had been involved, it was still a false practice. However, many members of Community of Christ, and some of the groups that were formerly associated with it are still not convinced that Joseph Smith III's father did indeed engage in plural marriage, and feel that the evidence that he did so is largely flawed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "59012193", "title": "Members of the Church of Jesus Christ of Latter-day Saints in 20th Century Warfare", "section": "Section::::World War I.:Member Involvement.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 1023, "text": "Joseph F. Smith was the president of the Church of Jesus Christ of Latter-Day Saints during the time of World War 1. Even though he was an advocate for peace, when the United States entered WWI by declaring war on Germany, President Smith supported the cause. He supported patriotism and responsibility, and pertaining to the two he once said, “a good Latter-Day Saint is a good citizen in every way.” President Smith showed this support by providing Latter-Day Saint chaplains for active duty military units. This was the first time that the United States Military allowed the LDS Church to directly select active duty member chaplains, and they have continued to allow this practice to the present day. The first three LDS chaplains selected by the church were Calvin Schwartz Smith, Herbert Brown Maw, and Brigham Henry Roberts. 
Even though there were only three chaplains for the 15,000 LDS members in the war, these men labored diligently to contribute to the war effort by religiously strengthening the LDS soldiers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9622352", "title": "Dan Vogel", "section": "Section::::Joseph Smith biography.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 582, "text": "Vogel argues in the biography that Joseph Smith was a pious fraud—that Smith essentially invented his religious claims for what he believed were noble, faith-promoting purposes. Vogel identifies the roots of the pious fraud in the conflict between members of the Smith family, who were divided between the skepticism and universalism of Joseph Smith, Sr., and the more mainstream Protestant faith of Lucy Mack Smith. Vogel interweaves the history of Joseph Smith with interpretation of the Book of Mormon, which is read as springing from the young man's psychology and experiences.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1848014", "title": "William Clayton (Mormon)", "section": "Section::::Early church service.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 812, "text": "We have had the privilege of conversing with Joseph Smith Jr. and we are delighted with his company. We have had a privilege of ascertaining in a great measure from whence all the evil reports have arisen and hitherto have every reason to believe him innocent. He is not an idiot, but a man of sound judgment, and possessed of abundance of intelligence and whilst you listen to his conversation you receive intelligence which expands your mind and causes your heart to rejoice. He is very familiar, and delights to instruct the poor saints. 
I can converse with him just as easy as I can with you, and with regard to being willing to communicate instruction he says, \"I receive it freely and I will give it freely.\" He is willing to answer any question I have put to him and is pleased when we ask him questions.\n", "bleu_score": null, "meta": null } ] } ]
null
q9sdb
Why do cochlear implants not produce normal hearing, and what would they need to do so?
[ { "answer": "Your [cochlea](_URL_3_) is shaped like a snail shell. Throughout this shell, there are hairs that are triggered by different frequencies of sound. Hairs near the base fire in response to high frequency sounds. Hairs near the apex fire in response to low frequency sounds. \n\n[This](_URL_0_) is an artist's rendering of what the array looks like in the cochlea. If this is a 22 channel array, there are 22 spots on the long inserted piece that will stimulate the cochlea at different points along its length. Remember that different points along the way respond to different frequencies, so a 22 channel array would allow for 22 frequencies to be heard. \n\nThe sound produced by cochlear implants has become more realistic over time: initially there was only one frequency/one channel, and progressively more were added so that a patient can hear a wider range of frequencies and better distinguish sounds. \n\n[Wikipedia](_URL_1_) \n\nedit: [This is an article about cochlear implants, place theory, and channels](_URL_2_)", "provenance": null }, { "answer": "Cochlear implants excite nerve cells via an electrode array that is inserted into the cochlea. Each electrode in the array represents one of the channels you mention. This electrode array replaces the thousands of hair cells in the cochlea that normally carry out this function. The electrodes cannot target nerve cells as specifically as the hair cells can, so there is a limit to how closely they can be spaced before no further benefit is gained.\n\nAlso, the ear has some pretty amazing mechanisms for processing sound. One example is a feedback mechanism in the cochlea that causes soft sounds to be perceived as louder than they are. This provides a huge dynamic range: we can hear everything from a mosquito or pin drop to a live concert. The cochlear implant doesn't provide as wide a dynamic range.\n\nInterestingly, 24 bands (channels) is around what we use for automatic speech and speaker recognition. 
The incoming signal is converted into a spectral (frequency) representation and grouped into about 20-24 critical bands prior to further processing.\n\nEDIT: Interesting - > Interestingly", "provenance": null }, { "answer": "medstudent22 is correct. Also, the main problem with CIs is that they can only act in a very linear way, and as we have learned with the invention of CIs, the auditory system is incredibly non-linear. One of the big discoveries was the afferent AND efferent neurons going to/from the hair cells. This means that there is information being sent FROM the brain to the hair cells. This is where and why things get complicated b/c this is not all that clear. So, the idea is that our brain (probably auditory cortex) is sending signals which can 'adjust' the hair cells in response to a stimulus. \n\nThere are actually 3 big companies that make CIs. One of them boasts more channels (i think it's 31 or 33). The idea is that this gives better pitch perception (which is the real problem area w/ CIs. Timing is just fine). However, no one has any real 'proof' yet on whether this is true for the patients. My lab is working on a way to test sound quality for this exact purpose. \n\nWe look at music perception in CI patients. and in a word, it's total shit. So, improving their pitch discrimination and thus improving their musical listening experience is something that we think is important and mostly ignored. ", "provenance": null }, { "answer": "There are already great responses here, but I want to add one thing about the interface between the electrodes and the auditory nerve.\n\nWhen a cochlear implant says it has a channel, 16-channels, 24-channels, 120-channels, what that means is that the frequency spectrum is filtered into that number of passbands. But the signal is not directly sent to the internal electrodes, it is processed first. 
The processing that occurs is the extraction of the amplitude envelope, which is the variation in the amplitude in that channel over time. If you are familiar with signal processing, it's basically the Hilbert envelope, lowpass filtered to 200-300 Hz. In simple terms, this processing turns the original content of the channel into a smooth outline.\n\nThe extracted envelope in each channel is applied to a series of clicks, and the internal electrodes broadcast that signal as electric impulses. The impulses have the same smooth outline, or envelope, as the original signal in each channel. Those electric clicks cause auditory neurons to fire in much the same way as they would in a normal ear. The problem is that many of the neurons are dead, and many of the living ones are separated from the electrode by a wall of bone. When the electric impulses leave the electrodes they travel outward in 3 dimensions, and you can't control which neurons actually respond. You try to send a low-frequency channel to the region of the cochlea that would normally respond to low frequencies, but if there is a lower impedance to some other part of the auditory nerve, then you have a problem. The channels can blend together, or interact in strange ways. I think the main limitation in cochlear implants right now is the interface between the electrodes and the neuron, which is being addressed through research into releasing growth factor from the electrode array that can draw dendrites from the auditory nerve into closer proximity to the electrodes.\n\nOnce that interface is improved, more discrete channels can be used, a greater dynamic range of envelope can be transmitted, and more of the natural processing of the inner ear can be simulated. However, the issue raindiva1 brought up about efferent control of the cochlea will continue to be a problem. For that reason, a CI will not resemble normal hearing for many, many, many years. 
We will probably be able to grow you a new ear from stem cells sooner than we can interface your ear directly with a computer. But that's a hell of a long way off as well.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "60620490", "title": "Management of hearing loss", "section": "Section::::Assistive devices.:Surgery.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 318, "text": "Cochlear implants improve outcomes in people with hearing loss in either one or both ears. They work by artificial stimulation of the cochlear nerve by providing an electric impulse substitution for the firing of hair cells. They are expensive, and require programming along with extensive training for effectiveness.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "768413", "title": "Ear", "section": "Section::::Clinical significance.:Hearing loss.\n", "start_paragraph_id": 51, "start_character": 0, "end_paragraph_id": 51, "end_character": 311, "text": "Hearing aids or cochlear implants may be used if the hearing loss is severe or prolonged. Hearing aids work by amplifying the sound of the local environment and are best suited to conductive hearing loss. Cochlear implants transmit the sound that is heard as if it were a nervous signal, bypassing the cochlea.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56439577", "title": "Temporal envelope and fine structure", "section": "Section::::Transmission by hearing aids and cochlear implants.:Temporal envelope transmission.\n", "start_paragraph_id": 76, "start_character": 0, "end_paragraph_id": 76, "end_character": 1486, "text": "Cochlear implants differ than hearing aids in that the entire acoustic hearing is replaced with direct electric stimulation of the auditory nerve, achieved via an electrode array placed inside the cochlea. 
Hence, here, other factors than device signal processing also strongly contribute to overall hearing, such as etiology, nerve health, electrode configuration and proximity to the nerve, and overall adaptation process to an entirely new mode of hearing. Almost all information in cochlear implants is conveyed by the envelope fluctuations in the different channels. This is sufficient to give reasonable perception of speech in quiet, but not in noisy or reverberant conditions. The processing in cochlear implants is such that the TFSp is discarded in favor of fixed-rate pulse trains amplitude-modulated by the ENVp within each frequency band. Implant users are sensitive to these ENVp modulations, but performance varies across stimulation site, stimulation level, and across individuals. The TMTF shows a low-pass filter shape similar to that observed in normal-hearing listeners. Voice pitch or musical pitch information, conveyed primarily via weak periodicity cues in the ENVp, results in a pitch sensation that is not salient enough to support music perception, talker sex identification, lexical tones, or prosodic cues. Listeners with cochlear implants are susceptible to interference in the modulation domain which likely contributes to difficulties listening in noise.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "241649", "title": "Cochlear implant", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 402, "text": "A cochlear implant (CI) is a surgically implanted neuroprosthetic device that provides a sense of sound to a person with moderate to profound sensorineural hearing loss. Cochlear implants bypass the normal acoustic hearing process, instead replacing it with electric signals which directly stimulate the auditory nerve. 
With training the brain may learn to interpret those signals as sound and speech.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30872390", "title": "Prelingual deafness", "section": "Section::::Treatment.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 849, "text": "Hearing aids and cochlear implants may make the child able to hear sounds in their hearing range—but they don't restore normal hearing. Cochlear implants can stimulate the auditory nerve directly to restore some hearing, but the sound quality isn't that of a normal hearing ear, suggesting that deafness cannot be fully overcome by medical devices. Some say that the benefits and safety of cochlear implants continues to grow, especially when children with implants receive a lot of oral educational support. It is a goal for some audiologists to test and fit a deaf child with a cochlear implant by six months of age, so that they don't get behind in learning language. In fact, there are expectations that if children get fit for implants early enough, they can acquire verbal language skills to the same level as their peers with normal hearing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "241649", "title": "Cochlear implant", "section": "Section::::History.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 279, "text": "However, research indicated that these single-channel cochlear implants were of limited usefulness because they can not stimulate different areas of the cochlea at different times to allow differentiation between low and mid to high frequencies as required for detecting speech.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "197037", "title": "Artificial organ", "section": "Section::::Examples.:Ear.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 454, "text": "In cases when a person is profoundly deaf or severely hard of hearing in 
both ears, a cochlear implant may be surgically implanted. Cochlear implants bypass most of the peripheral auditory system to provide a sense of sound via a microphone and some electronics that reside outside the skin, generally behind the ear. The external components transmit a signal to an array of electrodes placed in the cochlea, which in turn stimulates the cochlear nerve.\n", "bleu_score": null, "meta": null } ] } ]
null
fte4e
If time stops completely at the event horizon, how can black holes grow?
[ { "answer": "What matters is the mass of the black hole from the black hole's perspective. And from its perspective, matter falls in just fine.", "provenance": null }, { "answer": "The answer is that it is really complicated. You are correct that for the Schwarzschild metric around a black hole, the far-away observer never sees anything actually reach the event horizon. However, this assumes that the mass of the object falling in is much smaller than the mass of the black hole. If you take into account the spacetime distortion caused by the smaller object as it falls in, then the far-away observer will actually observe the object reach the surface of the black hole at some finite time. Still, if the object's mass is much smaller than that of the black hole, then this time will be really long.", "provenance": null }, { "answer": "The phenomenon goes by the name *black hole complementarity.* Matter *both* falls into the singularity *and* stays fixed for eternity at the event horizon. Both things occur. This may sound like a paradox, but it really isn't, because no knowledge of what transpires within a black hole can ever filter out of it, so there's no actual contradiction, but merely an apparent one.\n\nIn a very real sense, there are two black holes. One is the black hole that exists to infalling matter; it's a point of zero volume and infinite density. The other is the black hole that exists to all observers who do *not* fall in; it's a spherical shell of energy that grows over time as more matter gets mushed up against it.\n\nSince from the outside a spherical shell gravitates exactly as it would if it were a point, we can tell no difference between the two. But the tiny void just outside the event horizon roils and churns at the quantum scale, and the fluctuations that are present there are determined by what's fallen into the black hole. 
This is the source of Hawking radiation: If the universe ever cools to the point where black holes can radiate their heat away — and it is not certain this will occur — then all the information that fell into the black hole will emerge again, bit for bit, albeit in an entirely homogenized form.\n\nBlack hole complementarity is a relatively new idea in physics, only having been first articulated back in the early 1990s. But in the years since, it's been universally accepted as the truth, as much as any notion that can't be experimentally tested or compared with observation can be.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "14056613", "title": "Nonsingular black hole models", "section": "Section::::Avoiding paradoxes in the standard black hole model.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 746, "text": "For a black hole to physically exist as a solution to Einstein's equation, it must form an event horizon in finite time relative to outside observers. This requires an accurate theory of black hole formation, of which several have been proposed. In 2007, Shuan Nan Zhang of Tsinghua University proposed a model in which the event horizon of a potential black hole only forms (or expands) after an object falls into the existing horizon, or after the horizon has exceeded the critical density. In other words, an infalling object causes the horizon of a black hole to expand, which only occurs after the object has fallen into the hole, allowing an observable horizon in finite time. 
This solution does not solve the information paradox, however.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29320146", "title": "Event horizon", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 792, "text": "Any object approaching the horizon from the observer's side appears to slow down and never quite pass through the horizon, with its image becoming more and more redshifted as time elapses. This means that the wavelength of the light emitted from the object is getting longer as the object moves away from the observer. The notion of an event horizon was originally restricted to black holes; light originating inside an event horizon could cross it temporarily but would return. Later a strict definition was introduced as a boundary beyond which events cannot affect any outside observer at all, encompassing other scenarios than black holes. This strict definition of EH has caused information and firewall paradoxes; therefore Stephen Hawking has supposed an apparent horizon to be used. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14286", "title": "Holographic principle", "section": "Section::::Black hole entropy.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 697, "text": "Stephen Hawking had shown earlier that the total horizon area of a collection of black holes always increases with time. The horizon is a boundary defined by light-like geodesics; it is those light rays that are just barely unable to escape. If neighboring geodesics start moving toward each other they eventually collide, at which point their extension is inside the black hole. So the geodesics are always moving apart, and the number of geodesics which generate the boundary, the area of the horizon, always increases. 
Hawking's result was called the second law of black hole thermodynamics, by analogy with the law of entropy increase, but at first, he did not take the analogy too seriously.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6813660", "title": "Apparent horizon", "section": "Section::::Differences from the (absolute) event horizon.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 299, "text": "In the simple picture of stellar collapse leading to formation of a black hole, an event horizon forms before an apparent horizon. As the black hole settles down, the two horizons approach each other, and asymptotically become the same surface. If the AH exists, it is necessarily inside of the EH.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "151013", "title": "T-symmetry", "section": "Section::::Macroscopic phenomena: black holes.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 471, "text": "The event horizon of a black hole may be thought of as a surface moving outward at the local speed of light and is just on the edge between escaping and falling back. The event horizon of a white hole is a surface moving inward at the local speed of light and is just on the edge between being swept outward and succeeding in reaching the center. They are two different kinds of horizons—the horizon of a white hole is like the horizon of a black hole turned inside-out.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "293670", "title": "Eddington–Finkelstein coordinates", "section": "Section::::Tortoise coordinate.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 496, "text": "The increase in the time coordinate to infinity as one approaches the event horizon is why information could never be received back from any probe that is sent through such an event horizon. 
This is despite the fact that the probe itself can nonetheless travel past the horizon. It is also why the space-time metric of the black hole, when expressed in Schwarzschild coordinates, becomes singular at the horizon - and thereby fails to be able to fully chart the trajectory of an infalling probe.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2089093", "title": "Kruskal–Szekeres coordinates", "section": "Section::::Qualitative features of the Kruskal–Szekeres diagram.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 1349, "text": "The event horizons bounding the black hole and white hole interior regions are also a pair of straight lines at 45 degrees, reflecting the fact that a light ray emitted at the horizon in a radial direction (aimed outward in the case of the black hole, inward in the case of the white hole) would remain on the horizon forever. Thus the two black hole horizons coincide with the boundaries of the future light cone of an event at the center of the diagram (at \"T\"=\"X\"=0), while the two white hole horizons coincide with the boundaries of the past light cone of this same event. Any event inside the black hole interior region will have a future light cone that remains in this region (such that any world line within the event's future light cone will eventually hit the black hole singularity, which appears as a hyperbola bounded by the two black hole horizons), and any event inside the white hole interior region will have a past light cone that remains in this region (such that any world line within this past light cone must have originated in the white hole singularity, a hyperbola bounded by the two white hole horizons). Note that although the horizon looks as though it is an outward expanding cone, the area of this surface, given by \"r\" is just formula_46, a constant. I.e., these coordinates can be deceptive if care is not exercised.\n", "bleu_score": null, "meta": null } ] } ]
null
28fqva
in philosophy, what are epistemology and metaphysics?
[ { "answer": "True ELI5: \n\nEpistemology = \"How do I know shit? What does it mean to know shit?\"\n\nMetaphysics = \"What is this shit? What is shit? What is?\"", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "22500921", "title": "Outline of knowledge", "section": "Section::::Epistemology (philosophy of knowledge).\n", "start_paragraph_id": 68, "start_character": 0, "end_paragraph_id": 68, "end_character": 426, "text": "Epistemology – philosophy of knowledge. It is the study of knowledge and justified belief. It questions what knowledge is and how it can be acquired, and the extent to which knowledge pertinent to any given subject or entity can be acquired. Much of the debate in this field has focused on the philosophical analysis of the nature of knowledge and how it relates to connected notions such as truth, belief, and justification.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "682482", "title": "Human", "section": "Section::::Behavior.:Philosophy and self-reflection.\n", "start_paragraph_id": 136, "start_character": 0, "end_paragraph_id": 136, "end_character": 592, "text": "Philosophy is a discipline or field of study involving the investigation, analysis, and development of ideas at a general, abstract, or fundamental level. It is the discipline searching for a general understanding of reality, reasoning and values. Major fields of philosophy include logic, metaphysics, epistemology, philosophy of mind, and axiology (which includes ethics and aesthetics). 
Philosophy covers a very wide range of approaches, and is used to refer to a worldview, to a perspective on an issue, or to the positions argued for by a particular philosopher or school of philosophy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3193700", "title": "Meta-epistemology", "section": "Section::::Definition.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 439, "text": "Some goals of meta-epistemology are to identify inaccurate traditional assumptions, or hitherto overlooked scope for generalization. Thus whereas epistemology has usually been seen as a branch of philosophy, the discussion below also takes examples from biology which seem equivalent in relevant ways. Also, insofar as philosophy \"is\" involved, there may be a case for extending it beyond its traditional domain of word-based definitions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4119715", "title": "Glossary of education terms (D–F)", "section": "Section::::E.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 572, "text": "BULLET::::- Epistemology: (from the Greek words \"episteme\" (knowledge) and \"logos\" (word/speech)) The branch of philosophy that deals with the nature, origin and scope of knowledge. Historically, it has been one of the most investigated and most debated of all philosophical subjects. Much of this debate has focused on analysing the nature and variety of knowledge and how it relates to similar notions such as truth and belief. 
Much of this discussion concerns the justification of knowledge claims, that is the grounds on which one can claim to know a particular fact.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14471594", "title": "Outline of metaphysics", "section": "Section::::Nature of metaphysics.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 341, "text": "BULLET::::- Branch of philosophy – philosophy is the study of general and fundamental problems, such as those connected with existence, knowledge, values, reason, mind, and language. Philosophy is distinguished from other ways of addressing such problems by its critical, generally systematic approach and its reliance on rational argument.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39727186", "title": "Epistemology of Wikipedia", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 204, "text": "Epistemology is a major branch of philosophy and is concerned with the nature and scope of knowledge. The epistemology of Wikipedia has been a subject of interest from the earliest days of its existence.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2590334", "title": "Metaphysics (Aristotle)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 495, "text": "Metaphysics (Greek: τὰ μετὰ τὰ φυσικά; Latin: \"Metaphysica\") is one of the principal works of Aristotle and the first major work of the branch of philosophy with the same name. The principal subject is \"being qua being,\" or being insofar as it is being. It examines what can be asserted about any being insofar as it is and not because of any special qualities it has. Also covered are different kinds of causation, form and matter, the existence of mathematical objects, and a prime-mover God.\n", "bleu_score": null, "meta": null } ] } ]
null
2zhbk5
how can we smell spring?
[ { "answer": "I'd like to think it's more a mass thawing of hundreds of petrified dog turds.", "provenance": null }, { "answer": "Pollen is just plant spunk. Walking outside in spring is like walking into a huge tree orgy. There really is something in the air, and on the ground, and covering your car (if you park under a tree).", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "18938263", "title": "Havuş", "section": "Section::::Etymology.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 325, "text": "Avuş spring - According to information of local people, the name of the spring is from word of \"ovuc\" (the handful), it was so called as the shape it is similar to the handful. In the Turkic languages, \"avuş/avuj\" means \"vaccine, alum\". And this show the existence of the same material in the content of the water of spring.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "316612", "title": "Spring (hydrology)", "section": "Section::::Formation.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 372, "text": "A spring may be the result of karst topography where surface water has infiltrated the Earth's surface (recharge area), becoming part of the area groundwater. The groundwater then travels through a network of cracks and fissures—openings ranging from intergranular spaces to large caves. The water eventually emerges from below the surface, in the form of a karst spring.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1708140", "title": "Nanalan'", "section": "Section::::Full-Length Show Episodes.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 270, "text": "BULLET::::- 135: Spring - Today is Spring! And that can only mean one thing: spring cleaning! Also, Mona and Russell go on a teeter-totter with Mr. Wooka, smell flowers, and find a cocoon! 
A butterfly comes out of it after reading a book called \"Little Fuzz\" with Nana.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23754380", "title": "Climate of Allentown, Pennsylvania", "section": "Section::::Spring.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 369, "text": "Spring is a season of growth, with new grass growing and flowers blooming. Animals come out of hibernation, and sun fills the city and its surroundings. Temperatures are on the rise, but March and April bring much rain, with light rain that can last for hours on end. That gives way to warm May and June months, and shore weather is in the forecast for many residents.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14431203", "title": "Central Tablelands", "section": "Section::::Climate.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 372, "text": "Spring is when the temperature starts to warm, although frosts and sometimes snow still occur in early spring. Around mid-spring the temperature can get as high as 24 °C (75.2 °F), sometimes even higher. Around spring all the crops and flowers start growing in the Central Tablelands. Nights in spring may still drop below 0 °C (32 °F), especially if the wind is blowing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17985997", "title": "Wells in the Bible", "section": "Section::::Springs.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 338, "text": "A spring is the \"eye of the landscape\", the natural burst of living water, flowing all year or drying up at certain seasons. In contrast to the \"troubled waters\" of wells and rivers (Jer. 2:18), there gushes forth from it \"living water\", to which Jesus compared the grace of the Holy Spirit (John 4:10; 7:38; compare Isaiah 12:3; 44:3). 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "316532", "title": "Spring (season)", "section": "Section::::Ecological reckoning.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 866, "text": "The beginning of spring is not always determined by fixed calendar dates. The phenological or ecological definition of spring relates to biological indicators, such as the blossoming of a range of plant species, the activities of animals, and the special smell of soil that has reached the temperature for micro flora to flourish. These indicators, along with the beginning of spring, vary according to the local climate and according to the specific weather of a particular year. Most ecologists divide the year into six seasons that have no fixed dates. In addition to spring, ecological reckoning identifies an earlier separate prevernal (early or pre-spring) season between the hibernal (winter) and vernal (spring) seasons. This is a time when only the hardiest flowers like the crocus are in bloom, sometimes while there is still some snowcover on the ground.\n", "bleu_score": null, "meta": null } ] } ]
null
6ek466
what's happening inside of a plasma ball?
[ { "answer": "Inside a plasma ball high voltage is used to strip electrons away from atoms of a noble gas (usually neon or argon). Plasma is a state of matter comprising these free electrons. Light (release of photons) happens whenever electrons change orbitals.\n\nThe stream of electrons are negatively charged and looking for a place to \"go\". Your body is a conductor so when you touch the globe you are presenting a path for these free electrons to go to.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "878855", "title": "Naga fireball", "section": "Section::::Explanations.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 344, "text": "A similar explanation involves a similar phenomenon in plasma physics. A free-floating plasma orb, created when surface electricity (e.g., from a capacitor) is discharged into a solution. However, most plasma ball experiments are conducted using high voltage capacitors, microwave oscillators, or microwave ovens, not under natural conditions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "870889", "title": "Plasma globe", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 237, "text": "A plasma globe or plasma lamp (also called plasma ball, dome, sphere, tube or orb, depending on shape) is a clear glass container filled with a mixture of various noble gases with a high-voltage electrode in the center of the container.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32004278", "title": "Surface modification of biomaterials with proteins", "section": "Section::::Fabrication techniques.:Plasma treatment.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 676, "text": "Plasma techniques are especially useful because they can deposit ultra thin (a few nm), adherent, conformal coatings. 
Glow discharge plasma is created by filling a vacuum with a low-pressure gas (ex. argon, ammonia, or oxygen). The gas is then excited using microwaves or current which ionizes it. The ionized gas is then thrown onto a surface at a high velocity where the energy produced physically and chemically changes the surface. After the changes occur, the ionized plasma gas is able to react with the surface to make it ready for protein adhesion. However, the surfaces may lose mechanical strength or other inherent properties because of the high amounts of energy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1251318", "title": "Plasma window", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 330, "text": "Plasma is any gas whose atoms or molecules have been ionized, and is a separate phase of matter. This is most commonly achieved by heating the gas to extremely high temperatures, although other methods exist. Plasma becomes increasingly viscous at higher temperatures, to the point where other matter has trouble passing through.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "751638", "title": "Reactive-ion etching", "section": "Section::::Method of operation.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 322, "text": "Plasma is initiated in the system by applying a strong RF (radio frequency) electromagnetic field to the wafer platter. The field is typically set to a frequency of 13.56 Megahertz, applied at a few hundred watts. 
The oscillating electric field ionizes the gas molecules by stripping them of electrons, creating a plasma.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22063454", "title": "General Fusion", "section": "Section::::Technology.:Power plant design.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 303, "text": "The outside of the sphere is covered with steam pistons, which push the liquid metal and collapse the vortex, thereby compressing the plasma. The compression increases the temperature of the plasma to the point where the deuterium and tritium nuclei fuse, releasing energy in the form of fast neutrons.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30747793", "title": "Plasma polymerization", "section": "Section::::Basic operating mechanism.:Glow discharge.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 925, "text": "Plasma consists of a mixture of electrons, ions, radicals, neutrals and photons. Some of these species are in local thermodynamic equilibrium, while others are not. Even for simple gases like argon this mixture can be complex. For plasmas of organic monomers, the complexity can rapidly increase as some components of the plasma fragment, while others interact and form larger species. Glow discharge is a technique in polymerization which forms free electrons which gain energy from an electric field, and then lose energy through collisions with neutral molecules in the gas phase. This leads to many chemically reactive species, which then lead to a plasma polymerization reaction. The electric discharge process for plasma polymerization is the “low-temperature plasma” method, because higher temperatures cause degradation. These plasmas are formed by a direct current, alternating current or radio frequency generator.\n", "bleu_score": null, "meta": null } ] } ]
null
28zam8
how could pixar produce toy story back in 1995?
[ { "answer": "They had 117 computers running 24 hours a day, which could produce three minutes of the movie a week. It was slow.", "provenance": null }, { "answer": "Rendering 3d video is done through \"render farms\", which are essentially a bunch of computers in a warehouse turning the 3d models created by the artists into fully textured, high resolution video with lighting and particle effects - basically it turns it into a movie. Pixar has a 13500 square foot render farm which houses 3000 AMD processors, with the ability to add workstations to the farm pool after hours, increasing to 5000 processors. \n\nI don't know what their farm stats were in 1995, but they did it the same way. Create the model via wire frames and vertices, assign textures and lighting and effects, send it to the farm to be rendered into a final product. \n\nEdit: these numbers are out of date, from 2010. Apparently they had 12500 cores for Cars 2.", "provenance": null }, { "answer": "Look at Toy Story compared to the sequels though, they're a lot more primitive (still an amazing technical achievement and a great movie)\n\nAs processing power has increased, the depth and definition of Pixar's work has increased massively.", "provenance": null }, { "answer": "The whole reason for the concept of Toy Story is because of the limitations of the technology at the time.\n\nSomewhere in Pixar there was a conversation like \"Damn, our animation methods struggle to capture the complexity of human movement realistically, and when we render our characters they look like they're made of plastic. They look like toys.\" \"Fuck it, let's just have the characters be toys. Keep the humans off-screen whenever we can.\"\n\nLater, as the tech got better they could do insects for A Bug's Life, which move complexly but don't have skin and hair. Then they figured out hair/fur for Monsters Inc, then water for Finding Nemo. Notice how even in those films, adult humans are pretty much never seen. 
Then they could do people, but in a very cartoony way for the Incredibles, and then finally the more realistic human characters in Ratatouille and Up.\n\nBut toys are by far the easiest thing to animate and render, so they did that first.", "provenance": null }, { "answer": "If you can find it, check out *[The Pixar Story](_URL_0_)*. It is unfortunately not on Netflix anymore, but it is very interesting and explains to a certain degree what you are asking.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "53942507", "title": "Steve Segal", "section": "Section::::Animation and film production.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 431, "text": "\"Toy Story\" (1995) was the first computer-animated feature film in Pixar's debut contract with Disney. In 2015, movie writer Julia Zorthian said in TIME, \"Children and adults flocked to theaters when Toy Story opened, making it the highest-selling film for three weeks in a row. As the first full-length, 3D computer-animated movie, it was a milestone for animation, possibly the most significant since the introduction of color.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15335778", "title": "The Pixar Story", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 305, "text": "The Pixar Story, directed by Leslie Iwerks, is a documentary of the history of Pixar Animation Studios. 
An early version of the film premiered at the Sonoma Film Festival in 2007, and it had a limited theatrical run later that year before it was picked up by the Starz cable network in the United States.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20800509", "title": "Toy Story (franchise)", "section": "Section::::Films.:\"Toy Story\" (1995).\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 1185, "text": "\"Toy Story\", the first film in the franchise, was released on November 22, 1995. It was the first feature-length film created entirely by CGI and was directed by John Lasseter. The plot involves Andy (voiced by John Morris), an imaginative young boy, getting a new Buzz Lightyear (Tim Allen) action figure for his birthday, causing Sheriff Woody (Tom Hanks), a vintage cowboy doll, to think that he has been replaced as Andy's favorite toy. In competing for Andy's attention, Woody accidentally knocks Buzz out a window, leading the other toys to believe he tried to murder Buzz. Determined to set things right, Woody attempts to save Buzz, and both must escape from the house of the next-door neighbor Sid Phillips (Erik von Detten), who likes to torture and destroy toys. In addition to Hanks and Allen, the film featured the voices of Don Rickles, Jim Varney, Wallace Shawn, John Ratzenberger, and Annie Potts. The film was critically and financially successful, grossing over $373 million worldwide. 
The film was later re-released in Disney Digital 3-D as part of a double feature, along with \"Toy Story 2\", for a 2-week run, which was later extended due to its financial success.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21281430", "title": "List of Pixar awards and nominations (feature films)", "section": "Section::::Films.:\"Toy Story\".\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 470, "text": "Toy Story was released in 1995 to be the first feature film in history produced using only computer animation. The film, directed by John Lasseter and starring Tom Hanks and Tim Allen, went on to gross over $191 million in the United States during its initial theatrical release, and took in more than $373 million worldwide. Reviews were overwhelmingly positive, praising both the technical innovation of the animation and the wit and sophistication of the screenplay.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53085", "title": "Toy Story", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 733, "text": "Toy Story is a 1995 American computer-animated buddy comedy film produced by Pixar Animation Studios and released by Walt Disney Pictures. The feature film directorial debut of John Lasseter, it was the first entirely computer-animated feature film, as well as the first feature film from Pixar. The screenplay was written by Joss Whedon, Andrew Stanton, Joel Cohen, and Alec Sokolow from a story by Lasseter, Stanton, Pete Docter, and Joe Ranft. The film features music by Randy Newman, and was executive-produced by Steve Jobs and Edwin Catmull. It features the voices of Tom Hanks, Tim Allen, Don Rickles, Wallace Shawn, John Ratzenberger, Jim Varney, Annie Potts, R. Lee Ermey, John Morris, Laurie Metcalf, and Erik von Detten. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7412236", "title": "Steve Jobs", "section": "Section::::1985–1997.:Pixar and Disney.\n", "start_paragraph_id": 63, "start_character": 0, "end_paragraph_id": 63, "end_character": 682, "text": "The first film produced by Pixar with its Disney partnership, \"Toy Story\" (1995), with Jobs credited as executive producer, brought fame and critical acclaim to the studio when it was released. Over the next 15 years, under Pixar's creative chief John Lasseter, the company produced box-office hits \"A Bug's Life\" (1998); \"Toy Story 2\" (1999); \"Monsters, Inc.\" (2001); \"Finding Nemo\" (2003); \"The Incredibles\" (2004); \"Cars\" (2006); \"Ratatouille\" (2007); \"WALL-E\" (2008); \"Up\" (2009); and \"Toy Story 3\" (2010). \"Finding Nemo\", \"The Incredibles\", \"Ratatouille\", \"WALL-E\", \"Up\" and \"Toy Story 3\" each received the Academy Award for Best Animated Feature, an award introduced in 2001.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "78969", "title": "Pixar", "section": "Section::::Feature films and shorts.:Adaptation to television.\n", "start_paragraph_id": 55, "start_character": 0, "end_paragraph_id": 55, "end_character": 538, "text": "\"Toy Story\" was the first Pixar film to be adapted for television as and TV series. \"Cars\" became the second with the help of \"Cars Toons\", a series of 3-to-5-minute short films running between regular Disney Channel shows and featuring Mater (a tow truck voiced by comedian Larry the Cable Guy). Between 2013 and 2014, Pixar released its first two television specials, \"Toy Story of Terror!\" and \"Toy Story That Time Forgot\". \"Monsters at Work\", a television series spin-off of \"Monsters, Inc.\", is currently in development for Disney+.\n", "bleu_score": null, "meta": null } ] } ]
null
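The render-farm answers above come down to one idea: every frame of the film can be computed independently, so the job is fanned out across many machines and the finished frames are collected back in order. A minimal sketch of that fan-out pattern, with a hypothetical `render_frame` placeholder standing in for the actual rasterization work and a thread pool standing in for the farm's machines:

```python
from concurrent.futures import ThreadPoolExecutor

def render_frame(frame_number):
    # Placeholder for the real work: rasterizing models, textures,
    # and lighting into one finished image. Here it just returns a
    # filename-style label so the sketch is runnable.
    return f"frame_{frame_number:04d}.png"

def render_farm(total_frames, workers=4):
    # Frames are independent of one another, so a farm simply fans
    # them out across its machines (modeled here as a thread pool)
    # and collects the results back in frame order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_frame, range(total_frames)))

if __name__ == "__main__":
    # One second of film at 24 fps, rendered "in parallel".
    print(render_farm(24)[:3])
```

Real farms add job queues, retries, and priority scheduling on top of this, but the core is the same embarrassingly parallel map over frame numbers, which is why Pixar could throw 117 machines at it in 1995 and thousands of cores at it today.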
5yvh3n
how do download and upload speeds actually work? (i.e., how do they limit the speed of download through your cables)
[ { "answer": "They limit the speed of the download by limiting how many bits per second are allowed to transfer through the wire to you. Someone, somewhere tracks all the bits that go into your house, and counts those bits. Every second, that count \"refreshes,\" but if that count reaches the max rate, they stop sending traffic through until the next second.\n\nBasically, if your cable supports 10MBPS, they don't limit it by somehow making your cable support 5MBPS, they just transmit 5MB in half a second, then transmit 0MB for the next half a second.", "provenance": null }, { "answer": "I always assumed it had more to do with the physical meaning of \"bandwidth\" rather than the way we measure it in the computer world (digital transfer rate - i.e. Bits per second). I could be terribly wrong, so input is welcome. I'm also no pro, so I may jack up terminology.\n\n\"Bandwidth\" refers to a range of frequencies. Data (signals) is transferred over a cable using a certain frequency. Think of a dump truck. You fill it up with dirt and drive it across town but your max speed is limited by the street's speed limit. If you need to quickly move 5 loads of dirt, the people on both ends have to wait on you while you drive back and forth. That's not very fast. The solution? Fill up 5 trucks *at the same time* and drive them *at the same time* and you move 5 loads in the same time you could have moved one. It's similar with sending signals - you can only send the signals so fast, and then you have to wait before you send more. Solution? Connect 5 wires and send data over them all at the same time. Luckily, instead of adding more wires, you can use the same wire as long as you can use more than one frequency. 
If you can use 5 frequencies then you can send 5 different signals, each with their own frequency, *at the same time.* We can mostly thank Joseph Fourier for this.\n\nAlmost 200 years ago some guy named Joseph Fourier realized that you can take multiple frequencies, mash them together to make one signal, then take them back apart, and end up with the exact same original frequencies. So now we can take 5 frequencies, upload/send data over each one, mash them together, shoot them through an Ethernet cable, and have a device on the other end that pulls them apart and receives the exact data you sent on each one. Alternatively, your computer can download/listen for the signal, split it apart into each frequency, and get that info from each frequency. (I think this is how cable worked - the cable company sends you all the channels down one wire, and when you turn the channel your TV just filters out the other frequencies and displays the one you wanted. Surely there's more to it, but I think that's the basics of cable tv, and explains why your neighbor could steal your cable!)\n\n**Wrapping up** (I promise)\n\nYour router/modem takes signals from all the computers connected to it, works that Fourier magic on those signals, and shoots the combined/composite signal to a magical cable in your wall that connects your house to the internet. That cable that brings internet to your house is connected (in my case anyway) to a big green box up the street. All of your neighbors' magic internet cables are also tied in there. I imagine that box kind of like a huge router (just like the one in your house). That big box takes signals from you and your neighbors, works its Fourier magic on those signals, and sends it up the next wire. A group of those boxes all plug into an even bigger one, and so on until it connects back to the ISP. \n\nGo back to the dump truck example. If the dirt is the data, and each truck is another frequency, then the road is the wire.
Even if you buy a million trucks, the road can only fit so many. Again, easy solution - upgrade the road by making it bigger. But that costs money! Instead, just make certain customers pay more money if they want their dirt faster. If you have 4 customers and one pays for quicker dirt delivery speed, then you can send 2 trucks together for his delivery and the other customers each get one truck for their delivery. \n\nApply that analogy:\n\nEach cable can only carry a certain range of frequencies (something in physics explains this), so those boxes do eventually max out data transfer. This can be fixed by using bigger, better boxes and cables all the way to the ISP, but it gets way too expensive. The solution?? --- > make a customer pay more money in order to have more frequencies available to them. So now you give the ISP more money, and they push a button that tells the box up the street to allow you to use a bigger *range of frequencies,* which we now know that a certain range of frequencies is also referred to as a *bandwidth.* Since downloading is the majority of internet traffic, they assign you many more frequencies for download than they do for upload. Once you run out of frequencies to use, you spend time waiting on your computer to finish using the current ones so that you can use them for something else.\n\nBoom. Done.\n\nI'm fairly positive that this is how it works, but it could all be monitored and regulated. Heck if I actually know haha", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "40102006", "title": "Cốc Cốc", "section": "Section::::Features.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 325, "text": "BULLET::::- Files are downloaded in multiple streams, which under certain conditions can accelerate download speeds by up to eight times, depending on the bandwidth of the Internet connections and the speed at which the server sends files. 
At present, an option to increase or decrease the downloading speed is not provided.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "40102006", "title": "Cốc Cốc", "section": "Section::::User feedback and issues.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 369, "text": "BULLET::::- Using multi thread download similar to Internet Download Manager, the browser is supposed to download at just the same speed as Internet Download Manager, however there are reports of cases where it failed to perform up to expectation, because the actual acceleration depends on the bandwidth of the Internet, and the speed at which the server sends files.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15044355", "title": "Project Dakota", "section": "Section::::Purpose.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 217, "text": "BULLET::::- Users with a slow connection to the Internet who want to avoid slow download times by using a faster connection on another computer to download Project Dakota, and burn it to a CD, DVD or USB flash drive.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32881", "title": "Warez", "section": "Section::::File formats of warez.\n", "start_paragraph_id": 67, "start_character": 0, "end_paragraph_id": 67, "end_character": 343, "text": "BULLET::::- In the case of One-click hosting websites downloading multiple files from one or several sources can significantly increase download speeds. 
This is because even if the source(s) provides slow download speeds on individual disks, downloading several disks simultaneously will allow the user to achieve much greater download rates.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "265666", "title": "Bram Cohen", "section": "Section::::Early life and career.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 298, "text": "speeding up the download time, especially for users with faster download than upload speeds. Thus, the more popular a file is, the faster a user will be able to download it, since many people will be downloading it at the same time, and these people will also be uploading the data to other users.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24767575", "title": "Torrent file", "section": "Section::::Background.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 695, "text": "Each file to be distributed is divided into small information chunks called \"pieces\". Downloading peers achieve high download speeds by requesting multiple pieces from different computers simultaneously in the swarm. Once obtained, these pieces are usually immediately made available for download by others in the swarm. In this way, the burden on the network is spread among the downloaders, rather than concentrating at a central distribution hub or cluster. As long as all the pieces are available, peers (downloaders and uploaders) can come and go; no one peer needs to have all the chunks, or to even stay connected to the swarm in order for distribution to continue among the other peers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3725091", "title": "CoDeeN", "section": "", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 252, "text": "For rare files this system could be slightly slower than downloading the file itself. 
Especially for non-cacheable content, you may as well go to the origin host. The system's speed is also subject to the constraint of number of participating proxies.\n", "bleu_score": null, "meta": null } ] } ]
null
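The first answer's counting scheme (track bytes sent per one-second window, hold traffic once the cap is hit, refresh the count each second) can be sketched directly in code. This is a simplified illustration; the class and method names are my own invention, not any real ISP's API, and real traffic shapers use token buckets and queues rather than a flat refusal:

```python
import time

class RateLimiter:
    """Per-second byte counting, as described above: count everything
    sent in the current one-second window, hold traffic once the cap
    is reached, and refresh the counter when the next second starts."""

    def __init__(self, max_bytes_per_second):
        self.cap = max_bytes_per_second
        self.window_start = time.monotonic()
        self.sent_this_window = 0

    def send(self, num_bytes):
        now = time.monotonic()
        if now - self.window_start >= 1.0:
            # A new one-second window: the count "refreshes".
            self.window_start = now
            self.sent_this_window = 0
        if self.sent_this_window + num_bytes > self.cap:
            # Cap reached: this traffic waits for the next window.
            return False
        self.sent_this_window += num_bytes
        return True
```

Note the wire itself is never made slower: a link capped this way can burst at full wire speed until the counter fills, then sends nothing until the window refreshes, which is exactly the "5MB in half a second, then 0MB for the next half a second" behavior the answer describes.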
1djsdv
how come the zimbabwe dollar inflated so fast? how do people survive in a country with such hyperinflation?
[ { "answer": "People stop using the currency and move to a barter system (or different currency). That is the resolution as well.", "provenance": null }, { "answer": "The president Mugabe is an EVIL dictator (arguably the worst on the planet) and he's printing money like crazy to fund corrupt politics in Zimbabwe.\n\nIt's not like people are having an easy time. Many are moving out of the country and but it's becoming more and more difficult as they become more poor. ", "provenance": null }, { "answer": "Economic growth can come from 2 areas, supply side or demand side. Supply side growth is long term, sustainable, and deflationary; it involves pushing out the maximum you can produce, by investing and improving infrastructure. Demand side means making people spend more, and a recession is caused by a lack of demand side growth. However, demand side growth is unsustainable. It leads to inflation.\n\n Demand side growth is short term, and can be easy, but it can only move so far. You can't buy more stuff then you have. One way of increasing demand side growth is called quantitive easing (it's sounds complicated, it's literally just printing money) and the Zimbabwe Govt did this loads; causing hyperinflation that crippled their economy. \n \nTL;DR Essentially is was a government looking for quick growth, without wanting to invest, so they printed money until everyone had so much it was meaningless - hence inflation.", "provenance": null }, { "answer": "Hyperinflation is typically caused when a nation goes through a major crisis (war, political turmoil, etc) and has a simultaneous need to spend large amounts of money. The tax base has collapsed and the uncertain economy makes international borrowing unavailable, so the government starts to print money. \n\nThe sudden, huge increase in the amount of money in circulation makes the currency less valuable. With the value of money shrinking, the government has to print ever more of it to meet its commitments. 
Very rapidly this turns into a spiral of hyperinflation. \n\nIn the case of Zimbabwe specifically, the country entered into a plan of forced land redistribution. At least initially, the idea was to confiscate farms from the descendants of former European colonials and give the land to the poorest indigenous people. There were many problems with this plan. Chief among them, the recipients of the land knew very little about farming so productivity collapsed. Foreign investors saw property being confiscated and left the market. To make matters worse, the land grants were frequently awarded to cronies of the Mugabe regime. The government printed huge volumes of money to try to make up for the lost tax base and foreign investment. \n\nHow do people survive? Well, you may have heard of other cases of hyperinflation from history where people try to adapt. In the southern US after the Civil War and in Weimar Germany after World War I, there were stories of people bringing cash to the markets in wheelbarrows to try to buy food. In some families, there are stories of people burning bundles of cash for heat in the winter because it was cheaper than buying fuel. \n\nIn Brazil there was a saying that you should always take a bus instead of a cab because on a bus you pay when you get on; in a taxi, you pay when you get out and there is no telling how much the currency may have devalued during the ride. \n\nAs others have said, many people turn to barter or other types of trade that are not dependent on currency. Sadly, in most affected countries, this also means a large increase in crime. \n\nThe interesting thing is that barter holds the key to how Brazil finally managed to beat decades of hyperinflation. Economists noticed that people bartering would settle on fairly standard relative values of goods: just as an example, imagine two potatoes for one tomato. These same ratios held for the prices in the markets. If a potato was $50, a tomato was $100. 
When potatoes hit $50,000 tomatoes were $100,000. \n\nThe economists called this \"real value\" and started referring to prices in units of real value. Storekeepers started putting units of real value in ads and on shelves and just posted an exchange rate between the currency and the real value (this was also much easier than re-pricing everything in the store every day). \n\nEventually, after a few years of this, the country just switched to a new currency. Each unit of the new currency was equal to one unit of real value. The currency was even called the *real* (in Portuguese, the plural is *reais*). The switch was remarkably smooth, since everyone was already thinking in units of real value. \n\nIt's kind of fascinating from a psychological standpoint as much as an economic one. You wouldn't expect that you could simply swap out a failed currency for a stable one, but in this case (with the right preparation) it actually worked. ", "provenance": null }, { "answer": "My aunt and uncle live in Zimbabwe (with my cousin before he was shot). We visited them in 2004. They are white, and they own a business, so they were lucky enough to have had money before the hyperinflation started. \n\nThey got around it by using foreign currency. Basically, South African Rand and American Dollars were used in place of Zimbabwean currency by most people. The tricky part is getting hold of Rand or US Dollars when nobody will trade them for your worthless Zimbabwean paper. There are several ways to do this. \n\nOne, used by my aunt and uncle, is to store any cash you have in foreign banks. They had bank accounts in South Africa and the UK. This works pretty well, but is not an option for most people in Zimbabwe since they haven't got enough money in the first place for a foreign bank account.\n\nThe more common method is for somebody in your family to leave the country and find work elsewhere, and then send their wages back to Zimbabwe in hard currency. 
Millions of Zimbabweans did this, amounting to about one out of every five or six citizens. Most went to South Africa, where living in a slum and working for less than what the locals will accept in wages (which is a pittance) is still seen as a more hopeful option than living in Zimbabwe. I think that these remittances supported most people in Zimbabwe for the last decade or so. \n\nThis phenomenon also achieved part of the government's political goals: there are two large tribes in Zimbabwe, and Mugabe is from the northern one. He indirectly oppresses the southern tribe by reserving government jobs for members of his own tribe, by directing foreign aid exclusively to his own people, and by generally disenfranchising the southerners. When millions of those people left the country to find work, he was happy to see his political enemies become somebody else's problem.\n\nFinally, people learn to be savvy and get their hands on usable currency whenever the opportunity arises. When we visited, we traded US $600 to my aunt and uncle in exchange for a suitcase full of ZIM $20,000 notes, which we used while we were in the country. Every market trader we dealt with was willing to accept any foreign currency from a country without hyperinflation; in one case we paid with a mix of low-denomination (ones, fives, tens) Australian, British, American, and South African currency. This was preferred by the traders to the Zimbabwean currency, presumably since it could be used as a store of value.\n\nAs far as I know, Zimbabwe is no longer experiencing hyperinflation, thanks to the policies implemented by Morgan Tsvangirai and the MDC, the opposition party who are now in coalition with Mugabe's ZANU-PF in government (the MDC actually won the election, even in the face of massive fraud, but Mugabe would not step down so this was their best option).", "provenance": null }, { "answer": "I lived in Zimbabwe for a time.
They use the US Dollar now, but back then we would spend our money as soon as we got it. You couldn't save it because it would be worth less the next day. As soon as you got paid, you bought all your groceries immediately. Most people survived by growing a lot of their own food. Nearly everyone has a garden.\n\nThe massive inflation, on a 5-year-old level: No one thought that Zimbabwe money was worth anything because the country's government was bad. As things got worse, people put even less value on the currency. ", "provenance": null }, { "answer": "I was in Zim a few years back and talked to a lot of people about what they went through. Normal people didn't have much savings anyway; they lived paycheck to paycheck. So when the shit started they would get paid every day and buy groceries every day. You couldn't hold onto anything. One guy described it as feeling like falling off a cliff all the time. He actually worked in a bank during the crisis.\n\nThe richer and upper middle classes already used foreign banks and currencies. I met one guy who was studying finance, but he was in Malaysia in school when it happened.\n\nI stayed with guys in Mbare, which is the poor part of Harare. They also didn't really have anything to lose. They still have suitcases of trillion dollar bills.\n\nBut the biggest problem was that there wasn't anything to buy, because things couldn't be imported and the farmers collapsed, and they couldn't market crops correctly. Then the massive brain drain happened when 30% of the population left. It's still a big issue; most of the best and most educated people simply left and can't come back. Many are in South Africa working and they send money home. \n\nZim has a better education system, they speak better English than the South Africans do, and to be frank they are more honest and better employees.
So the South Africans hate them, rob them and even kill them.\n\nThe root cause of the inflation was that Mugabe had a lot of debt and decided to fuck the IMF by inflating his currency to pay off the debt. His economists were idiots.\n\nLastly - Zimbabwe is a great country with really friendly people. I met rastas and computer programmers, painters, school teachers, welders, sculptors and musicians. I definitely recommend visiting. They use USD and SA Rand for the coins.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "34404", "title": "Economy of Zimbabwe", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 716, "text": "The economy of Zimbabwe shrank significantly after 2000, resulting in a desperate situation for the country – widespread poverty and a 95% unemployment rate. Zimbabwe's participation from 1998 to 2002 in the war in the Democratic Republic of the Congo set the stage for this deterioration by draining the country of hundreds of millions of dollars. Hyperinflation in Zimbabwe was a major problem from about 2003 to April 2009, when the country suspended its own currency. Zimbabwe faced 231 million percent peak hyperinflation in 2008. A combination of the abandonment of the Zimbabwe dollar and a government of national unity in 2009 resulted in a period of positive economic growth for the first time in a decade.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51016885", "title": "2016–17 Zimbabwe protests", "section": "Section::::Background.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 395, "text": "The economy of Zimbabwe began shrinking significantly around 2000, following a series of events and government policies such as the fast-track land reform programme and the 1997 War Veterans' Compensation Fund pay-out.
This led to hyperinflation, devaluation and the eventual collapse of the Zimbabwean dollar, high unemployment and general economic depression over the course of sixteen years.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34399", "title": "Zimbabwe", "section": "Section::::History.:Independence era (1980–present).\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 225, "text": "By 2003, the country's economy had collapsed. It's estimated that up to a fourth of Zimbabwe's 11 million people had fled the country. Three-quarters of the remaining Zimbabweans were living on less than one US dollar a day.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26219", "title": "Rhodesia", "section": "Section::::Legacy.\n", "start_paragraph_id": 147, "start_character": 0, "end_paragraph_id": 147, "end_character": 558, "text": "Zimbabwe also suffered from a crippling inflation rate, as the Reserve Bank of Zimbabwe had a policy of printing money to satisfy government debt. This policy caused the inflation rate to increase from 32% in 1998 to 11,200,000% in 2007. Monetary aid by the International Monetary Fund was suspended due to the Zimbabwe government's defaulting on past loans, its inability to stabilise its own economy, its inability to stem corruption and its failure to advance human rights. In 2009, Zimbabwe abandoned its currency, relying instead on foreign currencies.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17878788", "title": "Hyperinflation in Zimbabwe", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 491, "text": "Hyperinflation in Zimbabwe was a period of currency instability in Zimbabwe that, using Cagan's definition of hyperinflation, began in February 2007. 
During the height of inflation from 2008 to 2009, it was difficult to measure Zimbabwe's hyperinflation because the government of Zimbabwe stopped filing official inflation statistics. However, Zimbabwe's peak month of inflation is estimated at 79.6 billion percent month-on-month, 89.7 sextillion percent year-on-year in mid-November 2008.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14169856", "title": "Economic history of Zimbabwe", "section": "Section::::2000–present.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 486, "text": "Zimbabwe's economy has shrunk since 2000, in an atmosphere of political turmoil, capital flight, corruption and mismanagement. Inflation has spiralled out of control (peaking at 500 billion % in 2009) and the underpinnings of the economy in agriculture and industry have been dissipated. Due to the state of the formal economy, many Zimbabweans have begun working in the informal economy. Because of this, it is estimated that by 2009 unemployment was nearer 10% than the official 90%.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13681", "title": "Hyperinflation", "section": "Section::::Notable hyperinflationary episodes.:Zimbabwe.\n", "start_paragraph_id": 144, "start_character": 0, "end_paragraph_id": 144, "end_character": 571, "text": "At its November 2008 peak, Zimbabwe's rate of inflation approached, but failed to surpass, Hungary's July 1946 world record. On 2 February 2009, the dollar was redenominated for the third time at the ratio of ZWR to 1 ZWL, only three weeks after the $100 trillion banknote was issued on 16 January, but hyperinflation waned by then as official inflation rates in USD were announced and foreign transactions were legalised, and on 12 April the Zimbabwe dollar was abandoned in favour of using only foreign currencies. The overall impact of hyperinflation was 1 USD = ZWD.\n", "bleu_score": null, "meta": null } ] } ]
null
5llgje
in WW2 movies and real-life videos, some soldiers salute normally and some use the Nazi salute. Why is that?
[ { "answer": "Nazi salute (with straight right hand) was mandatory for civilians but optional for military. Soldiers mostly used traditional salute. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "240483", "title": "Battle of Britain (film)", "section": "Section::::Historical accuracy.\n", "start_paragraph_id": 99, "start_character": 0, "end_paragraph_id": 99, "end_character": 540, "text": "During filming, Galland, who was acting as a German technical adviser, took exception to a scene where Kesselring is shown giving the Nazi salute, rather than the standard military salute. Journalist Leonard Mosley witnessed Galland spoiling the shooting and having to be escorted off the set. Galland subsequently threatened to withdraw from the production, warning \"dire consequences for the film if the scene stayed in\". When the finished scene was screened before Galland and his lawyer, he was persuaded to accept the scene after all.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "340471", "title": "Roman salute", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 361, "text": "Since the end of World War II, displaying the Nazi variant of the salute has been a criminal offence in Germany, Austria, the Czech Republic, Slovakia and Poland. Legal restrictions on its use in Italy are more nuanced, and use there has generated controversy. The gesture and its variations continue to be used in neo-fascist, neo-Nazi and Falangist contexts.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10806011", "title": "Salute (1929 film)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 400, "text": "Salute is a 1929 motion picture directed by John Ford and starring George O’Brien, Helen Chandler, William Janney, Stepin Fetchit, and Frank Albertson. 
It is about the football rivalry of the Army–Navy Game, and two brothers, played by O'Brien and Janney, one of West Point, the other of Annapolis. John Wayne had an uncredited role in the film, as one of three midshipmen who perform a mild hazing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36772726", "title": "Nazi salute", "section": "Section::::From 1933 to 1945.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 826, "text": "The salute soon became part of everyday life. Postmen used the greeting when they knocked on people's doors to deliver packages or letters. Small metal signs that reminded people to use the Hitler salute were displayed in public squares and on telephone poles and street lights throughout Germany. Department store clerks greeted customers with \"Heil Hitler, how may I help you?\" Dinner guests brought glasses etched with the words \"Heil Hitler\" as house gifts. The salute was required of all persons passing the \"Feldherrnhalle\" in Munich, site of the climax of the 1923 Beer Hall Putsch, which the government had made into a shrine to the Nazi dead; so many pedestrians avoided this mandate by detouring through the small \"Viscardigasse\" behind that the passage acquired the nickname \"Dodgers' Alley\" (\"Drückebergergasse\").\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26762611", "title": "Unternehmen Michael", "section": "Section::::Reception.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 837, "text": "Officers of the Wehrmacht objected to the suicidal message of the film. 
The director, Karl Ritter, responded by saying, \"I want to show the German youth that senseless, sacrificed death has its moral value.\" Carl Bloem, a retired military officer who had published popular novels, was asked to rebut the film's point of view and did so in a radio play in which the pragmatic view won out amongst the soldiers: \"No German commanding officer has the right or duty to destroy \"uselessly\" the lives of German soldiers.\" The company raise the white flag of surrender. This was broadcast on the Cologne radio station, but the Ministry of Propaganda disavowed it, with the statement, \"Our film has the purpose of showing the younger generation of today the real spirit of the German soldier during the offensive at the western front in 1918.\" \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49902432", "title": "The War for Men's Minds", "section": "Section::::Production.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 500, "text": "The striking images of cheering crowds in Nazi Germany that were featured in Leni Riefenstahl's films were used extensively. U.S. Hollywood filmmaker Frank Capra also used scenes from her films, which he described partially as \"the ominous prelude of Hitler's holocaust of hate\", in many parts of the U.S. government's \"Why We Fight\" anti-Axis seven film series, to demonstrate what the personnel of the American military would be facing in the Second World War, and why the Axis had to be defeated.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36772726", "title": "Nazi salute", "section": "Section::::In popular culture.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 614, "text": "BULLET::::- In a running gag in \"Hogan's Heroes\", Colonel Klink often forgets to give the Hitler salute at the end of a phone call; instead, he usually asks, \"What's that?\" and then says, \"Yes, of course, Heil Hitler\". 
In the German language version of the show, called \"Ein Käfig voller Helden\" (\"A Cage Full of Heroes\"), \"Col. Klink and Sgt. Schultz have rural Gomer Pyle-type accents\", and \"stiff-armed salutes are accompanied by such witticisms as \"this is how high the cornflowers grow\". The \"Heil Hitler\" greeting was the variant most often used and associated with the series; \"Sieg Heil\" was rarely heard.\n", "bleu_score": null, "meta": null } ] } ]
null
2avydb
What happens when I take a USB drive out without ejecting?
[ { "answer": "When you eject a USB drive, the operating system flushes all buffered data to the drive and closes the software device. This guarantees that everything you wrote to the drive is actually physically written there. If you do not do this, you stand a chance of losing data or corrupting files. \n \nThe long version: Most operating systems maintain a memory buffer for each drive connected to the system. The buffer is used to cache data written to and from the drive, to speed up operations. Data written to the drive is first written to the memory buffer, then after a certain amount of time passes, or when enough data is written to fill the buffer, the contents of the buffer are actually written to the drive. Until this happens, the data is not actually on the drive, so if you lose power or yank the drive out the data will go missing. This could cause files to be lost, or if you were making changes to a file, the file could become corrupted because only part of it was updated. \n \nThe reason you have never experienced any problems is basically luck. Most devices and computers will flush their drive buffers periodically, so if you haven't written anything to the drive in a while it is probably safe to yank it out. But it is always better to eject the drive, to be certain everything has been flushed.", "provenance": null }, { "answer": "~~From an electrical point of view it implies little or nothing.~~ But from a software perspective there is some risk of information loss.\n\nBasically, writes to your device could be incomplete. The operating system may have a write cache to make the user interface more fluent, delaying the actual write to the external device. It may *appear* that a file has finished copying but it actually hasn't. If you've modified any directory entries (i.e. 
renaming or moving files) then things may get more complicated and the incomplete write may corrupt the whole directory causing the loss of several files.\n\nEjecting it makes sure the OS finishes anything it has to do and empties the write cache. If the device also has its own buffer, then the OS will send the instruction to commit the write to persistent memory.\n\n_URL_1_\n\n > I've been doing this most of my life and nothing has ever happened.\n\nThat's true, it's quite unlikely. Modern operating systems write everything to the external device as soon as possible to minimize the risk of data loss in these cases. But still there is a small chance that it happens.\n\n-----\n\nEdit: Sorry that I left so many followup questions unanswered! This was my last comment before going to sleep in my timezone. Fortunately I see that other redditors have already answered everything, thanks!\n\n-----\n\nEdit 2: A comment below reminded me something important about power cuts. I wrote all of the above thinking about flash drives. /u/PinguRambo suggests SSDs may be different, which also makes me think about spinning external disks (they are still used as their cost is a lot lower for the same capacity). If you unplug those without properly ejecting them then you also have some risk that heads are not properly parked.\n\nNot sure if modern disks are protected against that, but for safety it's usually strongly recommended to eject those properly. If ever an improperly parked head damages your disk you will lose just *everything*.\n\n_URL_0_ :\n\n > Head crash: a head may contact the rotating platter due to mechanical shock or other reason. At best this will cause irreversible damage and data loss where contact was made. In the worst case the debris scraped off the damaged area may contaminate all heads and platters, and destroy all data on all platters. 
If damage is initially only partial, continued rotation of the drive may extend the damage until it is total", "provenance": null }, { "answer": "In all cases, any file that is currently being copied to / written to the drive will not be properly written. That much should be obvious: it's impossible to finish writing something if you take away the drive in the middle.\n\nWhat state that drive remains in depends, however, on a number of factors. Chiefly, the \"filesystem\" on the drive. The filesystem is the way files on the drive are structured, filesystems include FAT32, NTFS, etc.\n\n* With some filesystems such as FAT32, removing the device during a write may corrupt the structure of the filesystem in such a way that the filesystem is broken or exhibits broken-like behaviour afterwards, requiring a repair operation (sometimes known as a \"chkdsk\" or \"fsck\"). This broken-like behaviour may include incorrect or corrupt metadata, such as incorrect file sizes and filenames, files which overlap or which have missing pieces, etc.\n\n* With some filesystems such as NTFS, ext3, ext4 and more, removing the device during a write may leave a partially written file, but the filesystem is not harmed structurally and does not require repair. The write that was happening at the time the drive was removed will be partially complete, and it may be unpredictable to determine how much of the write completed until you inspect the file. Depending on how the filesystem is implemented, some data in the not-yet-complete part of the file may be filled with random or unpredictable data. 
However, details such as the sizes and filenames of files will not have been damaged or corrupted and the drive will otherwise continue to work normally.\n\n* Some filesystems go even further than this and have the ability to automatically \"roll back\" a partially completed write to how it was just before the last segment was written - meaning that as soon as you connect to the drive again you'll find the file as it was before the last segment was written and the file size and and last update time will reliably reflect the last written segment that actually completed. (Note that these \"full journalled\" filesystems don't necessarily guarantee the atomicity on a whole-file basis but on segments written, with the segments determined by implementation.)\n\nDue to write caching in the operating system, the writes to a drive may be delayed by up to several seconds between the write appearing to complete and being actually written to the drive. This means that if you remove the drive during that time, the write that you thought had completed may not actually have been written to the drive yet.\n\nHowever, even if you disable write caching it does not stop the problem that some filesystems may be corrupted if the drive is removed during a write - and you may not always realise when something is writing to the drive.\n\nWindows tends to disable write caching by default for removable drives using the FAT32 system - the thought behind it is that it reduces the likelihood of corruption and lost data if the drive is removed after a write completes but before the write makes it from the cache to disk. 
However, you can still corrupt the drive if you remove it during a write - and for an NTFS drive, it will still be cached, so even though NTFS is resistant to corruption by incomplete writes, you'll still lose data if it hasn't made it from the cache to the disk.\n\nHow to be safe:\n\n* Always use the \"eject\" or \"safely remove\" feature in your operating system and wait for the message that tells you it's now safe to remove the drive. This will allow the OS to complete all unfinished business, including queued writes, leave the filesystem in a consistent state, and then disable further writes to the drive (usually by unmounting the drive, which also disables reads).\n\n* Shutting down or suspending the computer will also complete all unfinished business on all drives, so there is no need to \"eject\" or \"safely remove\" when you shut down or suspend (and by suspend, I don't mean just locking or going to screensaver). You can safely remove a drive while a computer is shut down or suspended at any time.\n\n* For NTFS, ext3, ext4 or other journalled filesystems, it is generally OK to remove the drive anyway as long as you wait at least 15 seconds after any important writes to the drive. When I say \"generally OK\" I mean it - it's not 100% safe. Also, doing this for FAT/FAT32 can still corrupt the drive if you don't realise something else is writing to the drive in the background.\n\n", "provenance": null }, { "answer": "(edit: this became longer than I expected, tl;dr at the bottom)\n\nThere's another factor in addition to the things other people have written, namely filesystems. This is arguably the biggest reason why you shouldn't just yank out USB drives.\n\nVery simply put, the file system is a data structure that is stored on a device that translates physical drive sectors into a usable directory structure. 
On a very low level, when you want to write something to a hard drive (whether it's a spinning magnetic platter, an SSD drive, a flash drive, or whatever), you're writing that data into numbered sectors. You say \"write data X to sector Y\", where Y is some number. \n\nThe file system translates that into a directory structure. So instead of saying \"write data X to sector 41567\", you say \"write data X to file C:\\Users\\Username\\Documents\\myfile.txt\" (if you're on Windows). File systems thus create an abstract structure around the raw data that's on the drive that allows the computer to function. The way it does this is incredibly complicated, but many file systems basically keep a sort of tree stored on the device, where each directory is a branch. When you need to access a specific folder, you walk down the branches of that tree until you find the sector you need to read from or write to. \n\nNow, many files are too large to store in a single sector. So, basically that file becomes a tree of its own, linking several sectors (which do not necessarily have to be next to each other physically) together. When you write large files to a hard drive, or copy over many files at once, the file system is continually updating this tree, moving links around and finding new sectors to write to. \n\nHere's where the problem comes in: what if, for some reason (like you pulling out the usb drive), a write is unexpectedly cancelled in the middle of writing large amounts of data? Well, then, there is a large risk that this tree might get messed up, and when the tree gets messed up, data can disappear, sectors which should have been written to remain unwritten, and all sorts of bad things can happen. In other words, you DO NOT want to mess this tree up. What you have to do then is run special programs (in Windows it's called \"chkdsk\", in Linux it's called \"fsck\") that repair the structure of the file system and restore whatever can be restored.
These programs can take a long time to run, since they basically have to check the entire drive for inconsistencies in the file system structure. \n\nThis is a big problem, and you might ask whether or not we've come up with a solution. And the answer is yes, we have solved this problem with something called \"journaling\". Modern filesystems keep a \"journal\" of all the different changes they intend to make, and keep it updated both before and after writes have been done. So, when you yank out the USB drive, the journal will reflect what was and wasn't changed, so the next time you plug it in the computer can just say \"oh, wait, this part of the tree is messed up, but the journal contains all the information I need to repair it, so I'll just do that\". This happens automatically and completely transparently when a file system is mounted (i.e. made available to be written to and read from), which is why modern computer users very rarely have to wait 30 minutes for a computer to start up because it has to check the entire file system (something which was common back in the 90's). \n\nNow, you may ask, if that's the case, do I need to worry about yanking out USB cords? And the answer is, I'm sorry to say, yes you do. The reason is that almost all camera memory sticks and USB dongles use an antiquated filesystem known as FAT32, which does not use one of these fantastic journals. The reason why everyone uses FAT32 anyway is that it is basically the only filesystem where there are built-in drivers for it in every operating system, so it doesn't matter if you plug it in to a Mac or a PC running Windows or Linux, it's just gonna work. No need to install drivers or anything. \n\nFAT32 is a terrible, terrible file system (there are many reasons besides the journaling thing, another big one is that FAT32 can't store files larger than 4 gigabytes), but it's the only one we've got right now that offers easy interoperability between operating systems.
If you don't care about that, you should format your usb sticks to whatever modern file system your OS uses, NTFS for Windows, HFS+ for Mac, or ext4 (or a bunch of other ones) for Linux. Then you never have to worry about yanking stuff out, because all of them are journaled and your tree structure will never get messed up. \n\nIf you properly eject a drive before yanking it out, what happens is that the computer basically closes the connection to the file system (known as \"unmounting\" it), so that no more writes are possible. That way, you ensure that the structure on the device remains consistent. \n\n**TL;DR:** if you yank a drive out, you can mess up the structure of an unjournaled file system. To avoid that, either format the drive with a modern file system, or properly eject the thing every time. Or just make sure you're not writing anything to the disk when you pull it out, that basically works as well. ", "provenance": null }, { "answer": "As others have commented, cached unwritten data could be lost.\n\nHowever the reason windows complains about unexpected USB drive removal, is usually because some program has an open (read only) file handle. Could be windows explorer itself (are you browsing the drive) or an app like a photo viewer or media player. A well written app will be able to deal with \"sorry, the file disappeared so your read call failed\" but many would just crash. Windows can't guarantee that all apps are written correctly, that they would not crash if an open file suddenly vanished, so it tries to prevent users from pulling the rug out from under these apps.\n", "provenance": null }, { "answer": "If you have write caching turned on, writes will go unfinished. To get around this, disable write caching on all removable drives (you can find this in the drive's right click menu). 
This will force all writes to go IMMEDIATELY, but also handicaps the performance of the drive.\n\nSo it would still be possible to rip it out while it's writing data, but you'd be safe the second the file transfer box or file save completed.", "provenance": null }, { "answer": "Nothing happens. This is because, since Windows 2000 or Windows XP, by default when you copy a file to a thumb drive, it's all written ASAP. \n\nIf you pull it out while you are copying files, then the files on the thumb drive will appear as corrupted.\n\nNothing more.", "provenance": null }, { "answer": "If you are ever wondering whether to chance the extra minute it takes to eject it or not, think about this. One time in college I wrote up a 10 page paper in a single day which was due on a Friday. I was actually happy with what I had written and the content of the essay, so I was super happy to be done with it before my birthday weekend. I saved it to the USB, pulled the USB out of the PC, ran upstairs to the PC with a printer, only my USB drive was now corrupted and every bit of data was unrecoverable. I just turned my computer off and forgot about it for the weekend.", "provenance": null }, { "answer": "1) Any of the data that \"appears\" on the drive might actually still be in RAM and not have finished writing to the USB drive. Unplugging it might interrupt a sync that is still in progress, and you lose data that only **looked** like it was on the drive. In Linux, you can avoid this by just typing \"sync\" into a terminal and waiting for that command to finish before unplugging the drive.\n\n2) Also, any trash/recyclebin files won't be purged and the drive will eventually fill up if you keep doing this. 
Example: 4GB flash drive, the drive properties say there is 1GB used up, but nothing will transfer to it because \"it's full.\"", "provenance": null }, { "answer": "On FAT16 volumes, Windows would calculate the amount of free space by counting the unallocated clusters in the File Allocation Table whenever you ran DIR (see [\"A Brief and Incomplete History of FAT32\"](_URL_0_)). On FAT32 volumes, that took an inordinate amount of time so Microsoft created an entry in the volume metadata that tracked the amount of free space left. This entry is updated whenever the file system driver makes changes to the file system.\n\nUnless the volume is unmounted uncleanly (IOW, taking it out without ejecting it). In this case, all of the unallocated space is counted up and the free space in the volume is recalculated and rewritten to disk, which can take minutes.\n\nFAT32 is widely deployed on USB keys, camera flash media, etc., which means that just pulling it out of the computer results in the free space entry becoming invalid.\n\nSo you could either eject properly and wait for the file system driver to write everything to disk, then start using the drive immediately on the next insert, or you can yank it out, hope everything was written properly, and wait a minute for the file system driver to calculate how much free space is left on the drive.", "provenance": null }, { "answer": "Nothing unless something on your computer is actively using the device or data is being transferred. In that case some data may become corrupted from the interruption in the transfer.\n\nIt's a huge misconception that unplugging it without ejecting \"breaks something\"", "provenance": null }, { "answer": "I've been working on, building, and fixing computers for more than 20 years. Mac, Linux and PC. As long as there is no data transfer going on, I have only had one problem just pulling the device out.\n\nThe only thing that ever failed on me was a 12,000 RPM drive that was supposed to be hot swappable. 
I think it had issues with being moved while at speed even though it was not reading or writing. Since then, I try to shut down media that is moving before I work with it physically.", "provenance": null }, { "answer": "It's a technical equivalent of walking up to a scribe and just stealing their paper. Most of the time they're not writing and everything is fine, sometimes they're midway through a paragraph and the information turns out a bit wonky (files in inconsistent state, filesystem with freed or double-committed blocks) and very occasionally they're writing at that time and you get the software equivalent of a pen strike across the paper (corrupted sector, checksum failures, file table corruption). ", "provenance": null }, { "answer": "Most of the answers seem to be talking about the effects to a filesystem. \n\nThat is only ONE of the reasons you should be careful about ejecting drives. \n\nThe other is that applications maintain various types of records about the files you work with using them. \n\nSay you add a new picture to a document. Internally it might have to do a bunch of operations to your file. \n\nWhat you see as, say a document - is actually a complex structure of several types of files internally. \n\nTake a Microsoft Word [DOCX file](_URL_0_). One that looks [like this](_URL_1_). A Microsoft Word DOCX file is actually a zip file, with a bunch of files within it. \n\nYou can extract those files, and get a list of the files it contains: \n\nHere's the list of files from my example document: \n \n \\[Content_Types].xml\n \\docProps\\app.xml\n \\docProps\\core.xml\n \\word\\document.xml\n \\word\\fontTable.xml\n \\word\\settings.xml\n \\word\\styles.xml\n \\word\\stylesWithEffects.xml\n \\word\\webSettings.xml\n \\word\\media\\image1.png\n \\word\\theme\\theme1.xml\n \\word\\_rels\\document.xml.rels\n \\_rels\\.rels\n\nIf you don't want to go read the full spec, no worries - Just looking around, you can make out what some of those things are. 
There's an image there in a 'media' folder, a 'document.xml' which contains the actual document text. \n\nNow, Word is pretty good about how it manages writes, and it shouldn't corrupt the file if the drive is yanked out. \n\nOther applications might not be so smart - they might, say, indicate they need to copy the image over - but not actually do it until later (because users complain that it takes forever to attach images). \nBut they'll update an index of images to say the file is there. Then something might prompt a save of the indexes, but not trigger the image to be copied over. \nSo, you'd have a reference saying \"Image here!\", with no image. A document version of a \"404 - File not found!\" \n\nSource: 12+ years software development and all the associated bugs with saving of data. ", "provenance": null }, { "answer": "You got lucky. \n\n* 1 Windows (or OSX, Linux...) are lying to users. \n\nWhen you copy data to a USB device, the system push it into the RAM (technical slang : a buffer), because its fast. That's what [this](_URL_0_) is not telling you. The end of the progress bar does not mean 'data has been copied on the device' but 'data copied to the buffer'. *sigh*\n\nThen, while you go back to using the computer, the system copies the buffer to the device in the background. Invisibly...\n\nHence the problem, the user has no clue about this, and even if he knows (you know now) he has no way to know if the buffer to device transfer is done. That's why the system requires an intermediate step, that is 'asking for ejection'. When this signal is pulled, the system will urgently push all the remaining data from buffer to the device and then tell you that it is safe to unplug that device from the machine.\n\n* 2 What else \n\nThey could do things differently, like having a list of USB device with little spinners, sand timers indicating, or color codes indicating which one has no more data to be sent to. 
", "provenance": null }, { "answer": "Windows : Hot pluggable, just don't do it when there is a lamp flashing\n\nApple / Linux / Unix : not recommended to just unplug, the USB drive is mounted in the file system, however in the most cases, there won't be a damage.\nUSB Drive itself: their electronic is built in a way, that during the plugin the not avoidable \"unclear\" status does not cause negative effect.\nCompare to simple analogue electronic: a loudspeaker who is connected to an amplifier, which does not mute by relay, makes terrible noises when the amplifier is powered on or off.", "provenance": null }, { "answer": "On early XP, you'll most likely corrupt something. Whether it looked like it was in use or not. On XP (SP2+?) and later Windows it might only corrupt the files that are in use or being transferred at the time of removal.", "provenance": null }, { "answer": "It's just like cleaning your but after taking a shit without looking; You might have done it right or wrong, but you will only really notice when you look at it again.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "48650400", "title": "Zinstall Migration Kit Pro", "section": "Section::::Operation.:Transfer scenarios.:Transfer directly from old PC's hard drive.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 293, "text": "In this scenario, the transfer is done directly from the old (source) machine's hard drive. The drive can be connected externally (USB adapter / enclosure), or internally, as a secondary drive. 
This scenario works even for old computers that are unable to boot, as long as the data is intact.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33089070", "title": "Windows To Go", "section": "Section::::Differences from standard installation.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 490, "text": "BULLET::::- Drive removal detection: As a safety measure designed to prevent data loss, Windows pauses the entire system if the USB drive is removed, and resumes operation immediately when the drive is inserted within 60 seconds of removal. If the drive is not inserted in that time-frame, the computer shuts down to prevent possible confidential or sensitive information being displayed on the screen or stored in RAM. It is also possible to encrypt a Windows To Go drive using BitLocker.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "50670667", "title": "USBKill", "section": "Section::::Use.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 573, "text": "It can also be used in reverse, with a whitelisted flash drive in the USB port attached to the user's wrist via a lanyard serving as a key. In this instance, if the flash drive is forcibly removed, the program will initiate the desired routines. 
\"[It] is designed to do one thing,\" wrote Aaron Grothe in a short article on USBKill in \"\", \"and it does it pretty well.\" As a further precaution, he suggests users rename it to something innocuous once they have loaded it on their computers in case someone might be looking for it on a seized computer in order to disable it.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31066630", "title": "USB dead drop", "section": "Section::::Comparison to other types of data transfer.:Drawbacks to the system infrastructure.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 285, "text": "BULLET::::- Removal of stored data: anyone with physical access can erase all of the data held within the USB dead drop (via file deletion or disk formatting), or make it unusable by encrypting the data or the whole drive and hiding the key (see also the related topic of ransomware).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3738839", "title": "Features new to Windows Vista", "section": "Section::::Shell & User interface.:Windows Explorer.:File operations.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 343, "text": "If an external data storage device is unexpectedly disengaged or accidentally removed while copying files onto it, the user is given the chance to retry the operation without restarting that file copy operation from the beginning; this gives the user the chance to reconnect that external data storage device involved and retry the operation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "16473708", "title": "Windows Easy Transfer", "section": "Section::::Transfer methods.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 292, "text": "BULLET::::- A USB flash drive or an external hard disk drive. 
In this mode Windows Easy Transfer saves archive files of files and settings on the source machine to a user-specified location, which does not need to be a USB drive; the destination machine is then given access to the archives.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2089047", "title": "U3 (software)", "section": "Section::::Removal.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 565, "text": "Reformatting the drive will remove some of the software (the hidden \"SYSTEM\" folder), but not all of it. The virtual CD-ROM drive cannot be removed by reformatting because it is presented to the host system as a physical device attached to a USB hub; the official U3 Launchpad Removal Software was available on the U3 website and disabled the virtual CD drive device, leaving only the USB mass storage device active on the U3 USB hub controller, at which point the remaining software can be removed by a subsequent format, performed by the removal software itself.\n", "bleu_score": null, "meta": null } ] } ]
null
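The buffering behaviour described in several of the answers above can be shown in a few lines of Python. This is a minimal sketch (the file path is just an illustrative temp file, not anything from the answers): `flush()` pushes the application's own buffer down to the OS, and `os.fsync()` asks the OS to commit its cache to the physical device, which is essentially what "safely eject" (or the Unix `sync` command) does for every pending write on the drive.

```python
import os
import tempfile

# Writing to a file does not mean the bytes are on the device yet:
# the OS keeps them in an in-memory cache and writes them out later.
path = os.path.join(tempfile.gettempdir(), "demo.txt")
with open(path, "w") as f:
    f.write("important data")
    f.flush()              # push Python's own buffer down to the OS
    os.fsync(f.fileno())   # ask the OS to commit its cache to the device
# Only after fsync returns is the data reasonably guaranteed to be on disk.
```

On a journaled filesystem a yanked drive can repair its tree structure from the journal, but data that never made it out of this cache is simply gone either way.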
2iv3v6
Panzer tanks: "first-class visual and command facilities"?
[ { "answer": "On a purely technical level, \"visual and command\" references both German radios and optics. Although German tank design was relatively conservative, especially at the start of the war, their tanks incorporated good optical equipment and effective radios. The latter were especially valuable for coordinating German panzers as a cohesive unit. German optical sights were often quite advanced as well. This gave the early war German tank commanders a better sense of situational awareness than their French, British, or Russian counterparts. \n\nBut the Germans' early superiority in situational awareness was not just of a technical nature. Most German commanders did not fight \"buttoned-up,\" which is tanker parlance for fighting inside the vehicle. At the top of a German turret was a circular armored cupola with vision ports all around it. This allowed a commander to scan terrain with the \"Mark One Eyeball\" from a [relatively safe position](_URL_0_) as seen in this photo of a Panzer IV. Early war Allied tanks did not have this ergonomic design. For example, the [early T-34's top hatch](_URL_2_) was notoriously clumsy: it did not give the commander a good view and exposed him to fire from the rear and flanks. When the Germans captured Allied tanks, one of the first things they would do if able was replace the top hatch with a German one, as in these [*Beutepanzers*](_URL_1_). \n\nSo in sum, not only did early war panzers possess better optics and radios, they were a more ergonomic design than their contemporaries. \n\n*Sources*\n\nForczyk, Robert. *Tank Warfare on the Eastern Front*. Barnsley: Pen & Sword Military, 2014. \n\nZaloga, Steve. *Panzer IV Vs Char B1 Bis: France 1940*. 
Oxford: Osprey Pub, 2011.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "5645027", "title": "Panzer Command (board game)", "section": "Section::::Description.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 507, "text": "\"Panzer Command\" is a tactical level simulation of armored combat, recreating the battles that raged across the steppes of the Soviet Union during the middle years of World War II. Each of two players commands the 40 to 60 company-sized units of a German armored division or Soviet tank corps, maneuvering forces across treacherous terrain to engage the enemy in a life or death struggle. The challenge of battlefield command is yours in the thought-provoking, exciting game experience of \"Panzer Command\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "357868", "title": "Panzer General", "section": "Section::::Gameplay.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 286, "text": "\"Panzer General\" is an operational-level game, and units approximate battalions, although the unit size and map scale from one scenario to the next are elastic. While the names and information for the units are reasonably accurate, the scenarios only approximate historical situations.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21581104", "title": "Sd.Kfz. 265 Panzerbefehlswagen", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 539, "text": "The kleiner Panzerbefehlswagen (), known also by its ordnance inventory designation Sd.Kfz. 265, was the German Army's first purpose-designed armored command vehicle; a type of armoured fighting vehicle designed to provide a tank unit commander with mobility and communications on the battlefield. A development of the Army's first mass-produced tank, the Panzer I \"Ausf\". A, the Sd.Kfz. 
265 saw considerable action during the early years of the war, serving in \"Panzer\" units through 1942 and with other formations until late in the war.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21581104", "title": "Sd.Kfz. 265 Panzerbefehlswagen", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 411, "text": "The \"kleiner Panzerbefehlswagen\", is commonly referred to as a command tank, but as it is without a turret or offensive armament and merely is built on the chassis of the Panzer I light tank, it does not retain the capabilities or role of a tank. Instead, it functions more along the line of an armored personnel carrier in conveying the unit commander and his radio operator under armor about the battlefield.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1737627", "title": "Panzerwaffe", "section": "Section::::Organization.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 455, "text": "A \"panzer corps\" consisted of two to three divisions and auxiliary attachments. \"Panzergruppen\" (\"Panzer Groups\") were commands larger than a corps, approximately the size of an army, and named after their commander (e.g. \"Panzergruppe Hoth\"). These were later recognized as \"Panzerarmeen\" (\"Panzer Armies\"), an army-level command of two to three corps. These higher-level organizations almost always mixed ordinary infantry units with the \"Panzerwaffe\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8955049", "title": "ZootFly", "section": "Section::::Video games.:\"Panzer Elite Action: Fields of Glory\".\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 712, "text": "Developed for PC and the Xbox for JoWood Productions, \"Panzer Elite Action\" follows the story of three tank commanders and their crew. 
The German commander, encouraged by easy successes in Poland and France, moves on to the Eastern Front and the brink of victory with the taking of Stalingrad. Players meet the Russian commander in desperate straits as he helps defend Stalingrad, and follow him as the tide turns against the Germans and he joins the massive battle of Kursk. The American commander enters the war at the Normandy Beachhead on D-Day. After the struggle for the Bocage, he defends against the German outbreak at the Battle of the Bulge, and then drives on to the victorious crossing of the Rhine.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4570713", "title": "Panzer Leader (game)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 1117, "text": "Panzer Leader is the sequel to Avalon Hill's \"PanzerBlitz\" game. Like its predecessor, it is a tactical platoon level hex and counter board wargame depicting \"WWII\" tank and infantry combat on the Western European front. It features 4 geomorphic map tiles, which can be put together in a variety of ways to play the provided scenarios (which are printed on cardstock, showing all the necessary information for a scenario) or home-made scenarios. The 20 provided scenarios cover various battles on the Western Front, with most of the scenarios involving the Normandy campaign or the Battle of the Bulge. Two scenarios cover the amphibious assaults on Omaha and Gold beaches and include special rules for naval fire. While based on PanzerBlitz, the rules were cleaned up and included additional mechanics such as for air attacks and engineers, as well new spotting rules to prevent PanzerBush\" tactics - units could no longer fire from concealment without revealing themselves to enemies. Several optional and experimental rules are provided, including one for opportunity fire to further nullify PanzerBush maneuvers.\n", "bleu_score": null, "meta": null } ] } ]
null
5or7wg
why do competitions require you to answer a ridiculous question when you enter?
[ { "answer": "Well, in the UK a competition with a question is regarded legally as a game of skill, even if the question is stupidly simple. Without a question it would be regarded as a lottery, which has very much tougher rules and regulations.", "provenance": null }, { "answer": "Well, without any indication of what you consider a \"ridiculous question\" it's going to be hard to know what the best answer is. \n \nThat said: Contests have to take care to not be treated as illegal lotteries. There are two ways to do this: \n1) Make the contest free for anyone to enter (e.g. \"no purchase necessary\"). \n2) Make the contest include a 'skill' component. Asking a question meets this standard. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "48544333", "title": "Challenge (competition)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 1678, "text": "A challenge is a request made to the holder of a competitive title for a match between champion and challenger, the winner of which will acquire or retain the title. In some cases the champion has the right to refuse a challenge; in others, this results in forfeiting the title. The challenge system derives from duelling and its code of honour. While many competitive sports use some form of tournament to determine champions, a challenge match is the normal way of deciding professional boxing titles and the World Chess Championship. Some racket sports clubs have a reigning champion who may be challenged by any other club member; a ladder tournament extends the challenge concept to all players, not just the reigning champion. At élite-level competition, there is usually some governing body which authorises and regulates challenges, such as FIDE in chess. 
In some cases there is a challengers' tournament, the winner of which gains the right to play the challenge round against the reigning champion; in tennis this was the case at Wimbledon until 1922 and in the Davis Cup until 1972. The FA Cup's official name remains the \"Football Association Challenge Cup\", although not since its second season in 1873 has the reigning champion receive a bye to the final. The America's Cup is contested according to the terms of its 1887 deed of gift between yachts representing the champion yacht club and a challenging club. Since 1970, the usual practice, by mutual consent, is for an initial formal \"challenger of record\" replaced by the actual challenger after a qualifying tournament. However, in 1988 and 2010 there were court cases arising from non-consensual challenges.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6820632", "title": "Sarabanda", "section": "Section::::Games.:Games in the \" Special 2017 \".\n", "start_paragraph_id": 77, "start_character": 0, "end_paragraph_id": 77, "end_character": 675, "text": "A reason was made to listen to the five competitors, the first to book and give the correct answer could choose one of the eight subjects available. If the competitor did not guess the reason, another competitor could be booked until the correct answer was given. The chosen subject consisted of three questions to which the competitor had to answer in thirty seconds. In the case of a wrong answer even to only one of the questions or time expired, the competitor was eliminated. The first two competitors to make mistakes (or alternatively the last remaining in the race) were eliminated. Competitors who answered the three questions correctly qualified for the next game.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27948608", "title": "¿Quién quiere ser millonario? 
(Venezuelan game show)", "section": "Section::::Format.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 496, "text": "The contest is a development of single-choice questions with four options, the contestant must answer correctly to advance and in turn increase the monetary gains. The participant must be a person with extensive domain knowledge as well as stress, because as each question above, thanks to his poise and assertiveness, increasing levels of difficulty and demands of the questions. Meanwhile, viewers live for an hour the challenge of the contestant, who is facing a monumental machinery staging.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34493264", "title": "Competitive programming", "section": "Section::::Overview.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 752, "text": "In most contests, the judging is done automatically by host machines, commonly known as judges. Every solution submitted by a contestant is run on the judge against a set of (usually secret) test cases. Normally, contest problems have an all-or-none marking system, meaning that a solution is \"Accepted\" only if it produces satisfactory results on all test cases run by the judge, and rejected otherwise. However, some contest problems may allow for partial scoring, depending on the number of test cases passed, the quality of the results, or some other specified criteria. 
Some other contests only require that the contestant submit the output corresponding to given input data, in which case the judge only has to analyze the submitted output data.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "58434340", "title": "Chase the Case", "section": "Section::::Format.:Final.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 564, "text": "A challenge consists of four questions, alternating between the two contestants and starting with the challenged opponent. If the opponent wins, all cases stay with their holders. If the challenger wins, they take control of the opponent's case and the opponent is eliminated from the game with no winnings. The case originally held by the challenger is removed from play and opened to reveal its value. If the score is tied after all four questions have been asked, a tiebreaker is played on the buzzer. Once a challenge is resolved, the buzzer questions resume.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53234921", "title": "Takeover Bid", "section": "Section::::Format.:Round 2: Crazy Cryptics.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 356, "text": "Questions in this round were on the buzzer, and each contained clues to an answer with a double meaning. (E.g. \"You can't hang out the washing without her\" would lead to \"peg,\" as in the girl's name Peg and a clothes peg.) The first question was open to all three contestants, and the first to respond correctly gained the right to challenge one opponent.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11729698", "title": "The Rich List (New Zealand game show)", "section": "Section::::How the game works.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 221, "text": "The two teams bid upwards against each other while predicting how many examples of a particular subject they will be able to list. 
If a team fails to list as many answers as they predicted, the other team wins the round.\n", "bleu_score": null, "meta": null } ] } ]
null
9mzp5a
what are treasury yields and why do they cause the market to slide?
[ { "answer": "First, all the quick answers:\n\nTreasuries are US government debt. Yields are how much interest the US government will pay lenders to loan money to them for a certain period of time. 10-year treasuries are the long-term benchmark rate; they're the benchmark because they're a very liquid market. All the other rates are just different terms (pretty similar to a 3-year, 5-year, or 7-year car loan); the government can get more money for less interest by offering a variety of terms. \n\nNow the why:\n\nTreasuries are like the basic unit of investing. The US government has a very good reputation with investors, so treasuries are the safest return one can get over a period of time. \n\nThat means that when treasury rates rise, all other investments need to adjust (because if riskier investments don't provide at least that much return, investors are better off selling the other investment and buying the treasury). Sort of like when an employer known to hire essentially everyone starts paying more than other, more selective employers: the selective employers are going to have to keep their wages higher than the employer who hires anyone if they want to keep getting employees. \n\nSo when treasury yields rise, all other investment income rates also have to rise, and that means most investment prices fall. ", "provenance": null }, { "answer": "The National debt that you often hear about? That is the result of the government selling bonds to investors. The government collects money now, and agrees to pay it back down the road with interest.\n\nThese bonds are called Treasury Bills, or T-bills. The yield is the % of interest they pay out to buyers. The time frame is how long the money is loaned for, i.e. a 10-year treasury gets paid back after 10 years. The longer the timeline, the higher the risk that things like inflation will counteract the interest gains, so longer term bonds pay a higher interest rate than shorter ones, where the external risks are better known. 
An increase in yield means that an investor gets more money for the same (super low) risk investment.\n\nIf, as an investor, I have the choice of low risk 2% return or higher risk 5% return, I may be more likely to take the higher risk. If the low risk option pays 4% vs. high risk 5%, I am a lot more likely to choose the safe option. So I might sell high risk stock and buy low risk T-bills instead. Less demand for stocks (more sellers than buyers) causes their price to fall.", "provenance": null }, { "answer": "Some of the other comments have glossed over what yield actually is, and why it rises/falls.\n\nA bond is a special kind of fixed interest loan where you borrow some money, pay interest only for a number of years, then pay back the full amount of the loan in one single final payment. For example, the US treasury takes out a loan for $1000 over 10 years at 1% p.a. interest. They pay back $10 (interest) each year, then after 10 years, they pay back the full $1000.\n\nWhat if you lend the treasury $1000 for 10 years, then 2 years later you need that $1000 back? Treasury won't give it back early, but you can sell the loan to somebody else. You might find someone willing to pay you back the full $1000. That person will then get the remaining $10 yearly payments and the final $1000 payment. From this person's point of view, they have lent the treasury $1000 for 8 years at 1% interest. \n\nHowever, maybe you can't find anyone willing to pay back the full $1000. So, desperate for money, you agree to sell the loan to someone who will only pay you $900. However, as the new owner of the loan, they are still entitled to the full $10 annual payments and the full $1000 final payment. This is basically the same thing as earning a higher rate of interest. In fact, from this person's point of view, they have lent the treasury $900 for 8 years at something closer to 2.5% interest.
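That "closer to 2.5%" figure can be checked numerically. The following is a sketch of my own (the function names are not from the answer): a small bisection solver that finds the rate at which the bond's remaining payments are worth exactly its $900 price, i.e. its yield to maturity.

```python
def bond_price(face, coupon, years, r):
    """Present value of a bond: annual coupons plus the face value at maturity."""
    return sum(coupon / (1 + r) ** t for t in range(1, years + 1)) + face / (1 + r) ** years

def yield_to_maturity(price, face, coupon, years):
    """Find the rate r that makes the bond's present value equal its price (bisection)."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if bond_price(face, coupon, years, mid) > price:
            lo = mid  # present value too high -> need a higher discount rate
        else:
            hi = mid
    return (lo + hi) / 2

# The example from the answer: pay $900 for $10/year over 8 years plus $1000 at the end.
ytm = yield_to_maturity(price=900, face=1000, coupon=10, years=8)
print(f"{ytm:.2%}")  # roughly 2.4% -- "something closer to 2.5%" than the original 1%
```

The same function also shows the inverse price/yield relationship at the heart of the question: plugging a higher rate into `bond_price` always gives a lower price, which is why rising treasury yields mean falling prices for existing bonds.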
It matters because anyone wanting to sell an 8-year treasury bond at the same time as you are trying to offload yours must offer an effective interest rate of at least 2.5% to compete with you. In particular, even the US treasury won't be able to sell new bonds for 8 years unless they offer an interest rate of 2.5%.\n\nThe yield is a reflection of how willing people are to buy \"second hand\" loans at any particular time. \n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "8748548", "title": "Fixed-income attribution", "section": "Section::::Yield curve attribution.:Yield curve attribution.\n", "start_paragraph_id": 76, "start_character": 0, "end_paragraph_id": 76, "end_character": 225, "text": "Since the yield of virtually any fixed-income instrument is affected by changes in the shape of the Treasury curve, it is not surprising that traders examine future and past performance in the light of changes to this curve.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "469806", "title": "Treasury stock", "section": "Section::::Accounting for treasury stock.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 407, "text": "Another common way for accounting for treasury stock is the par value method. In the par value method, when the stock is purchased back from the market, the books will reflect the action as a retirement of the shares. Therefore, common stock is debited and treasury stock is credited. However, when the treasury stock is resold back to the market the entry in the books will be the same as the cost method.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "469806", "title": "Treasury stock", "section": "Section::::Accounting for treasury stock.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 351, "text": "One way of accounting for treasury stock is with the cost method. 
In this method, the paid-in capital account is reduced in the balance sheet when the treasury stock is bought. When the treasury stock is sold back on the open market, the paid-in capital is either debited or credited if it is sold for less or more than the initial cost respectively.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4868219", "title": "Earnings yield", "section": "Section::::Applications.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 522, "text": "The earnings yield can be used to compare the earnings of a stock, sector or the whole market against bond yields. Generally, the earnings yields of equities are higher than the yield of risk-free treasury bonds. Some of this may result in dividends, while some may be kept as retained earnings. The market price of stocks may increase or decrease, reflecting the additional risk involved in equity investments. The average P/E ratio for U.S. stocks from 1900 to 2005 is 14, which equates to an earnings yield of over 7%.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "58508", "title": "Contango", "section": "Section::::Description.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 454, "text": "because there are few holders who can make an arbitrage profit by selling the spot and buying back the future. A market that is steeply backwardated—\"i.e.\", one where there is a very steep premium for material available for immediate delivery—often indicates a perception of a current \"shortage\" in the underlying commodity. 
By the same token, a market that is deeply in contango may indicate a perception of a current supply \"surplus\" in the commodity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "146719", "title": "Subsidy", "section": "Section::::Economic effects.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 434, "text": "Assuming the market is in a perfectly competitive equilibrium, a subsidy increases the supply of the good beyond the equilibrium competitive quantity. The imbalance creates deadweight loss. Deadweight loss from a subsidy is the amount by which the cost of the subsidy exceeds the gains of the subsidy. The magnitude of the deadweight loss is dependent on the size of the subsidy. This is considered a market failure, or inefficiency.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29269618", "title": "Causes of the Great Recession", "section": "Section::::Other factors.:Credit creation as a cause.\n", "start_paragraph_id": 196, "start_character": 0, "end_paragraph_id": 196, "end_character": 485, "text": "A positively sloped yield curve allows Primary Dealers (such as large investment banks) in the Federal Reserve system to fund themselves with cheap short term money while lending out at higher long-term rates. This strategy is profitable so long as the yield curve remains positively sloped. However, it creates a liquidity risk if the yield curve were to become inverted and banks would have to refund themselves at expensive short term rates while losing money on longer term loans.\n", "bleu_score": null, "meta": null } ] } ]
null
48e5e1
why does urinating feel different when you are sick?
[ { "answer": "For a multitude of reasons, among them:\n\n+ When you're sick, your immune system's fight generates different chemicals, which change the contents, pH and even smell of your pee; the different contents and pH can irritate the urethra and be painful\n+ Your sensitivity is usually higher due to the disease, so you not only feel the urine pass more intensely, but also feel the above-mentioned irritation\n+ Your urination frequency gets thrown off by the sickness and you end up going to the bathroom at unusual (for your normal daily routine) times, which can also feel different", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "515064", "title": "Diabetic neuropathy", "section": "Section::::Pathogenesis.:Autonomic neuropathy.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 241, "text": "Urinary symptoms include urinary frequency, urgency, incontinence and retention. Again, because of the retention of urine, urinary tract infections are frequent. Urinary retention can lead to bladder diverticula, stones, reflux nephropathy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38098048", "title": "Functional incontinence", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 283, "text": "Functional incontinence is a form of urinary incontinence in which a person is usually aware of the need to urinate, but for one or more physical or mental reasons they are unable to get to a bathroom. 
The loss of urine can vary, from small leakages to full emptying of the bladder.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "179400", "title": "Urinary incontinence", "section": "Section::::Mechanism.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 384, "text": "During urination, detrusor muscles in the wall of the bladder contract, forcing urine out of the bladder and into the urethra. At the same time, sphincter muscles surrounding the urethra relax, letting urine pass out of the body. Incontinence will occur if the bladder muscles suddenly contract (detrusor muscle) or muscles surrounding the urethra suddenly relax (sphincter muscles).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "159421", "title": "Urination", "section": "Section::::Anatomy and physiology.:Physiology.:Experience of urination.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 648, "text": "The need to urinate is experienced as an uncomfortable, full feeling. It is highly correlated with the fullness of the bladder. In many males the feeling of the need to urinate can be sensed at the base of the penis as well as the bladder, even though the neural activity associated with a full bladder comes from the bladder itself, and can be felt there as well. In females the need to urinate is felt in the lower abdomen region when the bladder is full. When the bladder becomes too full, the sphincter muscles will involuntarily relax, allowing urine to pass from the bladder. 
Release of urine is experienced as a lessening of the discomfort.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4253241", "title": "Vesicoureteral reflux", "section": "Section::::Signs and symptoms.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 257, "text": "In infants, the signs and symptoms of a urinary tract infection may include only fever and lethargy, with poor appetite and sometimes foul-smelling urine, while older children typically present with discomfort or pain with urination and frequent urination.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "331556", "title": "Enuresis", "section": "Section::::Causes.:Nocturnal enuresis.:Excessive output of urine during sleep.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 437, "text": "Normally, the body produces a hormone that can slow the making of urine. This hormone is called antidiuretic hormone, or ADH. The body normally produces more ADH during sleep so that the need to urinate is lower. If the body does not produce enough ADH at night, the making of urine may not be slowed down, leading to the bladder overfilling. If a child does not sense the bladder filling and awaken to urinate, then wetting will occur.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "67972", "title": "Urolagnia", "section": "Section::::Overview.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 556, "text": "Urolagnia is a paraphilia. During the activity, urine may be consumed or the person may bathe in it. Other variations include arousal from wetting or seeing someone else urinate in their pants or underclothes, or wetting the bed. Other forms of urolagnia may involve a tendency to be sexually aroused by smelling urine-soaked clothing or body parts. In many cases, a strong correlation or conditioning arises between urine smell or sight, and the sexual act. 
For some individuals the phenomenon may include a diaper fetish and/or arousal from infantilism.\n", "bleu_score": null, "meta": null } ] } ]
null
3wuokw
why did sega drop out of the video game console business?
[ { "answer": "Because people stopped buying them.\n\nThere is an apocryphal story that Sony released a *Final Fantasy* game the day of the Saturn's release, in a move to deliberately cripple sales of the new system.", "provenance": null }, { "answer": "The fact of the matter is, they lost.\n\nThe NES sold in much higher numbers than the SEGA Master System, then the Super Nintendo outsold the Genesis. Now, someone has to be number 2, so SEGA was OK. Then came the next generation and the entrance of Sony. The original PlayStation did better than SEGA's Saturn, and then the following generation the Dreamcast was simply slaughtered by the PS2, despite having a 1-year head start. And that was the end of SEGA making consoles. \n\nAt first Nintendo and SEGA were fighting for position, and SEGA lost. Next thing you know they lose again, then the real fight is between Nintendo and Sony and SEGA is simply left on the sidelines. There's really only enough console market for 2 players in the hardcore gamer space. Nintendo and SEGA, Nintendo and Sony, Microsoft and Sony. The only reason Nintendo still survives is that they stayed with the younger gamer target market when Sony and MS moved to older-rated games.\n\nEven today Nintendo struggles to be a third console maker, and that's only because the system is radically different than the top 2.\n\nThe second thing to understand is that home consoles were always a secondary revenue stream for SEGA. They started as an arcade company and that division was still profitable while the Dreamcast was burning money like mad.\n\nThey made a business decision to remove themselves from that market. The same thing Nintendo could easily decide to do today: shift all its attention to the handheld market and just walk away from home consoles. \n\nSega left the market because they lost. They could have kept throwing money at the pwo ", "provenance": null }, { "answer": "Sega spent a lot of money making the Dreamcast. 
They didn't get enough money in return. They decided they could make more money with software alone. Big reason being, the market is too saturated with giants with a lot more financial backing than Sega will ever have. That said - Dreamcast is still my favorite and I can't wait to play Shenmue 3... finally... ", "provenance": null }, { "answer": "SEGA made some pretty dumb decisions back then. The company created two \"add-ons\" to the Genesis: the SEGA CD and the 32X, which didn't sell very well. On top of that, the next console they made was the SEGA Saturn, which also didn't have very good sales - in fact, they were horrible. \n\nAnd the final nail in the coffin was the Dreamcast. It had potential, *BUT*, due to the small library of games, not many people bought it. Because of all of these failures in the console market, SEGA became a third-party company.", "provenance": null }, { "answer": "They were in rocky waters and the Dreamcast was their last shot. The lack of killer apps (there were great games, just not great-selling games) + the PS2 dropping was the nail in the coffin for Sega as a console maker. They spent a lot of money on that venture and lost big time.", "provenance": null }, { "answer": "I happened to work for a competitor to Sega during that time. Sony changed the game when the PlayStation came out. Then Microsoft followed with the Xbox. BOTH Sega and Nintendo were unable to compete with both Sony's and Microsoft's ability to sell at steep losses for extended periods. They simply didn't have the cash reserves to do this.\n\nSega simply bowed out. They said \"no thanks\" to selling hardware at such steep losses. Nintendo interestingly did not - it was actually expected that they would. Nintendo came back with a very innovative approach. They let Sony and Microsoft duke it out for the gamers' first hardware choice - they made sure they would be the second. So gamers tend to have a Sony OR Microsoft console and a Nintendo console. 
They succeeded surprisingly well here. It is not the type of move Sega could have made. Sega always competed with high end hardware. They were very successful here with Genesis. \n\nNow.. there are plenty of things Sega lost/failed at as others have mentioned. They did actually learn from their mistakes - the Dreamcast was a fantastic machine. The Dreamcast (and Sega) just never stood a chance of competing against Sony and MS. Sega did the right thing in bowing out BEFORE failing. It is actually one of their few good decisions and the reason why they still make games. Some are even pretty good.\n\nEDIT: I say that with the Dreamcast being my favorite system of all time and with its cancellation being rather heartbreaking. Yet.. it was the right decision.", "provenance": null }, { "answer": "I lived through it, the SNES versions of the same games cost less and had more levels than the same games on Genesis. Sunset Riders was my favorite game on Genesis and when I played it on my friend's SNES, it has about twice the content and I found out it was $10 less at the same store I got the Genesis version. Our wealthy friend had a Neo Geo that was fucking mind blowing, but it cost way too much for a game system that could only play games (there was no media center or DVD playback back then). Neo Geo lost early on volume. Sega got a reputation as being not as good as SNES. \n\nThat's when Sega started doing that bullshit X32 and whatnot add-on death spiral. Now your parents could buy you a game for the Genesis and find out it doesn't work because they didn't get this crazy add-on and they would tell their friends not to get their kids a Sega thingy. Sega never learned and would keep doing this shit like grown people were playing their consoles back then. It would be a cash cow now, like physical DLC, but not back then. 
Their add-on crap wasn't doing the trick, so they upped the ante.\n\nThe Saturn blew the SNES out of the water and supported 16 players at a time when the average TV was below 20\" and in standard definition. If you find an old Saturn with the splitters, controllers and Bomberman, you can have a great time on a new big screen. My wealthy friend had a Saturn and we did Bomberman at a party with 16 players and it was awesome (so awesome that at one point in college I purchased a Saturn, 16 controllers, the splitters and Bomberman and ran a tournament with cash prizes). The problem with the Saturn is it only had a few really good games and not much when it released (the flagship game, Nights into Dreams, was lame). PlayStation came out the same year as the Saturn and had better game offerings that weren't centered around 16 people squinting at a 2-inch square section of a TV to be cool. Neither was so much better than the SNES that Nintendo was threatened. Sega was second fiddle again, this time to a nobody in the form of Sony.\n\nThe N64 was revolutionary and came out a few years after the PlayStation. PlayStation had already made a name for itself in an unbelievable way (more than half the market by then) and Nintendo had to step it up to compete. Sega and PlayStation were too close to their last releases to respond and Nintendo cleaned up for a while. PlayStation responded with new games for its market-dominating platform; the Sega Saturn had flopped, so Sega had no response but to double down on the next release.\n\nThe Dreamcast was nuts and came out a few years after the N64; it sold like hotcakes for a few months on hype that it was a PlayStation killer. It had some good games, but not enough, and PlayStation fired back with the PS2 and a host of amazing games and DVD playback. That was it. Everyone that bought a Dreamcast wished they had bought a PS2 and no one bought any Dreamcast games. 
Sega lost their ass because they had sold the console at a loss, but they made tons of money on games for other platforms, so they gave up on the console business.", "provenance": null }, { "answer": "OOH! I got this one!\n\nFirst, it is important to know that in the beginning, Sega was primarily an arcade developer. \n\nAs they made games and hardware, they made a home console, with strong 1st party support. The Genesis blew the NES away, and \"brought the arcade home\". 
It was super popular for a time. \n\nThen the Super Nintendo came out, and we had the first big Console War (the likes of which we wouldn't see again until the PS3/360 era).\n\nIn the end, Sega took a mild loss. The 32-bit era was about to start, so Sega, unsure what to do, gave Sega of Japan and Sega of America each a project to make a 32-bit system. SoJ made the Saturn, SoA made the 32X. The 32X was a 32-bit add-on for the Genesis that could do some nice things, but nothing near the Saturn. Still, it robbed the Saturn of sales, and was a terrible failure. This lowered confidence in Sega.\n\nWhile this was going on, Nintendo had really strict censorship rules, and Sega's laxness made it the \"edgier\" of the two. \n\nThe 32-bit era begins, this time with a new challenger, the Sony PlayStation. Sony doesn't have Nintendo's censorship, and the PlayStation is much easier to program for than the Saturn. \n\nThe issues with the 32X, and a lackluster starting library, hurt the Saturn in the beginning, and the PlayStation kept the hurt going. \n\nSega got the bright idea to launch a next-next-gen system (the Dreamcast) before the public/industry was ready to upgrade, and it was full of features, like online play, that didn't have the necessary tech to pull off well. \n\nOnce the PS2 and the Xbox came around, the Dreamcast went from looking cutting edge to looking like the Wii. \n\nThe fact it could play burned games out of the box didn't help revenue either. \n\nAfter having 2~3 systems in a row die hard, Sega went back to doing what they did best: make games. \n\nAs an aside, the slow death of arcades contributed to the slow death of Sega, as their IPs were less and less in the public's eye. 
This pissed off retailers who weren't ready to sell it yet, some even going so far as throwing out all their Sega merch and replacing it with Nintendo/Sony stuff. It also pissed off developers who didn't have their games ready yet, because they were not told about the sudden early release. The Saturn flopped really, really hard in the US.\n\nDreamcast did okay in the US, but was just horribly overshadowed by the PS2. With PS2 and now Xbox entering the market, Sega focused on just making games for the Dreamcast to try and ride out their remaining time before the PS2 released. After PS2 and Xbox released, they just rode out the rest of Dreamcast's life and decided against making any future consoles.\n\nI suppose you could say it started as early as the Genesis addons, which both failed pretty hard as well. The CD sold *okay*, but the 32x just sold horribly.", "provenance": null }, { "answer": "This might be a bit more lengthy but I do think G4's Icons documentary on the Dreamcast gives a pretty good explanation of how Sega left the console game if you want to know more.\n\n_URL_0_ ", "provenance": null }, { "answer": "Sega had difficulty selling their later consoles. People stopped buying them, so companies didn't want to make games for them. It reached a point where it didn't make sense anymore to make consoles. But people still loved games Sega made, so instead of selling a limited number of great games to a small number of console owners, they decided to make games for wider audiences on other companies consoles. ", "provenance": null }, { "answer": "As usual. The top comment is wrong. Oh well... There was a lack of unity between different parts of Sega. Sega of Japan had a different vision than Sega of America. (there are more divisions of Sega than just those two) There are a few books that talk about this and there are many sources on the internet that will tell you. But yes the top comment is completely wrong.... 
But essentially the big problem is that there was no unity: pieces of Sega were all doing their own thing. This is why the Sega CD came out followed quickly by the 32X. And then there was the Sega Neptune, which never saw the light of day. And finally the Saturn, which was a technological mess because it was hard to develop games for 2 CPUs, and it was also incredibly expensive. (The Saturn was like a $400 console, which is like a $644.20 console today.) 4 pieces of hardware being developed in rapid succession... The Dreamcast was a step in the right direction; however, the company had burned far too many bridges with a failed Sega CD, 32X, and Saturn. All of this information you can verify by searching the internet and/or going to the public library. I really wish people would stop upvoting nonsense. Argumentum ad populum... This question doesn't even need to be asked. It's easy to look this information up yourself.
Nintendo consoles cost a lot less, and had roughly the same quality of experience.\n\nThe cost-to-functionality ratio is the same reason the Xbox broke into the market.", "provenance": null }, { "answer": "A lot of people are only viewing this from the Western market perspective. Sega's hardware is very much alive, at least in Japan. If you walk into a game center (the arcades), the majority of the arcade machines are made by Sega, as well as the pseudo-gambling machines. I never go to a pachinko parlor but I wouldn't be surprised if a decent chunk or more of them are made by Sega. They also have UFO catcher machines (claw machines) and other entertainment devices.\n\nAlso the Dreamcast that many people are citing as a the reason for their failure had a larger library and better reception in Japan. To me, it just seems like Sega decided to stay domestic while Nintendo and Sony aggressively fought for the international market. I'm sure Sega makes a pretty enough penny with essentially every aspect EXCEPT for home consoles, that they don't really need to care about it to make coin.", "provenance": null }, { "answer": "But also aside from the dream cast sega fucked up on the 32x, sega CD, game gear and the saturn didnt win either. They made mistake after mistake", "provenance": null }, { "answer": "Because they were bad at console business. Let me explain. \n\nSega had a horrible reputation of not supporting their consoles. They would build something and then ignore it. They would move onto the next project while developers scrambled to get content out for the system. So while developers were pushing content for Genesis, Sega was releasing Sega CD. Sega CD was out for 3 years before being discontinued. In those 3 years, Sega developed and released the 32X. \n\nThis addon required developers to switch gears and develop for it instead of the Genesis or the Sega cd. Even though these were addon accessories, they still pulled development away from the original system. 
The 32X came out shortly after the Sega CD and was out for 2 years before being discontinued. \n\nThe reason they were discontinued was the release of the Sega Saturn. This required developers to quickly switch to the new console and essentially wasted tons of development time on the previous add-ons that could have been used to develop for the Saturn. \n\nThe Saturn was released in 1995 and discontinued in 1998, the same year as the release of the Dreamcast. \n\nThese years are all NA; below is JP. I bring this up because the 32X and Saturn debacle is really the answer you seek. The 32X was underpromised and ended up being much better than they anticipated. It was announced around the same time the street date for the Saturn was announced. This caused marketing problems, and they ended up marketing the 32X as a transitional product aimed at people who couldn't afford the twice-as-expensive Saturn. This caused the 32X to sell out. In fact, more people wanted the 32X than the Saturn, so eventually Sega discontinued it to focus on the Saturn. Doing this left developers out in the cold. 32X and Saturn games were not compatible. The quick release of the Dreamcast gave developers pause. They had just gone through all of this and most were not eager to relive the experience. Thus the developers did not follow Sega to the Dreamcast. This, combined with the Sony onslaught, is why they stopped making consoles. \n\nA quick timeline. \n\n- SG-1000 1983. \n- Master System 1985. \n- Genesis 1988. \n- Sega CD 1991. \n- 32X 1994. \n- Saturn 1994. \n- Dreamcast 1998.", "provenance": null }, { "answer": "Sega challenged Nintendo at the height of its power, directly and very aggressively. The Genesis/Mega Drive made the videogame market truly competitive for the first time in years. The NES was clearly inferior and Sega had about 2 or 3 years to enjoy being the cutting edge (unless you count the NeoGeo and I don't). 
\n\nThen, along comes the SNES and the era of the Radical Mascot games, which Sega basically started with Sonic the Hedgehog. This helped Sega compete, but now the shoe was on the other foot: the Super NES had better graphics, much better sound and a lot of amazing games. Sega's head start was becoming a liability by '93 or so. Fighting games, primitive 3D and FPS, and even JRPG games were exploding in popularity, and the SNES was the console better suited for them, for a variety of reasons.\n\nSega comes to believe that the key to regaining the initiative is to regain the technological advantage. First, there was the Sega CD. Whereas Nintendo and their loyal devs were more or less defining the near-term future of video gaming, Sega, like many others at the time, bought into the idea that barely-interactive movie/games were going to overwhelm everyone with visuals that the SNES couldn't match.\n\nThat being an expensive failure, the next step was 32 bits. It's twice as many! Unfortunately, the first attempt at that was the 32X, another expensive Genesis add-on. It had few games and fewer good ones. And then, just one year later, Sega was asking people to lay out 400 more $mackeroos for ANOTHER 32-bit machine. You can imagine how that must have felt to 32X owners. Sega actually intended, from the start, to market the 32X as a 32-bit bargain model and the Saturn as the flagship console.\n\nNever one to say no to a terrible idea, Sega released the Saturn with no warning, literally. Tom Kalinske of Sega of America was expected to show the console at E3 1995. Instead, he announced that it was already for sale, in limited quantities, at select retailers, for $399. This was done at the behest of Sega of Japan, to give the Saturn a head start (of four months). 
All this accomplished was surprising gamers, infuriating all the retailers (big ones like WalMart) who were not informed by Sega or included in the early distribution and, in some cases, retaliated by refusing to stock Saturns and games. Sony immediately followed that announcement with one of their own, about the PlayStation: \"$299\". The Saturn was dead by 1998.\n\nSo, after some big shakeups at the company, the Dreamcast comes out at the very end of the decade, and Sega seems to have finally gotten their shit straight. It's cheap, the games look amazing, there are a lot of good and unique titles very quickly, and it largely delivered on its promise to be the first console to emphasize online gaming. And it had the terrible luck to debut just ahead of what would go on to be the most popular console ever.\n\ntl;dr: Sega went full retard and snapped out of it too late to save itself.", "provenance": null }, { "answer": "It's really too bad, for a number of reasons, but primarily because House of Pain really went to bat for them. \n \nAlso, Sega had the best commercials.", "provenance": null }, { "answer": "The simplest explanation is that Sony took Sega's market in the console wars. Sega was always a bit more rebellious than Nintendo when it came to mature-audience games. When Sony came along, it took that market.", "provenance": null }, { "answer": "They made too many consoles in short spans of time that failed. Consumers lost trust in them. In a short period of time, they launched the Sega CD, the 32X, then a surprise launch of the Saturn that retailers weren't even ready for. 3rd-party developers didn't even have games or dev kits for the thing. They jumped ship much like they did with the Wii U. The Saturn was underpowered when it came to 3D, and that was where gaming was headed with the arrival of the PlayStation. The Dreamcast was their attempt to make things right with consumers and 3rd-party developers. 
It almost worked, but the PS2 came and was more powerful and had the Sony name behind it at that point. Even though the PS2 launched horribly it still killed the Dreamcast. Sega bowed out and went software only. Long story short, Sega killed itself by flooding the market with too many systems that were weaker and more expensive than the competition and pissing off their fans and developers.", "provenance": null }, { "answer": "Anyone else play the heck out of Phantasy Star Online? I had a close-knit group of friends in middle school who played this game with me until there wasn't much more to accomplish... probably the most fun out of any MMO I've played.", "provenance": null }, { "answer": "Because that's what you do when you make the greatest platform of all time. You retire a champion. ", "provenance": null }, { "answer": "To get to the other side?", "provenance": null }, { "answer": "Basically the failure of the Dreamcast. I highly recommend [The GamingHistorian's video on the Dreamcast](_URL_0_), if you're interested, but to summarize, there are at least three main reasons for the Dreamcast's failure (summarizing GamingHistorian): \n\n1) the PS2. It was a much more powerful console---period. Also, it had a DVD drive while the Dreamcast still used CDs. The Xbox and GameCube didn't help either. Sega just came late to the nextgen party with an all-around weak system (although it did have some revolutionary technology, like the memory card screen and online play)\n\n2) Lack of third party support for games. This was due to the failure of the Sega Saturn, which had a similar \"too late to the party\" story, so third party developers---especially big ones like EA---weren't too keen on developing games for the Dreamcast. \n\n3) Game piracy. When the Dreamcast came out, CD burners were getting quite popular too, and so people figured out how to rip games and turn them into images that could be read by the Dreamcast. 
\n\nAgain, the video in the link above is highly recommended. ", "provenance": null }, { "answer": "Anyone remember how easy it was to play copied games on the Dreamcast? None of the pain of physically modding the system. Just a boot disk then bam! Crazy Taxi", "provenance": null }, { "answer": "I'm a guy who never got into video games, but I can't see the name Sega without hearing it as \"SEGA!\".\n\nWhy is that?", "provenance": null }, { "answer": "Love how all the top answers fail to mention buyers pirated the shit out of Dreamcast games.", "provenance": null }, { "answer": "It was meant to be. Sega was on the path out of the console business from early on.\n\nTheir first console, the Master System, had several incremental \"upgrades\" and I believe on the third try it finally found some success at competing with the NES. However, before there was even really a firm idea of \"console generations\", they released the totally different Genesis / Megadrive, thinking that more accurate arcade-to-home games would give them the ability to beat the NES. \n\nThat turned out to be their fatal error - the Genesis came out 2 years before the SNES, and Nintendo made their console significantly more powerful. Sega's system had enough traction by the SNES' release (with the help of Sonic, smart advertising, and continuing domination of then-thriving arcades) that it was almost an even battle. Yet, Sega had an inferiority complex and wouldn't rest with weaker hardware. \n\nCartridge-based games sometimes had extra chips in them to supplement the hardware's graphics and sound processing. But they added to the cost of each game, and it was redundant to sell the same extra hardware that could only be used with each game cartridge. So Sega basically thought they'd save the customers money in the long run by creating a peripheral 'supplemental console', the 32X, which would turn the Genesis into a more advanced machine than the SNES. 
Now, even in 1994 when it was released, there was still no firm concept of a 5-7 year console generation - consoles were being released every year, and even consumers didn't know how long a console \"should\" last for. That said, even Sega knew by the time the 32X was released that it was DOA. The 'Ultra 64' was on the horizon, which brought with it gamers' expectations of powerful 3D gaming at home. People who bought it felt duped, and everyone else was made more wary of Sega's offerings. (Others have simply mentioned the Sega CD and 32X in the same breath, but they were very different - the Sega CD actually lived for 4 years or so and did what it was advertised to do. It had nowhere near the negative impact the 32X had.)\n\nThe Saturn was successful in Japan, but a flop elsewhere, mostly due to Sega ignoring consumers' attitudes towards them post-32X. They made no effort to win back disappointed buyers, and without the enthusiasm of a core fanbase, they had a real uphill struggle. Add to that, they were selling the Saturn at a price point which was not competitive at all, and didn't even have a 3D Sonic game, which is what casual fans had been expecting to see.\n\nWith the Dreamcast, they released what the Saturn should have been, but faced a new set of challenges they were unprepared for. Sony drove up consumer expectations of what the next generation would look like, to the point where people thought the graphics would look like the CGI cutscenes in FF7, but in real-time (to be fair, very late PS2 games did have extremely impressive graphics). Dreamcast's graphics were clean, sharp (480p, a resolution that is *still* in use on the few Wii releases trickling out), and on par with even the most advanced arcade games of the time. Yet, gamers wanted more. Sonic still looked like a game, not CGI. Dreamcast was selling better than the Saturn, but not well enough. With MS' announced entry into the console wars, they knew they had no more tricks up their sleeve. 
The trajectory they had been on since the Megadrive had played out, and they landed back at being a games company.", "provenance": null }, { "answer": "Actually the reason was all the good game developers were jumping ship to go to the upcoming Xbox as well as the PS2, so they decided to scrap the hardware division and stay with game development.", "provenance": null }, { "answer": "What?!!! OMG, when did this happen?!", "provenance": null }, { "answer": "There are two main reasons. One, there was a clash of cultures between America and Japan. Japan would develop the games, and America would market them. Japan became offended when its American counterpart suggested what games needed to be developed. Japan developed its own games, which were quirky, and difficult to market to American audiences. It wasn't so bad during the Genesis era because there were so many games, but the clash just got worse and worse during the Saturn era. By the time of the Dreamcast, it didn't matter that SEGA had reinvented itself and had the superior machine on the market.\n\nThe second reason was market oversaturation. With the Genesis and Game Gear, SEGA had enough of a market share to chip away at Nintendo's monopoly, and developers were grateful for the benefits of the free market. But then SEGA alienated its base with too much hardware. The SEGA CD and 32X weren't next generation, but they weren't first gen either. They were placeholders? While SEGA was between systems, promising that it would support the SEGA CD and 32X, the Saturn dropped. And it dropped in such a way as to not only alienate consumers, but also retailers--only certain retailers were authorized to sell the Saturn. So if you bought a 32X for Christmas, it was dead by March. Japan fired its American CEO and marketing team to ensure that SEGA's machine would never find an audience.\n\nThen, of course, the PSX drops, and Saturn never catches it. 
Those who are saying that it was the Dreamcast are wrong; SEGA died well before that, during the Saturn years.", "provenance": null }, { "answer": "1. The Sega Genesis had too many add-ons that weren't all that great. (Sega 32X and Sega CD)\n\n2. The Sega Saturn was released early, causing backlash from retailers, some of whom even refused to carry it. It was difficult to program for. It also did not have a native Sonic title.\n\n3. The Sega Dreamcast did not have a DVD player. At the time the PlayStation 2 was actually one of the cheapest DVD players on the market. (This is one reason I could possibly see Nintendo dropping out at some point. It's 2015 and we can't watch DVDs or Blu-rays on the Wii or Wii U)", "provenance": null }, { "answer": "Real reason. Sony and Microsoft are both multi-billion-dollar corporations. Their consoles don't need to make money for the company to remain profitable, because they make money in so many other ways. Sony can finance the PlayStation through their film or television divisions, or vice versa. Microsoft can finance through anything. Sega doesn't have that luxury, and the risk involved in consoles was too high relative to the profit. Nintendo on the other hand has found a niche with \"interactive\" games, and they have a great list of proprietary games and characters, allowing them to remain profitable, albeit not as profitable as Sony or Microsoft. ", "provenance": null }, { "answer": "Sega's management was pretty screwed up throughout the '90s. While Nintendo was focusing on continuing to make great games for the SNES (supporting the console well into the decade with hits like Super Metroid and Donkey Kong Country), Sega was churning out hardware without real purpose.\n\nFirst the 32X and the Sega CD, with both technologies being \"the next big thing\" in gaming but neither with the library to back it up. Then the Game Gear with its \"console-like graphics\", monstrous size and ridiculous battery life. 
The Saturn was released ahead of schedule so it could beat the PlayStation to market, amid poor communication with the audience and even the game developers. Finally, the Dreamcast, the one piece of hardware since the Genesis that showed real promise, landed with a thud amidst the popularity of the PlayStation (and the N64) and hype for the PlayStation 2.", "provenance": null }, { "answer": "Sega had a few flops in a row.\n\nThey tried to push additional life out of the Sega Genesis just before the Saturn released with the 32X. The 32X was way cheaper than any of the independent consoles, but it wasn't really that much of a leap. They launched this just before the Sega Saturn as well, which wound up hurting the Saturn's sales.\n\nThe Saturn itself was technically the most powerful console of that generation, but it was so damn convoluted to write games for that most developers never took full advantage of it. Meanwhile, Sony's PlayStation was more or less the same hardware as Namco's System 11 arcade cabinets, so more developers were familiar with it from the get-go.\n\nThe Saturn pretty much flopped, and for Sega to have remained in the console business after those flops the Dreamcast had to be wildly successful. Unfortunately, it simply didn't measure up enough to keep them in the game.", "provenance": null }, { "answer": "After their highly successful 16bit console, the Genesis / Megadrive, Sega decided to make a 32bit console next, called the Saturn. The Sega Saturn released in September 1995. \n\nSony had just entered the games console industry with their 32bit PlayStation the same year. Because of this, Sega's main competitor at the time, Nintendo, decided that they would skip the 32bit generation and instead focus development on their own 64bit console, which we came to know as the N64. It released in June 1996, less than a year after the Saturn, and an entire generation ahead in terms of tech (at least that's what consumers thought anyway, 64 > 32). 
\n\nSeeing that they were being left behind, Sega had to quickly focus their efforts on producing their own 64bit console, which would come to be known as the Dreamcast. This resulted in them abandoning the Saturn early, which subsequently had a very short lifespan of under 3 years, as games ceased to be produced for the Saturn in 1998. The console was considered a disappointment by consumers.\n\nUnfortunately, Sega was not able to finally ship their 64bit Dreamcast until late November 1998, 3 years after the PlayStation entered the market, and over 2 years since the N64. By this time, both Sega's competitors had established games libraries and fan-bases, and Sega found themselves unable to compete. \n\nThat, coupled with the fact that the Dreamcast suffered from rampant piracy, led to Sega withdrawing from the games console industry, after two successive systems being considered large failures. \n\nEDIT: Dreamcast was in fact 128bit, looks like Sega tried to put themselves a step ahead of the competition, but unfortunately it was too little, too late. Consumers did not trust in Sega's claims to have superior hardware after the slew of low-quality hardware failures (Sega CD and 32X expansions for the Genesis, the Saturn, and the canceled Neptune) in the preceding years.\n\nThat's a brief breakdown of events, anyway. Not that anyone will see this comment since it will be drowned below 1000 others.", "provenance": null }, { "answer": "Because staying in the console industry simply wasn't profitable enough. The Genesis was huge; nothing after came close, to the point of mostly being \"failures\" from a profit perspective.\n\nThe CD did okay (approx. 2.25 million units.)\n\nThe 32X did less okay (under 1 million units.)\n\nBut those were just add-ons, so Sega tried a new console. \n\nThe Saturn did poorly (approx. 9.3 million units.)\n\nThe Dreamcast did even worse (approx. 
9 million units.)\n\nUltimately they just couldn't beat their competitors, namely the PS1 selling 102 million units by the end of its run, and then just a year after the Dreamcast's release the PS2 came out selling 155 million by the end of its run. \n\nSega could have made another console but it just wouldn't make sense profit-wise.", "provenance": null }, { "answer": "Remember when Biggie Smalls was bragging in his songs about owning a Super Nintendo AND a Sega Genesis? Pepperidge Farm remembers.", "provenance": null }, { "answer": "A combination of factors really.\n\nBefore the Dreamcast Sega was supporting, in different regions, the Sega Saturn, the 32X, the Genesis/Megadrive, the Sega CD, the Master System, and the Game Gear.\n\nBy support I don't necessarily mean games so much as technical and customer support. Also while the Megadrive did INSANELY well in Brazil (to the point it's still being made) and in Japan the Saturn was hot (we didn't get a lot of the quirky import titles, but on the other hand Japan had never really let the arcade die, so 2D was still a thing where here stateside we were '3D or GTFO.')\n\nAs you can understand this costs a *LOT* of money to keep going.\n\nAdditionally Sega had alienated both retailers and customers with how they mishandled the 32X and the Saturn. The Sega CD everyone can forgive, a quirky interesting thing that was always advertised as 'just an add-on.' The 32X was at first advertised as Sega's next gen, up to and including shells (with or without hardware I don't know, but the only ones I've seen are empty shells) for a 'Neptune' console which would have been a combination 32X/Genesis all in one.\n\nThe problem is when the 32X was released the Saturn was already out in Japan, and was slated for release for the Christmas shopping rush of 1995. This gave developers and retailers time to gear up and get a solid release date.... 
except Sega announced during a conference in May where they demo'd the Sega Saturn 'Oh yeah, you can buy one RIGHT NOW.' This left the Saturn with a piss-poor launch lineup, retailers with their confidence shaken in Sega to the point KB Toys refused to stock their hardware, and on top of all that the Saturn was stupidly difficult to program for because they went and slapped a co-processor in at the last minute.\n\nBy the way, the 'extra' time was because Sony was set to release the PlayStation roughly around then and the board thought 'well hey, let's just go with getting more units in people's hands, huh?'\n\nThen in 1998 Bernie Stolar, head of Sega of America, basically said in an interview 'The Saturn is not our future.' Granted the Dreamcast was in development but this was a public interview. Consumers and retailers, not to mention developers, saw '*oh shit, Sega's jumping platforms again.*'\n\nSega did learn from the Dreamcast's poor Japanese launch and spent most of 1999 hyping the hell out of it. Just hyping and showing demos and gearing up, and they came in with all guns blazing. The problem wasn't the Dreamcast itself, or even Sony, even though several of the PS2 demos were pre-rendered rather than real-time. Sega's problem is that they didn't have the money to keep going, nor the faith from third party developers to keep mindshare in the face of the PlayStation 2's launch.\n\nThere were attempts by Sega to get Microsoft, who had partnered with them for the Dreamcast, to try making the Xbox able to play Dreamcast games so their customers would have a migration path. While that didn't happen, this explains Jet Set Radio Future, Shenmue, and other games getting ports or sequels on the original Xbox.", "provenance": null }, { "answer": "Imagine if Sega did succeed with the DC, where would we be now? 
Do you think they would have gone to making just games still, or would we have some awesome Sega console today?", "provenance": null }, { "answer": "There's a lot of misunderstanding that the Dreamcast was the reason for Sega pulling out of the console market, but it was actually selling okay when it was cancelled. Not great, but it was on track to outsell the GameCube over its lifetime. The real issue was the loss of consumer trust after the Sega CD and 32X. These were expansions for the Sega Genesis which were unsupported due to low sales and released back to back. This was then followed by the Sega Saturn. That is 3 hardware pieces released in a few years, back to back. People were wary of Sega because it seemed to just drop support at the first sign of trouble. So no one in the largest market for Sega (North America) wanted to risk buying the Saturn because they were worried that the company would do the same thing to the Saturn as they did to the last two hardware pieces. This was on top of the fact that the Saturn had very few games, no killer app, and a lot of the bigger games for the Saturn only came out in Japan.\n\nBy the time the Dreamcast came out, Sega was already financially crippled by the last 3 hardware pieces. New management took over Sega partway through the strategy of the Dreamcast, and that new management decided the console market was too volatile and canned support about two and a half years into the system. This new management also did things like cancelling sequels to well-known Sega franchises, such as a 3D Streets of Rage 4 that was in development, based on its genre alone, without knowing the popularity of Streets of Rage. So what it really came down to was not the Dreamcast, but everything between the Sega Genesis and its eventual release. Had Sega not released the 32X or Sega CD, it's possible they could have kept consumer trust when moving on to the Saturn and Dreamcast. 
On another short note, this is why Nintendo decided to stick with the Wii U as long as it has, despite its current sales. They wanted to avoid Sega-ing themselves out of the console market.", "provenance": null }, { "answer": "After the Dreamcast, Sega decided it would make more money and do better if it was a software developer only. Basically did it because there really wasn't any room for them in the market anymore. That being said, I'd love for them to team up with old-time rival Nintendo to make a kick-ass console. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "29035", "title": "Sega Saturn", "section": "Section::::History.:Decline.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 781, "text": "From 1993 to early 1996, although Sega's revenue declined as part of an industry-wide slowdown, the company retained control of 38% of the U.S. video game market (compared to Nintendo's 30% and Sony's 24%). 800,000 PlayStation units were sold in the U.S. by the end of 1995, compared to 400,000 Saturn units. In part due to an aggressive price war, the PlayStation outsold the Saturn by two-to-one in 1996, while Sega's 16-bit sales declined markedly. By the end of 1996, the PlayStation had sold 2.9 million units in the U.S., more than twice the 1.2 million Saturn units sold. 
The Christmas 1996 \"Three Free\" pack, which bundled the Saturn with \"Daytona USA\", \"Virtua Fighter 2\", and \"Virtua Cop,\" drove sales dramatically and ensured the Saturn remained a competitor into 1997.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "58798706", "title": "History of Sega", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 940, "text": "In response to a downturn in the arcade-game market in the early 1980s, Sega began to develop video game consoles—starting with the SG-1000 and Master System—but struggled against competing products such as the Nintendo Entertainment System. Around the same time, Sega executives David Rosen and Hayao Nakayama executed a management buyout of the company with backing from CSK Corporation. Sega released its next console, the Sega Genesis (known as the Mega Drive outside North America) in 1988. Although it initially struggled, the Genesis became a major success after the release of \"Sonic the Hedgehog\" in 1991. Sega's marketing strategy, particularly in North America, helped the Genesis outsell main competitor Nintendo and their Super Nintendo Entertainment System for four consecutive Christmas seasons in the early 1990s. While the Game Gear and Sega CD achieved less, Sega's arcade business was also successful into the mid 1990s.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25156284", "title": "List of Sega video game consoles", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 889, "text": "Sega was one of the primary competitors to Nintendo in the video game console industry. A few of Sega's early consoles outsold their competitors in specific markets, such as the Master System in Europe. Several of the company's later consoles were commercial failures, however, and the financial losses incurred from the Dreamcast console caused the company to restructure itself in 2001. 
As a result, Sega ceased to manufacture consoles and became a third-party video game developer. The only console that Sega has produced since is the educational toy console Advanced Pico Beena in 2005. Third-party variants of Sega consoles have been produced by licensed manufacturers, even after production of the original consoles had ended. Many of these variants have been produced in Brazil, where versions of the Master System and Genesis are still sold and games for them are still developed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32401", "title": "History of video games", "section": "Section::::1980s.:Third generation consoles (1983–1995) (8-bit).\n", "start_paragraph_id": 75, "start_character": 0, "end_paragraph_id": 75, "end_character": 529, "text": "Whilst a broken gaming industry in the US took several local businesses to bankruptcy and practically ended retail interest in video gaming products, an 8-bit third generation of video game consoles started in Japan as early as 1983 with the release of both Nintendo's Family Computer (\"Famicom\") and Sega's SG-1000 on July 15. The first clearly trumped the second in terms of commercial success in the country, causing Sega to replace it, two years later, by a severely improved and modernized version called the Sega Mark III.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "485939", "title": "Fifth generation of video game consoles", "section": "Section::::History.:Aftermath of the fifth generation.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 446, "text": "After the dust settled in the fifth generation console wars, several companies saw their outlooks change drastically. Atari Corporation, which was not able to recover its losses, ended up being purchased by JT Storage and stopped making game hardware. 
Sega's loss of consumer confidence (coupled with its previous console failures) along with their financial difficulties, set the company up for a similar fate in the next round of console wars.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "58798706", "title": "History of Sega", "section": "Section::::Sega Saturn and falling sales (1994–1999).:Financial losses.\n", "start_paragraph_id": 57, "start_character": 0, "end_paragraph_id": 57, "end_character": 1911, "text": "The Saturn failed to take the lead in the market as its predecessor had. After the launch of the Nintendo 64 in 1996, sales of the Saturn and its games were sharply reduced, while the PlayStation outsold the Saturn by three-to-one in the U.S. in 1997. As of August 1997, Sony controlled 47% of the console market, Nintendo 40%, and Sega only 12%. Neither price cuts nor high-profile game releases proved helpful. Following five years of generally declining profits, in the fiscal year ending March 31, 1998 Sega suffered its first parent and consolidated financial losses since its 1988 listing on the Tokyo Stock Exchange. Due to a 54.8% decline in consumer product sales (including a 75.4% decline overseas), the company reported a net loss of ¥43.3 billion (US$327.8 million) and a consolidated net loss of ¥35.6 billion (US$269.8 million). Shortly before announcing its financial losses, Sega announced that it was discontinuing the Saturn in North America to prepare for the launch of its successor. The Saturn would last longer in Japan and Europe. The decision to abandon the Saturn effectively left the Western market without Sega games for over one year. Sega suffered an additional ¥42.881 billion consolidated net loss in the fiscal year ending March 1999, and announced plans to eliminate 1,000 jobs, nearly a quarter of its workforce. With lifetime sales of 9.26 million units, the Saturn is considered a commercial failure, although its install base in Japan surpassed the Nintendo 64's 5.54 million. 
Lack of distribution has been cited as a significant factor contributing to the Saturn's failure, as the system's surprise launch damaged Sega's reputation with key retailers. Conversely, Nintendo's long delay in releasing a 3D console and damage caused to Sega's reputation by poorly supported add-ons for the Genesis are considered major factors allowing Sony to gain a foothold in the market.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1030901", "title": "Osborne effect", "section": "Section::::Other examples.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 936, "text": "When Sega began publicly discussing their next-generation system (eventually released as the Dreamcast), barely two years after launching the Saturn, it became a self-defeating prophecy. This move, combined with Sega's recent history of short-lived consoles, particularly the Sega Mega-CD and 32X which were considered ill-conceived \"stopgaps\" that turned off gamers and developers alike, led to a chain reaction that quickly caused the Saturn's future to collapse. Immediately following the announcement, sales of the console and software substantially tapered off in the second half of 1997, while many planned games were canceled, causing the console's life expectancy to shorten substantially. While this let Sega focus on bringing out its successor, premature demise of the Saturn caused customers and developers to be skeptical and hold out, which led to the Dreamcast's demise as well, and Sega's exit from the console industry.\n", "bleu_score": null, "meta": null } ] } ]
null
4vd7su
Why do some metals turn bright red and white when they are melting? Why don't they just turn to liquid like mercury does?
[ { "answer": "Because the melting points of most metals are much higher than mercury's, and at high temperatures the thermal radiation all objects give off (including you, your dog, and your chair) shifts into the visible spectrum.\n\nThis is called the [Draper point](_URL_0_). It happens at about 525 degrees Celsius.", "provenance": null }, { "answer": "It's [blackbody radiation](_URL_0_). Everything glows when heated. Mercury \"just\" turns into a liquid because it melts when it's too cool to glow. If you heat it to the temperature you heat metal to make it glow red, it will glow red.", "provenance": null }, { "answer": "Blackbody radiation. *Everything* glows, just most of it at 'colors' that we can't see (infrared). If something gets hot enough, it glows red. Hotter still, white. Whether it's liquid or solid depends on how much energy a particular molecule can handle. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "35802855", "title": "Properties of metals, metalloids and nonmetals", "section": "Section::::Properties.:Metals.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 341, "text": "Some metals appear coloured (Cu, Cs, Au), have low densities (e.g. Be, Al) or very high melting points, are liquids at or near room temperature, are brittle (e.g. Os, Bi), not easily machined (e.g. Ti, Re), or are noble (hard to oxidise) or have nonmetallic structures (Mn and Ga are structurally analogous to, respectively, white P and I).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30364", "title": "Transition metal", "section": "Section::::Characteristic properties.:Physical properties.\n", "start_paragraph_id": 54, "start_character": 0, "end_paragraph_id": 54, "end_character": 509, "text": "In general, transition metals possess a high density and high melting points and boiling points. 
These properties are due to metallic bonding by delocalized d electrons, leading to cohesion which increases with the number of shared electrons. However the group 12 metals have much lower melting and boiling points since their full d subshells prevent d–d bonding, which again tends to differentiate them from the accepted transition metals. Mercury has a melting point of and is a liquid at room temperature.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4459356", "title": "Solvated electron", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 297, "text": "Alkali metals dissolve in liquid ammonia giving deep blue solutions, which are conducting in nature. The blue colour of the solution is due to ammoniated electrons, which absorb energy in the visible region of light. Alkali metals also dissolve in hexamethylphosphoramide, forming blue solutions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12527791", "title": "Alkali metal halide", "section": "Section::::Properties.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 361, "text": "The alkali metal halides exist as colourless crystalline solids, although as finely ground powders appear white. They melt at high temperature, usually several hundred degrees to colorless liquids. Their high melting point reflects their high lattice energies. At still higher temperatures, these liquids evaporate to give gases composed of diatomic molecules.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22317868", "title": "Medieval stained glass", "section": "Section::::Composition, manufacture and distribution.:Colour.:Inherent colour.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 610, "text": "In an oxidising environment metal (and some non-metal) ions will lose electrons. In iron oxides, Fe (ferrous) ions will become Fe (ferric) ions. 
In molten glass this will result in a change in glass colour from pale blue to yellow/brown. In a reducing environment the iron will gain electrons and colour will change from yellow/brown to pale blue. Similarly manganese will change in colour depending on its oxidation state. The lower oxidation state of manganese (Mn) is yellow in common glass while the higher oxidation states (Mn or higher) is purple. A combination of the two states will give a pink glass.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1177234", "title": "Potassium dichromate", "section": "Section::::Uses.:Analytical reagent.:Silver test.\n", "start_paragraph_id": 37, "start_character": 0, "end_paragraph_id": 37, "end_character": 435, "text": "When dissolved in an approximately 35% nitric acid solution it is called Schwerter's solution and is used to test for the presence of various metals, notably for determination of silver purity. Pure silver will turn the solution bright red, sterling silver will turn it dark red, low grade coin silver (0.800 fine) will turn brown (largely due to the presence of copper which turns the solution brown) and even green for 0.500 silver.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27119", "title": "Silver", "section": "Section::::Chemistry.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 876, "text": "Silver does not react with air, even at red heat, and thus was considered by alchemists as a noble metal along with gold. Its reactivity is intermediate between that of copper (which forms copper(I) oxide when heated in air to red heat) and gold. Like copper, silver reacts with sulfur and its compounds; in their presence, silver tarnishes in air to form the black silver sulfide (copper forms the green sulfate instead, while gold does not react). Unlike copper, silver will not react with the halogens, with the exception of fluorine gas, with which it forms the difluoride. 
While silver is not attacked by non-oxidizing acids, the metal dissolves readily in hot concentrated sulfuric acid, as well as dilute or concentrated nitric acid. In the presence of air, and especially in the presence of hydrogen peroxide, silver dissolves readily in aqueous solutions of cyanide.\n", "bleu_score": null, "meta": null } ] } ]
null
cn6xjl
How do Colloids work?
[ { "answer": "Colloids aren’t a molecule or additive, they’re a state of a mixture. Specifically, a colloid is formed when the particles (not atomic particles, particulate/macroscopic particles) are too small to settle out. The particles are so small that gravity doesn’t affect them nearly as much as other forces, like boyancy, eddy currents, or molecular interactions. So these forces determine where any individual particle goes, rather than gravity. \n\nThink of it like dust in the air, the dust is so light any little breeze can set it floating. But since liquids are so much more viscous than air the dust literally can’t fall because the force of gravity can’t pull it down strong enough to move the liquid out of the way. And also, the boyant force isn’t strong enough to send it to the surface, so it just hangs there, totally dependant on the currents of the liquid.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1701055", "title": "Sol–gel process", "section": "Section::::Particles and polymers.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 1110, "text": "The term \"colloid\" is used primarily to describe a broad range of solid-liquid (and/or liquid-liquid) mixtures, all of which contain distinct solid (and/or liquid) particles which are dispersed to various degrees in a liquid medium. The term is specific to the size of the individual particles, which are larger than atomic dimensions but small enough to exhibit Brownian motion. If the particles are large enough, then their dynamic behavior in any given period of time in suspension would be governed by forces of gravity and sedimentation. But if they are small enough to be colloids, then their irregular motion in suspension can be attributed to the collective bombardment of a myriad of thermally agitated molecules in the liquid suspending medium, as described originally by Albert Einstein in his dissertation. 
Einstein concluded that this erratic behavior could adequately be described using the theory of Brownian motion, with sedimentation being a possible long-term result. This critical size range (or particle diameter) typically ranges from tens of angstroms (10 m) to a few micrometres (10 m).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5346", "title": "Colloid", "section": "Section::::As a model system for atoms.\n", "start_paragraph_id": 48, "start_character": 0, "end_paragraph_id": 48, "end_character": 754, "text": "In physics, colloids are an interesting model system for atoms. Micrometre-scale colloidal particles are large enough to be observed by optical techniques such as confocal microscopy. Many of the forces that govern the structure and behavior of matter, such as excluded volume interactions or electrostatic forces, govern the structure and behavior of colloidal suspensions. For example, the same techniques used to model ideal gases can be applied to model the behavior of a hard sphere colloidal suspension. In addition, phase transitions in colloidal suspensions can be studied in real time using optical techniques, and are analogous to phase transitions in liquids. In many interesting cases optical fluidity is used to control colloid suspensions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5200746", "title": "Sedimentation (water treatment)", "section": "Section::::Basics.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 225, "text": "Colloids are particles of a size between 1 nm (0.001 µm) and 1 µm depending on the method of quantification. 
Because of Brownian motion and electrostatic forces balancing the gravity, they are not likely to settle naturally.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5346", "title": "Colloid", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 733, "text": "In chemistry, a colloid is a mixture in which one substance of microscopically dispersed insoluble particles is suspended throughout another substance. Sometimes the dispersed substance alone is called the colloid; the term colloidal suspension refers unambiguously to the overall mixture (although a narrower sense of the word \"suspension\" is distinguished from colloids by larger particle size). Unlike a solution, whose solute and solvent constitute only one phase, a colloid has a dispersed phase (the suspended particles) and a continuous phase (the medium of suspension) that arise by phase separation. To qualify as a colloid, the mixture must be one that does not settle or would take a very long time to settle appreciably.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4855071", "title": "Dispersion (chemistry)", "section": "Section::::Types of dispersions.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 225, "text": "A colloid is a heterogeneous mixture of one phase in another, where the dispersed particles are usually. Like solutions, dispersed particles will not settle if the solution is left undisturbed for a prolonged period of time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1234517", "title": "Nanoparticle", "section": "Section::::Definition.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 533, "text": "The terms colloid and nanoparticle are not interchangeable. A colloid is a mixture which has solid particles dispersed in a liquid medium. 
The term applies only if the particles are larger than atomic dimensions but small enough to exhibit Brownian motion, with the critical size range (or particle diameter) typically ranging from nanometers (10 m) to micrometers (10 m). Colloids can contain particles too large to be nanoparticles, and nanoparticles can exist in non-colloidal form, for examples as a powder or in a solid matrix.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30778041", "title": "Particle", "section": "Section::::Distribution of particles.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 935, "text": "Colloidal particles are the components of a colloid. A colloid is a substance microscopically dispersed evenly throughout another substance. Such colloidal system can be solid, liquid, or gaseous; as well as continuous or dispersed. The dispersed-phase particles have a diameter of between approximately 5 and 200 nanometers. Soluble particles smaller than this will form a solution as opposed to a colloid. Colloidal systems (also called colloidal solutions or colloidal suspensions) are the subject of interface and colloid science. Suspended solids may be held in a liquid, while solid or liquid particles suspended in a gas together form an aerosol. Particles may also be suspended in the form of atmospheric particulate matter, which may constitute air pollution. Larger particles can similarly form marine debris or space debris. A conglomeration of discrete solid, macroscopic particles may be described as a granular material.\n", "bleu_score": null, "meta": null } ] } ]
null
2q3hq2
why should someone never refreeze something that has been unfrozen ?
[ { "answer": "Generally when frozen foods are frozen for sale/storage they are done so in a way that prevents large ice crystals from forming and damaging the food. You can't easily do this at home.\n\nThis was the big advantage that Birds Eye had when it first started - its founder, Clarence Birdseye, realized that fish frozen by the Inuit in Newfoundland was much better than fish frozen in New York and he reasoned that the difference was the speed at which they were frozen. Turns out, if you freeze something faster the ice crystals don't get as big and mess with the food as much.\n\nWhen you unfreeze and re-freeze something you freeze it slowly since you probably don't have an industrial freezing machine and that causes larger ice crystals to form in the food, which makes it taste worse. The rule isn't really \"don't unfreeze and refreeze,\" but rather \"don't freeze for preservation without the proper machinery/techniques.\"", "provenance": null }, { "answer": "When you freeze things, all the water in it crystallizes. Those crystals can rupture cell walls, destroying the texture of the food.\n\nIn commercial environments, they use blast freezers to chill the food really fast, giving you very tiny crystals. Tiny crystals are less likely to damage your food. 
Home freezers are much warmer & result in much larger crystals which are likely to puncture cell walls, leaving you with nasty, limp broccoli.\n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "339605", "title": "Frozen food", "section": "Section::::Defrosting.\n", "start_paragraph_id": 49, "start_character": 0, "end_paragraph_id": 49, "end_character": 260, "text": "People sometimes defrost frozen foods at room temperature because of time constraints or ignorance; such foods should be promptly consumed after cooking or discarded and never be refrozen or refrigerated since pathogens are not killed by the freezing process.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "934835", "title": "Savers", "section": "Section::::Business.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 413, "text": "Items deemed resellable are displayed for purchase in stores. Savers also has a recycling program and attempts to recycle any reusable items that cannot be sold at the stores, as well as any items that do not sell over a period of time to make room for fresh merchandise. Savers has buyers for its recyclables throughout the world and attempts to keep as much donated product out of the waste stream as possible.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4065672", "title": "Reuse", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 623, "text": "Reuse is the action or practice of using something again, whether for its original purpose (conventional reuse) or to fulfil a different function (creative reuse or repurposing). It should be distinguished from recycling, which is the breaking down of used items to make raw materials for the manufacture of new products. Reuse – by taking, but not reprocessing, previously used items – helps save time, money, energy and resources. 
In broader economic terms, it can make quality products available to people and organizations with limited means, while generating jobs and business activity that contribute to the economy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4065672", "title": "Reuse", "section": "Section::::Addressing issues of repair, reuse and recycling.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 395, "text": "One way to address this is to increase product longevity; either by extending a product’s first life or addressing issues of repair, reuse and recycling. Reusing products, and therefore extending the use of that item beyond the point where it is discarded by its first user is preferable to recycling or disposal, as this is the least energy intensive solution, although it is often overlooked.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2263904", "title": "Carbon footprint", "section": "Section::::Ways to reduce personal carbon footprint.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 381, "text": "This can also be done by using reusable items such as thermoses for daily coffee or plastic containers for water and other cold beverages rather than disposable ones. If that option isn't available, it is best to properly recycle the disposable items after use. When one household recycles at least half of their household waste, they can save 1.2 tons of carbon dioxide annually.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7623241", "title": "Sustainable Development Strategy in Canada", "section": "Section::::Action plan.:The 3Rs.\n", "start_paragraph_id": 78, "start_character": 0, "end_paragraph_id": 78, "end_character": 325, "text": "Reuse means suitable to use again or for further use. It is the quality or state of being to be reusable. What cannot be reduced they should try to reuse. 
After a product or material has been used once, every effort should be made to reuse it. Products that can be reused should be favoured over those that are non-reusable.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8246", "title": "Dumpster diving", "section": "Section::::Overview.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 964, "text": "A wide variety of things may be disposed while still repairable or in working condition, making salvage of them a source of potentially free items for personal use, or to sell for profit. Irregular, blemished or damaged items that are still otherwise functional are regularly thrown away. Discarded food that might have slight imperfections, near its expiration date, or that is simply being replaced by newer stock is often tossed out despite being still edible. Many retailers are reluctant to sell this stock at reduced prices because of the risks that people will buy it instead of the higher-priced newer stock, that extra handling time is required, and that there are liability risks. In the United Kingdom, cookery books have been written on the cooking and consumption of such foods, which has contributed to the popularity of skipping. Artists often use discarded materials retrieved from trash receptacles to create works of found objects or assemblage.\n", "bleu_score": null, "meta": null } ] } ]
null
2k9icr
the ending of the sopranos
[ { "answer": "Here's a very detailed analysis:\n\n_URL_0_\n\nI personally think you have to pick one of two narratives:\n\n1. Tony was killed by the man in the members only jacket, with the ending tied to prior episodes where the characters discuss the fact that you you never see death coming. The fade to black is Tony's point of view as he dies.\n\nor\n\n2. Tony lives, and the audience is just on edge experiencing what it will be like for Tony for the rest of his life -- always worrying about death and his assassination from an unnoticed attacker.\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "470797", "title": "The Bald Soprano", "section": "Section::::Meaning.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 417, "text": "\"The Bald Soprano\" appears to have been written as a continuous loop. The final scene contains stage instructions to start the performance over from the very beginning, with the Martin couple substituted for the Smith couple and vice versa. However, this decision was only added in after the show's hundredth performance, and it was originally the Smiths who restarted the show, in exactly the same manner as before.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "78242", "title": "The Sopranos", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 631, "text": "The Sopranos is an American crime drama television series created by David Chase. The story revolves around Tony Soprano (James Gandolfini), a New Jersey-based Italian-American mobster, portraying the difficulties that he faces as he tries to balance his family life with his role as the leader of a criminal organization. These are explored during his therapy sessions with psychiatrist Jennifer Melfi (Lorraine Bracco). 
The series features Tony's family members, mafia colleagues, and rivals in prominent roles—most notably his wife Carmela (Edie Falco) and his protégé/distant cousin Christopher Moltisanti (Michael Imperioli).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1629034", "title": "George Brecht", "section": "Section::::New York avant-garde.:Flute Solo.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 396, "text": "'[A] soprano was bugging everybody with temper tantrums during rehearsal. At a certain point the orchestra crashed onto a major seventh and there was silence for the soprano and flute cadenza. Nothing happened. The soprano looked into the orchestra pit and saw that my father had completely taken apart his flute, down to the last screw. (I used this idea in my 1962 FLUTE SOLO).' George Brecht \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4591767", "title": "Denial, Anger, Acceptance", "section": "Section::::Reception.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 748, "text": "In a retrospective review, Emily VanDerWerff of \"The A.V. Club\" wrote that the \"[ending] montage - intercut with Tony watching Meadow sing - is one of the first moments when \"The Sopranos\" takes music and rises above its prosaic, muddy universe to become something like sublime\"; VanDerWerff commented that although the episode \"is a 'Let's get the plot wheels turning!' kind of episode, and those sorts of episodes can be a little trying from time to time\", there is nonetheless \"lots of it that is just expertly executed\". 
Alan Sepinwall praised Gandolfini's performance as well as the story involving Carmela and Charmaine, writing that the show \"has a really great eye and ear for insults – particularly ones not necessarily intended as such\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "666604", "title": "List of The Sopranos episodes", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 387, "text": "\"The Sopranos\", a television drama series created by David Chase, premiered on the premium television channel HBO in the United States on January 10, 1999, and ended on June 10, 2007. The series consists of a total of 86 episodes over six seasons. Each episode is approximately 50 minutes long. The first five seasons each consist of thirteen episodes, and the sixth season twenty-one. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11002461", "title": "Made in America (The Sopranos)", "section": "Section::::Reception.:Response.:Initial.\n", "start_paragraph_id": 70, "start_character": 0, "end_paragraph_id": 70, "end_character": 247, "text": "Tim Goodman of the \"San Francisco Chronicle\" characterized the finale as \"[a]n ending befitting genius of \"Sopranos\"\" and wrote that \"Chase managed, with this ending, to be true to reality [...] while also steering clear of trite TV conventions.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4915293", "title": "List of The Sopranos characters in the Soprano crime family", "section": "Section::::DiMeo crime family overview.:History.:Shooting of Tony Soprano.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 514, "text": "The shooting of Tony Soprano set off a media frenzy, with reporters stalking the Soprano house and outside the hospital where Tony lay in a coma. 
Junior Soprano was arrested and questioned about the shooting, which he insisted must have been a self-inflicted gunshot by Tony, whom he labeled as a \"depression case\". The captains of the Family agreed to cut all ties to Junior and allow Tony to decide what happens to him. Junior was judged to be mentally unstable and was sent to a mental rehabilitation facility.\n", "bleu_score": null, "meta": null } ] } ]
null
3fpcdl
why do some sites not allow you to have special characters in your password? wouldn't it be better to always have as secure a password as possible?
[ { "answer": "There have been news stories about websites getting hacked, and the hackers making off with lots of customer/employee/third-party information. How do hackers normally do this?\n\nIn the younger days of the internet, most sites were vulnerable to an exploit called SQL injection. SQL is a programming language associated with a type of database. A database is what actually stores information that website visitors put into the website so they can use the site how they want. Most sites these days have a database they rely on to control--among other things--how users can access the site's services. Other sites use flat files or databases stored in flat files, such as those created and managed by Microsoft Access, which come with their own problems, but we'll stick with actual databases here. \n\nFor example, whenever you sign up to use the services of a website, the information you put into the account setup page is then stored in a database. You are actually telling the web application what information to put into the database by inputting values into the page's input fields, then clicking the \"Submit\" button. The page will ask you for things such as a username and password, maybe your name and birthdate, and whatever else the web site needs to build a profile of you. Depending on the site, it may even ask a user for their credit card details and other financial information. The web application updates the database by using an application account to log into the database, then either inserting new information into the database, or updating the information already in the database so it can be used later. 
When you then log into the website later, you input your username and password you set up when you created your account, and the application attempts to retrieve the information from the database, checking to see if what you put in is the same as what is in the database, then it gives you access to the services available to your account.\n\nWhere it will sometimes go wrong is when the web application is told to get data from the database that the application is not supposed to return to a normal user. This is accomplished by knowing what the application expects the user to put into a certain input field, knowing how the application's queries to the database are written, and knowing that either the application administrator or the database administrator did not limit the web application's access to the database--giving the application access to everything in the database, instead of limiting the application for just the tables and permissions it needs.\n\nHere is a simplistic example. A user wants to log into a web site. The user puts in the username and password for his or her account. The web application usually then logs into the database and performs a query--SELECT \\* FROM users where username=user1 and password=password2. If a valid row is returned, the user gets logged into the site, is given the permissions the account is set up with, and continues the session. If no data is returned, the user is deposited on the \"Wrong password\" page. Seems logical, right?\n\nNow, say a hacker logs into the same website, but he or she does not put in a valid username--the hacker simply puts in an asterisk in the username and password fields. Depending on how the application is written, it may log into the database and perform this query instead--SELECT \\* FROM users where username=\\* and password=\\* (reddit may have a formatting problem with that). 
Depending on how the application and database are set up, the hacker may get a database error, he or she may get logged into the first account set up in the web site (which often is the admin account), or the application may return a list of user accounts, complete with usernames and passwords (and maybe credit card numbers, expiration dates, etc). You can see how this may be a problem. There are other methods of tricking the application into giving up data, using a similar method of SQL injection, but the result is often the same--either someone has acquired more information than they need, or someone is just given the keys to the kingdom.\n\nTo combat this problem, some websites have written the database queries their applications use to get data from the database so that queries do not return so much data, which in some cases results in a tiny performance boost. For example, a SELECT userid FROM users where username=\\* and password=* may simply return a list of userids instead of the entire users table--but some applications will allow the hacker to be logged into the first account in the database--the admin account, usually. Also, most sites do not save account passwords to their databases in readable form--they run the passwords through a one-way hashing algorithm before saving them as hashes in the database--instead of \"password2\", there may be a long string of random characters and numbers in its place. So, a user logs into the database with \"password2\", the application immediately puts the user's input through the same hashing algorithm, and if it matches the resulting hash in the database, the user is allowed access to the account. However, in poorly-written applications, this still results in an unauthorized user logged into someone else's account. In an effort to stop this, most sites today use code which scrubs most special characters out of input fields to guard against SQL injection. 
So, when a hacker goes to put asterisks in an input field, the hacker presses Submit, the application immediately scrubs the asterisks out, and the hacker is left with either a database error, or--on well-written applications--deposited on a Wrong Password page. This is normally called \"sanitizing\" the input. Most of the sites you will browse have this feature in the application, but some sites will not. Best to not try and find out.\n\nSo, basically, you're not allowed to have special characters in the password field on some sites because the website owners don't want other people getting access to your information, so they run most of their inputs through a scrubber that stops what used to be a very prolific exploit.", "provenance": null }, { "answer": "Laziness. Sanitizing inputs takes time, effort and knowledge. So some developers take the easy (and less secure) way out.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "152420", "title": "Passphrase", "section": "Section::::Compared to passwords.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 980, "text": "But passwords are typically not safe to use as keys for standalone security systems (e.g., encryption systems) that expose data to enable offline password guessing by an attacker. Passphrases are theoretically stronger, and so should make a better choice in these cases. First, they usually are (and always should be) much longer—20 to 30 characters or more is typical—making some kinds of brute force attacks entirely impractical. Second, if well chosen, they will not be found in any phrase or quote dictionary, so such dictionary attacks will be almost impossible. Third, they can be structured to be more easily memorable than passwords without being written down, reducing the risk of hardcopy theft. 
However, if a passphrase is not protected appropriately by the authenticator and the clear-text passphrase is revealed its use is no better than other passwords. For this reason it is recommended that passphrases not be reused across different or unique sites and services.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24304", "title": "Password", "section": "Section::::Choosing a secure and memorable password.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 201, "text": "In 2013, Google released a list of the most common password types, all of which are considered insecure because they are too easy to guess (especially after researching an individual on social media):\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4459886", "title": "Password strength", "section": "Section::::Password guess validation.:Human-generated passwords.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 1042, "text": "The full strength associated with using the entire ASCII character set (numerals, mixed case letters and special characters) is only achieved if each possible password is equally likely. This seems to suggest that all passwords must contain characters from each of several character classes, perhaps upper and lower case letters, numbers, and non-alphanumeric characters. In fact, such a requirement is a pattern in password choice and can be expected to reduce an attacker's \"work factor\" (in Claude Shannon's terms). This is a reduction in password \"strength\". A better requirement would be to require a password NOT to contain any word in an online dictionary, or list of names, or any license plate pattern from any state (in the US) or country (as in the EU). If patterned choices are required, humans are likely to use them in predictable ways, such a capitalizing a letter, adding one or two numbers, and a special character. 
This predictability means that the increase in password strength is minor when compared to random passwords.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8672984", "title": "Password fatigue", "section": "Section::::Related issues.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 484, "text": "Many sites, in an attempt to prevent users from choosing easy-to-guess passwords, add restrictions on password length or composition which contribute to password fatigue. In many cases, the restrictions placed on passwords actually serve to decrease the security of the account (either by preventing good passwords or by making the password so complex the user ends up storing it insecurely, such as on a post-it note). Some sites also block non-ASCII or non-alphanumeric characters.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4459886", "title": "Password strength", "section": "Section::::Password guess validation.:Usability and implementation considerations.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 400, "text": "Authentication programs vary in which characters they allow in passwords. Some do not recognize case differences (e.g., the upper-case \"E\" is considered equivalent to the lower-case \"e\"), others prohibit some of the other symbols. In the past few decades, systems have permitted more characters in passwords, but limitations still exist. 
Systems also vary in the maximum length of passwords allowed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4459886", "title": "Password strength", "section": "Section::::Password policy.:Creating and handling passwords.\n", "start_paragraph_id": 90, "start_character": 0, "end_paragraph_id": 90, "end_character": 808, "text": "The hardest passwords to crack, for a given length and character set, are random character strings; if long enough they resist brute force attacks (because there are many characters) and guessing attacks (due to high entropy). However, such passwords are typically the hardest to remember. The imposition of a requirement for such passwords in a password policy may encourage users to write them down, store them in mobile devices, or share them with others as a safeguard against memory failure. While some people consider each of these user resorts to increase security risks, others suggest the absurdity of expecting users to remember distinct complex passwords for each of the dozens of accounts they access. For example, in 2005, security expert Bruce Schneier recommended writing down one's password:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24304", "title": "Password", "section": "Section::::Factors in the security of a password system.:Password security architecture.\n", "start_paragraph_id": 73, "start_character": 0, "end_paragraph_id": 73, "end_character": 272, "text": "BULLET::::- Some systems require characters from various character classes in a password—for example, \"must have at least one uppercase and at least one lowercase letter\". However, all-lowercase passwords are more secure per keystroke than mixed capitalization passwords.\n", "bleu_score": null, "meta": null } ] } ]
null
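The entropy argument in the password-strength passages above (uniformly random characters beat patterned character-class choices) can be sketched numerically. This is an illustrative helper, not part of any cited source; `entropy_bits` is an assumed name, and the arithmetic only holds under the stated uniform-randomness assumption.

```python
import math

def entropy_bits(length: int, charset_size: int) -> float:
    """Bits of entropy in a password of `length` characters drawn
    uniformly at random from an alphabet of `charset_size` symbols."""
    return length * math.log2(charset_size)

# An 8-character password over the ~95 printable ASCII characters,
# chosen truly at random:
full_random = entropy_bits(8, 95)      # roughly 52.6 bits

# The same length restricted to lowercase letters only:
lowercase_only = entropy_bits(8, 26)   # roughly 37.6 bits

# The passage's point: forcing "one uppercase, one digit, one symbol"
# in predictable positions buys far less than the full-charset figure,
# because an attacker models the pattern instead of searching 95**8.
```

Human-chosen passwords that merely satisfy composition rules have much lower effective entropy than these uniform-random figures suggest.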
105pfj
What sort of judicial system did the Confederacy have during the Civil War?
[ { "answer": "Someone did a presentation about this in the constitutional law class I took ages ago but I barely remember it. I do remember that at a national level, the [Confederate Constitution took the Article III language virtually word-for-word](_URL_0_) from the United States Constitution. It empowered the creation of a supreme court and tribunals inferior to it, but Davis and the Confederate Congress never got around to appointing justices to a Supreme Court of the Confederacy. I believe most of the district court judges simply continued serving in the antebellum benches. Given that early American courts frequently cited British common law precedent from before the Revolution (and the Supreme Court still cites pre-revolutionary British common law to this day), I'd theorize that the Confederate district courts continued to operate under prior precedent unless that precedent was explicitly changed -- but don't cite me on that.\n\nState courts, to my knowledge, functioned the same way they did before secession since the states themselves didn't change. It'd be interesting to see the extent to which the Confederate courts changed after 1865, because Republican state legislatures theoretically could've created a whole new judiciary by ousting the sitting judges on treason charges. That'd be interesting to research too.\n\nIf you do end up researching the Confederate judiciary, please send me your findings! My interest has definitely been piqued.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "7023", "title": "Confederate States of America", "section": "Section::::Government and politics.:Constitution.:Judicial.\n", "start_paragraph_id": 203, "start_character": 0, "end_paragraph_id": 203, "end_character": 895, "text": "Confederate district courts were authorized by Article III, Section 1, of the Confederate Constitution, and President Davis appointed judges within the individual states of the Confederate States of America. 
In many cases, the same US Federal District Judges were appointed as Confederate States District Judges. Confederate district courts began reopening in early 1861, handling many of the same type cases as had been done before. Prize cases, in which Union ships were captured by the Confederate Navy or raiders and sold through court proceedings, were heard until the blockade of southern ports made this impossible. After a Sequestration Act was passed by the Confederate Congress, the Confederate district courts heard many cases in which enemy aliens (typically Northern absentee landlords owning property in the South) had their property sequestered (seized) by Confederate Receivers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7023", "title": "Confederate States of America", "section": "Section::::Government and politics.:Constitution.:Judicial.\n", "start_paragraph_id": 202, "start_character": 0, "end_paragraph_id": 202, "end_character": 460, "text": "The Confederate Constitution outlined a judicial branch of the government, but the ongoing war and resistance from states-rights advocates, particularly on the question of whether it would have appellate jurisdiction over the state courts, prevented the creation or seating of the \"Supreme Court of the Confederate States;\" the state courts generally continued to operate as they had done, simply recognizing the Confederate States as the national government.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2642853", "title": "Institutions in the Southern Victory Series", "section": "Section::::Politics.:Confederate States of America.:Judicial system.\n", "start_paragraph_id": 124, "start_character": 0, "end_paragraph_id": 124, "end_character": 631, "text": "The Confederate Constitution was modeled on that of the United States, allowing for an independent judiciary - a Supreme Court and various subordinate courts, and in addition each state had its own judicial system. 
Prior to the advent of Featherston, the judiciary was independent and on occasion made rulings displeasing to members of the legislative and executive branches - though discrimination of Blacks, first as slaves and later as non-citizen \"residents\", was a basic ingredient of the system which the courts consistently upheld (and which was, in fact, never challenged by any significant force among white Confederates)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7023", "title": "Confederate States of America", "section": "Section::::Government and politics.:Constitution.:Legislative.\n", "start_paragraph_id": 180, "start_character": 0, "end_paragraph_id": 180, "end_character": 337, "text": "The only two \"formal, national, functioning, civilian administrative bodies\" in the Civil War South were the Jefferson Davis administration and the Confederate Congresses. The Confederacy was begun by the Provisional Congress in Convention at Montgomery, Alabama on February 28, 1861. It had one vote per state in a unicameral assembly.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "253866", "title": "Confederate States Constitution", "section": "Section::::Judicial review.\n", "start_paragraph_id": 78, "start_character": 0, "end_paragraph_id": 78, "end_character": 613, "text": "Although the Confederate States Supreme Court was never constituted, the supreme courts of the various Confederate states issued numerous decisions interpreting the Confederate Constitution. Unsurprisingly, given that the Confederate Constitution was based on the United States Constitution, the Confederate State Supreme Courts often used United States Supreme Court precedents. The jurisprudence of the Marshall Court, thus, influenced the interpretation of the Confederate Constitution. 
The state courts repeatedly upheld robust powers of the Confederate Congress, especially on matters of military necessity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32411133", "title": "Kenneth C. Martis", "section": "Section::::Contributions to Political Geography.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 1355, "text": "During the Civil War the Confederate Constitution set up a government with a president and a legislature, which was composed of a Senate and House of Representatives. The Confederate congressional atlas maps the districts, characteristics, elections and roll call voting behavior of this institution. The illustration shows the Union occupation status of the 106 districts of the Confederate House of Representatives in late 1863 and early 1864. Note the Confederacy admitted the slave states of Missouri and Kentucky, and they had full voting rights in the Confederate Congress, in spite of being Union controlled virtually from the beginning of the war, and in spite of their continued representation in the United States Congress. At the end of the First Confederate Congress only a little over a half of the House districts (52.8%) were unoccupied. The Union occupied areas mostly supported the increasingly stringent legislative proposals of Confederate President Jefferson Davis with respect to measures like conscription, impressment and habeas corpus. 
In other words, Confederate congressmen from occupied districts tended to support increasing conscription knowing men from their areas would not be subject, but many representatives from unoccupied districts tended to vote no, knowing men from their areas would bear the brunt of being drafted.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14465040", "title": "Knox County, Kentucky", "section": "Section::::History.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 449, "text": "The Civil War Battle of Barbourville was fought on September 19, 1861, between 800 Confederate soldiers from General Felix Zollicoffer's command and 300 Union troops who attempted to defend the Union's Camp Andrew Johnson. The Union men tore up the planks on the bridge in an attempt to keep the Confederates from crossing, but the more numerous Confederates succeeded anyway. They destroyed the camp and seized the arms and equipment it contained.\n", "bleu_score": null, "meta": null } ] } ]
null
3jhveu
stock dividends
[ { "answer": "If a corporation makes a profit, it may decide that it wants to share some of the corporation's profits with its owners (known as shareholders). The profit that is paid by a corporation to its shareholders is called the dividend. The dividend is issued \"per share\", which means that the corporation might pay $1 per share. In that situation, someone who owns 10 shares of the corporation's stock would receive $10 and someone else who owns 25 shares would get $25. The corporation is free to decide the amount it pays per share.", "provenance": null }, { "answer": "Ok, so to understand dividends, you have to understand the point of a company issuing stock.\n\nA company issues stock in order to exchange shares of ownership of the company for capital they can use to expand. The owners of a private company will issue stock, and sell it to the public in what is called an initial public offering (IPO).\n\nUsually, the IPO is the main time that a company can actually turn shares of ownership into cash money. Most companies and the original owners of the private company will retain at least some shares for themselves that they have the option to sell later, but they usually don't sell since this is also equivalent to giving up some control of the company.\n\nThis is because owning a share of stock also gives you some rights in regards to the company. For one, the company's financial reports are required to be made available to share holders.\n\nAlso, important decisions about the company can sometimes be left to a vote among share holders. For example, the membership of the board of directors is usually nominally controlled by a vote, and if enough share holders are upset with a given board member, they can kick them out.\n\nThat's a long explanation to make the point that owning a share of stock makes you a partial owner of the company. As a part owner, you are entitled to some of the proceeds of the company, should there be any. 
This is the important bit that explains dividends.\n\nUsually, a company doesn't pay out any money while it still has room to expand. Every bit of money that comes in and isn't earmarked for something else gets rolled back into hiring new people, buying new equipment, and producing more of whatever the company does.\n\nBut the theoretical expectation is that once a company gets big enough, it will run out of ways to productively expand, and more or less stop growing. When a company reaches maturity like this, it will start issuing dividends out of its cash reserves. These are the disbursements of the money that shareholders have purchased the rights to.\n\nNow, normally, many companies aren't anywhere near that level of maturity. But in theory the possibility that any given company will eventually start paying dividends is what drives the stock price on the open market. If a company is doing well, it's more likely to eventually issue dividends, and the price per share should go up. If it's doing poorly, then it may fail before it starts paying dividends and the price should go down.\n\nWe tend to lose sight of this, since most people make money from the stock market by buying stock low and selling high, without ever receiving dividends at all, but the whole system makes a lot more sense when you realize that stock value is driven by potential future dividends and not some sort of inexplicable self-sustaining cycle.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "447151", "title": "Dividend yield", "section": "Section::::S&P 500.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 464, "text": "In 1982 the dividend yield on the S&P 500 Index reached 6.7%. Over the following 16 years, the dividend yield declined to just 1.4% during 1998, because stock prices increased faster than dividend payments from earnings, and public company earnings increased slower than stock prices. 
During the 20th century, the highest growth rates for earnings and dividends over any 30-year period were 6.3% annually for dividends, and 7.8% for earnings\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "447151", "title": "Dividend yield", "section": "Section::::Dow Industrials.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 608, "text": "The dividend yield of the Dow Jones Industrial Average, which is obtained from the annual dividends of all 30 companies in the average divided by their cumulative stock price, has also been considered to be an important indicator of the strength of the U.S. stock market. Historically, the Dow Jones dividend yield has fluctuated between 3.2% (during market highs, for example in 1929) and around 8.0% (during typical market lows). The highest ever Dow Jones dividend yield occurred in 1932 when it yielded over 15%, which was years after the famous stock market collapse of 1929, when it yielded only 3.1%.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "447151", "title": "Dividend yield", "section": "Section::::Common share dividend yield.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 381, "text": "Unlike preferred stock, there is no stipulated dividend for common stock (\"ordinary shares\" in the UK). Instead, dividends paid to holders of common stock are set by management, usually with regard to the company's earnings. There is no guarantee that future dividends will match past dividends or even be paid at all. 
The historic yield is calculated using the following formula:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "16741267", "title": "High-yield stocks", "section": "Section::::Concept.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 370, "text": "A high dividend yield indicates undervaluation of the stock because the stock's dividend is high relative to the stock price. High dividend yields are a particularly sought after by income and value investors. High-yield stocks tend to outperform low yield and no yield stocks during bear markets because many investors consider dividend paying stocks to be less risky.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13733340", "title": "Common stock dividend", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 380, "text": "A common stock dividend is the dividend paid to common stock owners from the profits of the company. Like other dividends, the payout is in the form of either cash or stock. The law may regulate the size of the common stock dividend particularly when the payout is a cash distribution tantamount to a liquidation. Such cash dividends may serve the intent of defrauding creditors.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "447151", "title": "Dividend yield", "section": "Section::::Preferred share dividend yield.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 383, "text": "Dividend payments on preferred stocks (\"preference shares\" in the UK) are set out in the prospectus. The name of the preferred share will typically include its nominal yield: for example, a 6% preferred share. However, the dividend may under some circumstances be passed or reduced. The current yield is the ratio of the annual dividend to the current market price, which will vary.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "315801", "title": "U.S. 
Steel", "section": "Section::::History.:Dividend history.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 483, "text": "The Board of Directors considers the declaration of dividends four times each year, with checks for dividends declared on common stock mailed for receipt on 10 March, June, September, and December. In 2008, the dividend was $0.30 per share, the highest in company history, but on April 27, 2009, it was reduced to $0.05 per share. Dividends may be paid by mailed check, direct electronic deposit into a bank account, or be reinvested in additional shares of U.S. Steel common stock.\n", "bleu_score": null, "meta": null } ] } ]
null
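The per-share arithmetic in the first dividends answer (a $1-per-share dividend paying $10 on 10 shares and $25 on 25) can be sketched directly; `dividend_payout` is an illustrative name, not a real API.

```python
def dividend_payout(dividend_per_share: float, shares_owned: int) -> float:
    """Cash received in one dividend payment: the board-declared
    per-share amount times the number of shares held."""
    return dividend_per_share * shares_owned

# The example from the answer above: a $1.00-per-share dividend.
assert dividend_payout(1.00, 10) == 10.00
assert dividend_payout(1.00, 25) == 25.00
```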
d4oqx2
what determines whether a pro sports team is named after a city (dallas cowboys), or after a state (minnesota vikings)?
[ { "answer": "It is down to the owner, plus any special tax break the local community has given them, which may also involve promoting the city or state. In addition, existing teams in the state may restrict the naming process.", "provenance": null }, { "answer": "That’s completely up to the team owner(s); they can name it whatever they like. \n\nLook at the Angels in baseball, for example. Over the years, they’ve been the California Angels, Anaheim Angels, Los Angeles Angels, and now the Los Angeles Angels of Anaheim.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "102607", "title": "Minneapolis–Saint Paul", "section": "Section::::Sports.\n", "start_paragraph_id": 288, "start_character": 0, "end_paragraph_id": 288, "end_character": 296, "text": "Some other sports teams gained their names from being in Minnesota before relocating. The Los Angeles Lakers get their name from once being based in Minneapolis, the City of Lakes. The Dallas Stars also derived their present name from their tenure as a Minnesota team, the Minnesota North Stars.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5646753", "title": "All American Football", "section": "Section::::Gameplay.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 532, "text": "The game did not have licenses from the NFL, NFLPA or the NCAA. Because of this, pro teams were only referred to by city (Green Bay, Pittsburgh, etc.), state (Minnesota) or region (New England). Most of the college teams featured were ones with names that were based geographically (Michigan, Wisconsin, etc.) or militaristic (Army and Navy). This feature was purely cosmetic as the teams all played the same regardless of which one was chosen. 
It also allowed users to choose their own team colors no matter which team they chose.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7308411", "title": "Madden Football 64", "section": "Section::::License.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 772, "text": "Teams are referred to by city only, usually the city in which the real life team's stadium is located. The New England Patriots are referred to as \"Foxboro\", the Tennessee Oilers as \"Nashville\", the Arizona Cardinals as \"Phoenix\", the Minnesota Vikings as \"Minneapolis\", the Tampa Bay Buccaneers simply as \"Tampa\", the Carolina Panthers as \"Charlotte\", and a historic team, the Los Angeles Rams referred to as \"Anaheim\". Team uniforms are altered; all uniforms have white pants, helmet colors are often altered to be different from jerseys (only the Denver Broncos's home jersey and helmet are the same color), and even some already different colors are changed – Foxboro's helmet in this game is red, and Charlotte's is Carolina blue, when in real life both were silver.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "437210", "title": "Tecmo Bowl", "section": "Section::::Gameplay.:Teams.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 356, "text": "Tecmo was not able to get the NFL's consent to use real team names. As a result, the teams in the game are identified solely by their home city or state. However, through the NFLPA license, each roster mimics that of the NFL team based out of the same city or state. 
Tecmo Bowl only uses players from twelve of the best and most popular teams of the time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13258806", "title": "1987 Seattle Seahawks season", "section": "Section::::Schedule.:Regular season.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 204, "text": "The teams fielded by NFL clubs bore little resemblance to those the fans had come to recognize through previous seasons. Fans tagged the replacement player teams with mock names like \"Seattle Sea-scabs.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31440731", "title": "2011–12 Russian Second Division", "section": "Section::::Team names.\n", "start_paragraph_id": 88, "start_character": 0, "end_paragraph_id": 88, "end_character": 857, "text": "In the Russian sports tradition, each team has a proper name written in parentheses followed by the indication of the city it represents in brackets: \"Spartak\" (Moscow), rather than Moscow Spartak, as would be in the English-language tradition. In English, the parentheses and brackets are usually omitted. Further, while North American team names normally use the plural (Chicago Bulls), Russian team names are usually singular. The names tend to reflect the imagined profession of the team players (or rather their fans, like with Edmonton Oilers), or refer to a geographical object related to the city the team represents (usually, a river or a mountain range), or to one of the former Russian-wide sports associations (Spartak, Dynamo etc.), or else to the sponsoring corporation. 
Below is the list of Second Division teams with their names translated:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30871687", "title": "1967 NHL expansion", "section": "Section::::Expansion teams.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 678, "text": "Six franchises were ultimately added: the California Seals (San Francisco – Oakland), Los Angeles Kings, Minnesota North Stars, Philadelphia Flyers, Pittsburgh Penguins, and St. Louis Blues. Had one of the teams been unable to start, a franchise would have then been awarded to Baltimore. Four of those teams are still playing in their original cities under their original names. In 1978, the North Stars merged with the Cleveland Barons, who were the relocated Seals, and in 1993 the North Stars became the Dallas Stars. Both the San Francisco-Oakland market and the Minneapolis-St. Paul markets were eventually granted new teams as the San Jose Sharks and the Minnesota Wild.\n", "bleu_score": null, "meta": null } ] } ]
null
1cmgip
If I travel fast enough to red or blue shift radio waves, what would happen to the sound coming from a radio program?
[ { "answer": "Well, it would change the station first of all... So if you tuned in at 98.5 MHz you might instead tune in at 98.6 MHz. \n\nAfter that I think things should be slowed down, regardless of whether it is [AM](_URL_0_) or [FM](_URL_1_). If we had a 10% elongation of the wave, we should also have a 10% slowdown in modulation of the wave, so a 100Hz tone lasting for a second would instead be ~90Hz lasting for 1.1 seconds. ", "provenance": null }, { "answer": "If it is amplitude modulation then obviously if you compensate for the shift of the carrier wave (by tuning to the shifted frequency) and do not compensate for the shift of the carried wave, you will hear a shifted recording. \nIt is easy to imagine visually - stretch/compress the compound wave, demodulate (filter the carrier) and you are left with a stretched/compressed recording. \nThe same applies for frequency modulation.\n\nAlthough, if phase-shift keying is employed, this should not be impacted, since the phase shift would remain the same even if the carrier wave gets shifted.\n", "provenance": null }, { "answer": "In the case of AM, the entire wave is based on the amplitude of the carrier, which is not changed during Doppler shifting. The entire signal appears compressed (or expanded) due to the shifting, so the modulating signal should also appear compressed or expanded by the same ratio. So you'd hear a proportional shift in tone.\n\nI don't possess the math-fu to try to work out how the spectrum of FM radio would change. I tried staring at _URL_0_ but I don't have nearly enough familiarity with Fourier transforms to give you an answer that I'd be confident about. ~~My rather simple reasoning would have me believe that the signal would get louder during a blue shift and softer during a redshift....~~ Edit: D'oh, of course it would have to do the same thing. 
You can't have a signal come in faster than it's being transmitted from the source.", "provenance": null }, { "answer": "In both cases you would hear the signal speed up or slow down, and in both cases you would have to adjust your tuner up or down. Both the modulating (high frequency) and modulated (voice or music) signals are functions of linear-time (in opposition to time squared or the log of time), so if a 500THz red light gets shifted down to 400THz, your radio station at 100MHz will be shifted down to 80MHz, and the 1KHz test tone will shift down to 800Hz.\n\nModulation is cool because the modulated signal is still present in its original form. In the case of AM, you can pick it out by connecting the peaks or valleys of the modulated wave. In FM, you can pick it out by shifting the spectrum around -- which is incidentally exactly what happens when you have red or blue shift!\n\nYou could almost say that traveling at a certain rate is the same thing as single-sideband FM modulation!", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "272522", "title": "FM broadcast band", "section": "Section::::CCIR bandplan.:Center frequencies.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 297, "text": "BULLET::::- Some digitally-tuned FM radios are unable to tune using 50 kHz or even 100 kHz increments. Therefore, when traveling abroad, stations that broadcast on certain frequencies using such increments may not be heard clearly. 
This problem will not affect reception on an analog-tuned radio.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52141", "title": "Inversion (meteorology)", "section": "Section::::Consequences.:Electromagnetic radiation (radio and television).\n", "start_paragraph_id": 91, "start_character": 0, "end_paragraph_id": 91, "end_character": 641, "text": "Very high frequency radio waves can be refracted by inversions, making it possible to hear FM radio or watch VHF low-band television broadcasts from long distances on foggy nights. The signal, which would normally be refracted up and away from the ground-based antenna, is instead refracted down towards the earth by the temperature-inversion boundary layer. This phenomenon is called tropospheric ducting. Along coast lines during Autumn and Spring, due to multiple stations being simultaneously present because of reduced propagation losses, many FM radio stations are plagued by severe signal degradation causing them to sound scrambled.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "296102", "title": "Specific Area Message Encoding", "section": "Section::::Format of digital parts.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 389, "text": "Since there is no error correction, the digital part of a SAME message is transmitted three times, so that decoders can pick \"best two out of three\" for each byte, thereby eliminating most errors which can cause an activation to fail. 
However, consumer weather radio receivers often activate (unmute the audio) after hearing only one out of the three headers (\"with a significant delay\").\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22072345", "title": "Photon Doppler velocimetry", "section": "Section::::Theory.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 363, "text": "If the shifted return light is then interfered with the original source, the resulting wave will have a beat frequency in the range of a few GHz. This beat frequency is slow enough that it can be monitored with a simple photo-detector and high speed oscilloscope. By recording the beat frequency over time, a complete velocity history of the surface is obtained.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "98132", "title": "Radio wave", "section": "Section::::Speed, wavelength, and frequency.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 271, "text": "Radio waves in a vacuum travel at the speed of light. When passing through a material medium, they are slowed according to that object's permeability and permittivity. Air is thin enough that in the Earth's atmosphere radio waves travel very close to the speed of light.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41658930", "title": "Pip-squeak", "section": "Section::::Description.:Broadcaster.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 439, "text": "A separate radio control switch stopped the radio signal from broadcasting while the clock continued to move. This allowed the pilot to set up the system early in the flight, and then turn it off when better communications were needed, like in combat. The system could then be turned on again at any time, with the clock still in the proper position. 
Sector Commanders could ask pilots to turn it on by asking \"Is your Cockerel crowing?\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "271195", "title": "Radio propagation", "section": "Section::::Free space propagation.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 276, "text": "Radio waves in vacuum travel at the speed of light. The Earth's atmosphere is thin enough that radio waves in the atmosphere travel very close to the speed of light, but variations in density and temperature can cause some slight refraction (bending) of waves over distances.\n", "bleu_score": null, "meta": null } ] } ]
null
bzohlr
What makes an explosive effective at different jobs?
[ { "answer": "Armor penetration effectiveness is usually achieved by concentrating the blast into a small area by what's known as a [shaped charge](_URL_0_). \n\nOther common explosives are [gun powder/black powder and flash powder](_URL_1_) (common in the fireworks industry). The big difference is the speed at which they burn. You have to confine gun powder into a small area in order for it to be effective (such as in bullets), and even then it's still a relatively small explosion. Flash powder, on the other hand, is known as a high explosive because it converts to a gas incredibly fast. It's the difference between a loud pop of gun powder and the fragmenting explosion that flash powder creates.\n\nHopefully someone else can provide more in-depth explanations for the \"why\".", "provenance": null }, { "answer": "Armor penetration and launching have been covered in the previous post. With “anti-personnel” charges it’s a little different. With explosions that cause damage to people, it’s usually not the explosion itself, but the shrapnel from said explosion that causes the damage.", "provenance": null }, { "answer": "When it comes to demolishing buildings, it's not really down to the type of explosive used. You can use any high explosive for the job. The challenge/skill with demolition is figuring out which parts of the building are actually holding it up (load bearing), then putting enough of your explosive to destroy those parts, and destroying them in a way that causes the building to collapse inward on itself instead of falling to the side, which would damage stuff around it.", "provenance": null }, { "answer": "Something not mentioned yet is that different explosives have differing degrees of 'brisance'. 
Think of it as the 'shattering capability' - one explosion might 'push' an object away at high speed, where another might shatter it into tiny fragments but not necessarily propel those fragments as fast.\n\nC4 has extremely high brisance for antipersonnel and anti-armour, and gunpowder has low brisance for launching objects.", "provenance": null }, { "answer": "Application of force is the really simple answer. Explosives in construction are designed to \"break up\" or loosen material so that it can be removed with an excavator. Building demolitions are incredibly precise operations, but the same concept is being used. They're just very carefully applying explosive force to the right columns and beams to bring the structure down safely so that the parts can be scooped up and hauled off. In this case they apply force in a sort of \"all around me\" form, a pressure bubble that can fracture concrete and compacted earth.\n\nArmor piercing is usually not done by the actual explosive. An RPG uses a conical sheet of copper and a shaped charge to force a liquid jet of copper through armor plates. This copper is so hot that it will instantly cook anyone inside the sealed compartment of the vehicle by heating the air. From what I've seen, most AP munitions follow a similar principle. On tanks for instance, their armor piercing Sabot rounds are really just a casing for a huge metal rod (I think tungsten? Maybe depleted uranium?) that just YEETs and applies a bunch of force into the armor in an attempt to poke a hole in it. Sabot rounds actually don't use any explosives beyond the charge used to fire them and shed the casing. The kinetic energy involved in these sorts of munitions is so great that most humans would suffer fatal injuries just from the shrapnel produced on the other end of the round. 
The important point for both these types of AP munitions is that the force is applied in a very small area, like the diameter of a quarter or something, and is supported by incredibly high kinetic energy and heat.\n\nThis is why angular and sloped armor is so effective at preventing armor piercing munitions. A slope would redirect a lot of the energy of your tungsten rod, or cause an RPG to glance and lose a lot of the effectiveness of its copper jet.", "provenance": null }, { "answer": "Quarry/Mine Blaster here.\nSome explosives have a very high velocity but lower gas content. They have a \"high brisance\" which cracks material but doesn't throw it. Think of it as a very fast slap.\n\nOther explosives have a lower velocity but create a very large amount of gas very quickly. They don't shatter the material as much, but they throw and heave it further, which also aids in breaking the material. Think of this as a large shove instead of a fast slap.", "provenance": null }, { "answer": "Building demolitions are done by staggering out the explosions to build a bigger shockwave with fewer explosives. The idea is to time the second explosive so that it detonates at the same moment that the shockwave from the first explosive reaches it. Then the third is timed to coincide with the arrival of this new, larger shockwave. \n\nOn top of that you have far more control over where and how the building collapses with a series of charges than with a single big boom. You can collapse a building inward to minimize the total footprint, or you can collapse it to the side, away from other structures. \n\nWhen done properly, explosive demolitions reduce the total work enormously and give you a lot of control over how the structure comes down. 
When done wrong you may end up with a partial collapse or no collapse, which can be very dangerous and unpredictable.", "provenance": null }, { "answer": "Keep in mind that, various utility aside, the most desired property of any explosive is that it doesn't explode until you want it to. There are many more powerful explosives than TNT and C4, but the reason those are popular with construction and military crews is that they are stable in regular conditions and don't blow up the workers on the way to the job.", "provenance": null }, { "answer": "An explosion creates force; it all depends on how you utilize it. \n\nFor example, High Explosive Anti-Tank type weapons such as RPGs focus the explosion into a single point, creating a very hot jet used for penetration. \n\nFor heavy armour, the Brits came up with HESH (High Explosive Squash Head), in which the tank round would hit the armour, the explosive would flatten out against it and then detonate, causing the shockwave to travel through the armour and make the inner face shatter, internally sending bits of metal flying within the tank. \n\nThe Americans used thin metal on their High Explosive shells and lots of explosive filler, the thought process being you kill a guy with the actual explosion. This was very useful for destroying buildings and other material as it was simply a large amount of force.\n\nFor anti-infantry, the Soviets used High Explosive Fragmentation. Basically a giant grenade, the shell had a moderate lining designed to cause maximum shrapnel when detonated, meaning it would be able to deal damage beyond the shockwave radius by expelling shrapnel.", "provenance": null }, { "answer": "For the most part the explosives are the same. The means of delivery is what changes. Building = detcord and shaped charges. Anti-personnel = shrapnel. Anti-vehicle = shaped charge with metal cone. This is not 100% but you get the point. 
Source....many many years ago I was with an Army bomb unit.", "provenance": null }, { "answer": "For destructive purposes the main considerations are amount, placement, and, in some cases, the shape of the explosives. The explosives can be shaped as a chevron to concentrate the force, to limit collateral damage and decrease the amount of explosives needed.", "provenance": null }, { "answer": "The brisance of an explosive, a measure of its explosive pressure, would give you an idea of power, in a general sense. However, as others have pointed out, an explosive is oftentimes used as a tool in conjunction with other tools or techniques to achieve a more specific purpose.\n\nFor example, an explosive charge on the back of a copper plate results in the copper plate liquefying and being blown outwards. If this is shaped over a railroad rail, you can effectively cut through it like butter. Another example is using an explosive to compress a magnetic field, which can result in a MASSIVE output of millions of amps (explosively-pumped flux compressors, used for generating huge EMPs).", "provenance": null }, { "answer": "Detonation velocity is heavily used as a metric because it is very easy to measure, and it reasonably correlates to other performance metrics. A lot of these correlations were established fairly early in the history of explosives engineering and have stuck around because they work.\n\nIn an ideal world, if you want to achieve destructive effect, you use the most powerful explosive available to you; in reality you are cost/sensitivity/packaging limited. 
So you get what you can.\n\nFor blast applications: total energetic output per gram of HE seems to dominate (and is why metallized explosives are dominant in the most modern applications).\n\nFor demolitions: really any explosive will do; it's where you put it that matters, and how efficiently you use the bulk you have.\n\nFor armor penetration, you use shaped charges, which require a high VOD and a high Gurney velocity.\n\nExplosive acceleration of objects that can survive it (i.e. chunks of metal) requires a high Gurney velocity (actually you need a thermodynamically high PV isentrope), and this parameter correlates with detonation velocity.\n\nA lot of the comments in this thread are absolute junk, as an FYI.", "provenance": null }, { "answer": "Several factors:\n\nDetonation velocity, explosion shape, shrapnel, placement, and confinement.\n\nI'll go through each of your particular questions one by one.\n\nSo there are 2 ways of thinking about building destruction: demolition, and destruction.\n\nFor demolition, it's all about placement. Holes are drilled in critical support columns and explosives are inserted directly into the holes. The size of these explosives is usually relatively small, but they're highly contained and will cause critical damage to the support.\n\nFor destruction, explosion velocity is critical. We're talking about a bomb that's just planted somewhere to do damage. Typically you want a slower, more \"rumbling\" explosion that will be more likely to damage stone and concrete. ANFO is really good for this, and is why it causes such utter devastation when it goes off.\n\nAnti-personnel effect is typically achieved by shrapnel. Essentially you want to shoot a gun in every direction at once, with as many little pieces moving as fast as possible. So you want a high velocity explosion with a breakable shell around it that will go in every direction upon detonation. 
This is exactly why the classic \"pineapple\" grenade has those bumps on it. They're weak points that break apart.\n\nArmor penetration is all about the shape of the explosion and what it's propelling. Most armor penetrating explosives have what's known as a shaped charge. This is a cone-shaped, high-velocity explosive that is covered by a metal (usually copper due to its high heat conductivity). When it detonates, it fires a spear of superheated copper into whatever it's pointed at, usually piercing armor and raising the inside of whatever it hit to several tens of thousands of degrees for a short period of time.\n\nLaunching a projectile is all about confinement. Basically you don't want gas escaping around the projectile, and you want all of that energy to be transferred to the projectile. However, you can't have **too** large of a charge, as that may damage or burst the barrel. Longer barrels typically allow the gas to expand behind the projectile for a longer period of time, which will usually result in higher muzzle velocity. Although there's a limit to this: a longer barrel doesn't always mean a faster or more accurate bullet.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "10192", "title": "Explosive", "section": "Section::::Properties of explosive materials.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 311, "text": "To determine the suitability of an explosive substance for a particular use, its physical properties must first be known. The usefulness of an explosive can only be appreciated when the properties and the factors affecting them are fully understood. 
Some of the more important characteristics are listed below:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "265112", "title": "Improvised explosive device", "section": "Section::::Counterefforts.:Detection and disarmament.\n", "start_paragraph_id": 76, "start_character": 0, "end_paragraph_id": 76, "end_character": 1246, "text": "Because the components of these devices are being used in a manner not intended by their manufacturer, and because the method of producing the explosion is limited only by the science and imagination of the perpetrator, it is not possible to follow a step-by-step guide to detect and disarm a device that an individual has only recently developed. As such, explosive ordnance disposal (IEDD) operators must be able to fall back on their extensive knowledge of the first principles of explosives and ammunition, to try and deduce what the perpetrator has done, and only then to render it safe and dispose of or exploit the device. Beyond this, as the stakes increase and IEDs are emplaced not only to achieve the direct effect, but to deliberately target IEDD operators and cordon personnel, the IEDD operator needs to have a deep understanding of tactics to ensure he is neither setting up any of his team or the cordon troops for an attack, nor walking into one himself. The presence of chemical, biological, radiological, or nuclear (CBRN) material in an IED requires additional precautions. As with other missions, the EOD operator provides the area commander with an assessment of the situation and of support needed to complete the mission.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25016560", "title": "Sensitivity (explosives)", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 441, "text": "Sensitivity, stability and brisance are three of the most significant properties of explosives that affect their use and application. 
All explosive compounds have a certain amount of energy required to initiate. If an explosive is too sensitive, it may go off accidentally. A safer explosive is less sensitive and will not explode if accidentally dropped or mishandled. However, such explosives are more difficult to initiate intentionally.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10192", "title": "Explosive", "section": "Section::::Properties of explosive materials.:Sensitivity.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 400, "text": "Sensitivity is an important consideration in selecting an explosive for a particular purpose. The explosive in an armor-piercing projectile must be relatively insensitive, or the shock of impact would cause it to detonate before it penetrated to the point desired. The explosive lenses around nuclear charges are also designed to be highly insensitive, to minimize the risk of accidental detonation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2277296", "title": "Phlegmatized explosive", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 323, "text": "Explosive compounds may exist in material states that limit their application. For instance, nitroglycerin is normally an oily liquid. Phlegmatization of nitroglycerin allows it to be formed as a solid, commonly known as dynamite. It also allows the liquid, which is very sensitive to shock, to be handled more vigorously.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "46411064", "title": "Staged Detonation", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 370, "text": "It does not necessarily involve different explosive compounds. 
Because the properties of an explosive compound is in some degree related to its density, which is a result of with what pressure it is compressed during manufacture, the stages in a staged detonation may also come from each section having been pressed to a different density, giving it varying properties.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2242641", "title": "Shock sensitivity", "section": "Section::::Sensitivities vary widely.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 427, "text": "Still less sensitive materials such as blasting agents like ANFO, or shell fillings like Composition B, are so insensitive that the impulse from the detonator must be amplified by an explosive booster charge to secure reliable detonation. Some polymer bonded explosives — especially those based on TATB — are designed for use in insensitive munitions, which are unlikely to detonate even if struck by another explosive weapon.\n", "bleu_score": null, "meta": null } ] } ]
null
1q7zq8
why do wombats poop cubes?
[ { "answer": "Wombats poop on top of rocks and logs near their burrows. The reason for this is not to keep intruders away, but to use as an indicator to know where their home is. Wombats have terrible eyesight; however, they have an extraordinary sense of smell. The reason for these Rubik's-cube turds is that if wombats are to effectively smell their way home, their droppings must remain where they fell, hence the fact that their excretion is cubic, not round. Who wants their shit rolling away?", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2859751", "title": "Pouch (marsupial)", "section": "Section::::Variations.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 910, "text": "In wombats and marsupial moles, the pouch opens backward or down. Backwards facing pouches would not work well in kangaroos or opossums as their young would readily fall out. Similarly, forward-facing pouches would not work well for wombats and marsupial moles as they both dig extensively underground. Their pouches would fill up with dirt and suffocate the developing young. Kangaroo mothers will lick their pouches clean before the joey crawls inside. Kangaroo pouches are sticky to support their young joey. Koalas are unable to clean out their pouches since they face backwards, so just prior to giving birth to the young koala joey, a self-cleaning system is activated, secreting droplets of an anti-microbial liquid that cleans it out. 
In a relatively short time, the cleansing droplets clean out all of the crusty material left inside, leaving an almost sterile nursery ready to receive the tiny joey.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33864", "title": "Wombat", "section": "Section::::Characteristics.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 1046, "text": "Wombats dig extensive burrow systems with their rodent-like front teeth and powerful claws. One distinctive adaptation of wombats is their backward pouch. The advantage of a backward-facing pouch is that when digging, the wombat does not gather soil in its pouch over its young. Although mainly crepuscular and nocturnal, wombats may also venture out to feed on cool or overcast days. They are not commonly seen, but leave ample evidence of their passage, treating fences as minor inconveniences to be gone through or under, and leaving distinctive cubic feces. As wombats arrange these feces to mark territories and attract mates, it is believed that the cubic shape makes them more stackable and less likely to roll, which gives this shape a biological advantage. The method by which the wombat produces them is not well understood, but it is believed that the wombat intestine stretches preferentially at the walls. The adult wombat produces between 80 and 100 pieces of feces in a single night, and four to eight pieces each bowel movement.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2859751", "title": "Pouch (marsupial)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 618, "text": "The pouch is a distinguishing feature of female marsupials and monotremes (and rarely in the males as in the water opossum and the extinct thylacine); the name marsupial is derived from the Latin \"marsupium\", meaning \"pouch\". Marsupials give birth to a live but relatively undeveloped fetus called a joey. 
When the joey is born it crawls from inside the mother to the pouch. The pouch is a fold of skin with a single opening that covers the teats. Inside the pouch, the blind offspring attaches itself to one of the mother’s teats and remains attached for as long as it takes to grow and develop to a juvenile stage. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "50338893", "title": "Primo Toys", "section": "Section::::Cubetto Playset.:Influences and research.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 636, "text": "The concept for Cubetto was inspired by Italian physician and educator Maria Montessori's early learning methods and MIT's programming language LOGO, which was designed by a team directed by Seymour Papert in the 1960s as a way to teach children the basic principles of coding. The square \"ground\" robot that rotates only through 90 degrees while roaming a checkerboard field is similar to the screen robots (NAKIs) of the pioneering educational robotics language OZNAKI. Cubetto overall is a radical innovation, but its use of coloured pieces inserted in slots/holes for robot control and training is very similar to the TORTIS system\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15521989", "title": "Mash and Peas", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 519, "text": "Mash and Peas was a parodic sketch show written by and starring Matt Lucas & David Walliams. Their first television work together, it originally aired on Paramount Comedy 1 and Channel 4 between 1996 and 1997. The episodes were repeated before the channel's relaunch in 1999. The programme is made up of parodies of various television genres, introduced by the childish and incompetent Danny Mash (Lucas) and Gareth Peas (Walliams). 
Edgar Wright directed and long-standing collaborator Paul Putner appeared throughout.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "143577", "title": "The Wombles", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 457, "text": "The Wombles are fictional pointy-nosed, furry creatures created by Elisabeth Beresford and originally appearing in a series of children's novels from 1968. They live in burrows, where they aim to help the environment by collecting and recycling rubbish in creative ways. Although Wombles supposedly live in every country in the world, Beresford's stories are concerned with the lives of the inhabitants of the burrow on Wimbledon Common in London, England.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10727548", "title": "Evolution of mammals", "section": "Section::::Earliest crown mammals.:Metatheria.\n", "start_paragraph_id": 125, "start_character": 0, "end_paragraph_id": 125, "end_character": 690, "text": "BULLET::::- The newborn marsupial uses its forelimbs (with relatively strong hands) to climb to a nipple, which is usually in a pouch on the mother's belly. The mother feeds the baby by contracting muscles over her mammary glands, as the baby is too weak to suck. The newborn marsupial's need to use its forelimbs in climbing to the nipple was historically thought to have restricted metatherian evolution, as it was assumed that the forelimb couldn't become specialised intro structures like wings, hooves or flippers. However, several bandicoots, most notably the pig-footed bandicoot, have true hooves similar to those of placental ungulates, and several marsupial gliders have evolved.\n", "bleu_score": null, "meta": null } ] } ]
null
2zq7a7
how does the army/military wash clothes while deployed?
[ { "answer": "Modern armies generally deploy with enough support machinery to wash clothing, and hand washing is quite effective.\n\nHowever, everyone smells like ass anyway. Welcome to the Army.", "provenance": null }, { "answer": "At the laundry. Deployed at base you still sleep in a bed/cot. Deployed in the field, you don't wash.", "provenance": null }, { "answer": "Depends. If they are at, let's say, Camp Leatherneck, they just drop it off and pick it up the next day. When I was deployed I was at a FOB without running water, so we used 5 gallon buckets and scrub brushes, and hung them out to dry, which was the worst since sandstorms would come out of nowhere and make your wet clothes muddy.", "provenance": null }, { "answer": "They do have laundry areas, with machines and detergent and everything.\n\nIf you have a base with 1500 people working 12+ hours a day on it, there are some \"creature comforts\" that are going to be there. They are going to have laundry machines, they are going to have kitchens, they are going to have a recreational area, they are going to have electricity and relatively running water and some buildings with fans/air conditioning.\n\nIf you have a bunch of trained people used to first world amenities, then stick them in a place without them, the best way to get them to work efficiently and not gripe is to give them a few comforts from home.", "provenance": null }, { "answer": "You turn them in to 3rd party nationals and pick them up 3 days later, folded and clean. It's actually the best part of being deployed.", "provenance": null }, { "answer": "Shower first with your clothes on, wash everything as you would normally shower with clothes off. Then take the fatigues off and wash again. Not ideal, but it worked. This was Royal Army BTW.", "provenance": null }, { "answer": "5 gallon bucket when you're at a FOB without running water.", "provenance": null }, { "answer": "Ex-US Army here. Typically we dropped them off at the \"Cleaners\". 
The clothes come back smelling like nothing at all, folded, and wrapped in plastic. This took anywhere from 24-72 hours depending on where we were.\n\nIf I wanted something immediately I just took a box of Tide to the sink and hand washed it. Never had to go outside of any sort of base, but I had a friend that did. He got dysentery his first day out and ended up shitting on himself. He had to wear that for 3 days until they came back. ", "provenance": null }, { "answer": "On the sub we had 2 washers and dryers. For 150 people. Some people don't wash their clothes, but you were given a schedule to do it once a week. ", "provenance": null }, { "answer": "From my experience, there are four situations.\n\n1. You wash it yourself. Depending on where you get stationed, a pretty cherry camp will have a laundry room provided either by the Army, MWR, or locals.\n\n2. Speaking of locals, sometimes there will be a laundry run by the hadjis. Toss a third of your clothes in a bag. Don't put too much; you have to be careful of delays or any other reason you don't get your laundry back, and you don't want the same thing for a week or two straight. Even recycling dirty clothes is better than seeing salt stains on your uniforms. Drop the bag off and pick it up a few days later.\n\n3. Army has these... well shit, I don't know what they're called. Part of a quartermaster unit, they drag around water bags and set up showers and laundry services. Just like the hadjis, except not as reliable as far as losing your shit goes (sorry, I had to).\n\n4. And then the warning from method 2: sometimes you just don't. You're at a shitty FOB, you're on a shitty convoy, or god's in a shitty mood. You just recycle dirty clothes and worry you're creating a new bacteria that's going to destroy mankind from the bottom of your laundry bag by hanging it on the laundry line outside of your tent.", "provenance": null }, { "answer": "No one has mentioned the Navy. Ships have laundry rooms. 
Size depends on size of ship and size of crew. I was on an amphibious ship, LSD type. Crew was ~350. IIRC we had 5 washers and 5 dryers. There wasn't a rotation, meaning if it was open you could use it. That's all if you wanted to do your own laundry. There is a ship's laundry that the crew isn't allowed to walk into themselves and use. Crew members work down there and do laundry all day. Someone is assigned to clean berthing and take all laundry down and pick it up and bring it back on a regular basis. If you wanted to send your uniform to that laundry, go ahead. But it wasn't recommended on my ship. That laundry was usually used for linen.", "provenance": null }, { "answer": "I got out of the Infantry in '82; reckon things have changed a lot. Back then, when you were in the field you got nasty and kept getting nastier till you got home. Didn't matter to us, you go full nose blind after a while. Used to seriously gag out our old ladies when we got back though. You could always tell: you see a couple going down the road with the windows down and the temps 5 below 0, you know she's about to hurl just tryin to get his nasty ass back to the house!", "provenance": null }, { "answer": "Just for the record, for the first six months I was in Iraq in 2003 we washed our clothes ourselves. We had one of those old fashioned wash boards and a bucket, and hung our shit up to dry on clothes lines made out of 550 cord. Absolute worst part of the deployment for me (except getting shot at... Maybe). ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "47813845", "title": "Washboard (laundry)", "section": "Section::::Description and use.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 592, "text": "Many parts of the world still use washboards for washing clothes. 
Clothes are soaked in hot soapy water in a washtub or sink, then squeezed and rubbed against the ridged surface of the washboard to force the cleansing fluid through the cloth to carry away dirt. Washboards may also be used for washing in a river, with or without soap. Then the clothes are rinsed. The rubbing has a similar effect to beating the clothes and household linen on rocks, an ancient method, but is less abrasive. Military personnel often use washboards to do their laundry when no local laundry facilities exist.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41191060", "title": "Wash rack", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 604, "text": "The main purpose of a wash rack is to clean equipment while protecting the environment from contaminates commonly found on construction, maintenance and military vehicles or equipment. To comply with U.S. Department of Agriculture (USDA) regulations, which are intended to prevent soil-borne insects or other potentially harmful organisms from entering the United States, U.S. military vehicles and equipment must be thoroughly washed before being shipped home. As such, wash racks are commonly used by the US military to ensure vehicles are clean and safe before they are brought back into the country.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5958223", "title": "4th Brigade Combat Team, 2nd Infantry Division", "section": "Section::::Iraq 2009–10.:Last patrol.\n", "start_paragraph_id": 66, "start_character": 0, "end_paragraph_id": 66, "end_character": 270, "text": "Upon arrival at the final destination in Camp Virginia, Kuwait, soldiers stripped their Strykers and prepared them for the wash-racks. 
At the site, soldiers and civilian contractors spent approximately 32 hours per vehicle, completely cleaning them both inside and out.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5814133", "title": "Field shower", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 497, "text": "A field shower is equipment used to provide sanitation and decontamination facilities to military personnel, equipment and vehicles using various liquids, including water in the field of operations. Usually the showering facility is provided by the combat service support elements or decontamination units to combat units deployed away from permanent properties that offer the facilities, or when combat units have been exposed to hazardous chemicals and need to quickly decontaminate themselves.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23962876", "title": "Forward Operating Base Paliwoda", "section": "Section::::History.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 759, "text": "Forward Operating Base Paliwoda, like many bases in Iraq, has portable shower units for soldiers to use. But at Paliwoda, persistent problems with the makeshift electrical system installed by an Iraqi contractor mean the water often is cold if it is running at all. The MWR (Morale, Welfare and Relaxation) building at Paliwoda began with about 15 computers and 10 telephones for soldiers to communicate with family at home, a second-hand ping pong table, a television, and a few board games; it has since been reduced to the telephones and computers. Once a day a convoy delivers food from the Kellogg, Brown and Root chow hall at Anaconda, unless the unit in control of the FOB has cooks attached to them. 
Also there is a gym with weight lifting equipment.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4116322", "title": "Yad Sarah", "section": "Section::::Services.:Services for the homebound.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 233, "text": "BULLET::::- Laundry service: Volunteers pick up soiled linens and bedclothes from the homes of incontinent individuals, wash and iron them, and return them. This service is available in Israel's three major cities for a nominal fee.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "50548816", "title": "Narodny Tyl", "section": "Section::::Activity.:Medical Narodny Tyl.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 377, "text": "Medical unit sends soldiers medicines for general purpose (anti-inflammatory and antipyretic agents, antibiotics) and specialized individual first aid kits, staffed nearly NATO standards (military IFAKs). The structure includes hemostatic Celox, Combat-Application-Tourniquets, bandages... As of December 2014 the troops were transferred more than five thousand of these kits.\n", "bleu_score": null, "meta": null } ] } ]
null
7m2k57
in movie scenes depicting large crowds or groups, do the extras usually have scripted dialogue or do they ad-lib?
[ { "answer": "They aren't talking at all. They are just pretending to talk. The sound is added in later. If they really talked it would interfere with the main actors dialogue recording. Sometimes directors will give them some motivation (like ask one couple in the background to pretend to argue or something). Even if they are working or doing something they are asked to do it completely silently. \n\nSource: I work in the film industry. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "99545", "title": "This Is Spinal Tap", "section": "Section::::Production.:Development.\n", "start_paragraph_id": 54, "start_character": 0, "end_paragraph_id": 54, "end_character": 314, "text": "Virtually all dialogue in the film is improvised. Actors were given outlines indicating where scenes would begin and end and character information necessary to avoid contradictions, but everything else came from the actors. As often as possible, the first take was used in the film, to capture natural reactions. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1306526", "title": "Prince Charles Cinema", "section": "Section::::History.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 968, "text": "The cinema was used as the setting for a number of stunts in the British sketch show, \"Trigger Happy TV\". Filming was facilitated by the cinema having a balcony from which aerial shots could be taken, and the apparent willingness of the management to subject their patrons to some hilarious (and ultimately harmless) pranks. Various sketches involved the show's presenter, Dom Joly, along with extras from the show, annoying cinema-goers by dressing up as severely obese people trying to squeeze past whilst spilling popcorn from massively oversized buckets, sitting in front of them with enormous fake wigs, and dressing up as Beefeaters taking up whole rows of seats. 
Other more bizarre incidents involved the use of animal costumes. In one scene two rabbits were seen simulating sexual intercourse, and in another Joly dressed up as a snake and slithered around on the floor, as a supposed addition to a screening advising people to be vigilant about pick-pockets.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "427173", "title": "Distance (2001 film)", "section": "Section::::Production.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 339, "text": "There was a lot of improvisation when it came to shooting the film. Each of the main actors had information withheld about the other characters' backstory and were instead given the scenarios in which the characters would interact. The relationships between the characters were thus loosely built upon the actors' real world interactions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28609281", "title": "Star Wars Uncut", "section": "Section::::Production.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 469, "text": "Many of the sequences are filmed in deliberately crude, low-budget or otherwise comical manners, and the actors do not always resemble the original cast. One scene is a stop-motion sequence using Lego \"Star Wars\" figurines. Another mimics the animation style of the 1968 Beatles film \"Yellow Submarine\". Others are parodies of specific pop culture subgenres, such as anime and grindhouse films. 
\"Star Wars\" Pez candy dispensers are featured prominently in some scenes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2760519", "title": "Rear projection effect", "section": "Section::::Technique.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 481, "text": "These so-called \"process shots\" were widely used to film actors as if they were inside a moving vehicle, who were, in reality, in a vehicle mock-up on a soundstage. In these cases the motion of the backdrop film and foreground actors and props were often different due to the lack of steadicam-like imaging from the moving vehicles used to produce the plate. This was most noticeable as bumps and jarring motions of the background image that would not be duplicated by the actors.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9168916", "title": "Conversations with Other Women", "section": "Section::::Split screen.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 541, "text": "At a panel on acting at the Telluride Film Festival, the actors spoke of the challenge of working in a two-camera system. Unlike traditionally shot and cut films, the actors knew that all moments of a take could end up on screen and thus 'acted through' every take. The actors were constantly 'in the moment'. The resulting film presents the actors' work in the way musicians play in a duet, with action, dialogue and reaction running on both sides of the frame in real time. 
The movie presents two remarkable achievements in screen acting.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2083607", "title": "Still Smokin (film)", "section": "Section::::Plot.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 486, "text": "For much of the film, they do versions of many of their recorded characters (with the exception of Pedro and the Man, whom they had already portrayed on film in \"Up in Smoke\") and several new skits. As most of the sketches featured did not directly fit into the narrative, they are instead presented as cutaways. The final act of the movie features their live performance (filmed at the Tuschinski Theater in September 1982), climaxing with a version of the \"Ralph And Herbie\" routine.\n", "bleu_score": null, "meta": null } ] } ]
null
rmxpp
Do animals get tired of eating the same food day after day?
[ { "answer": "Modern humans actually require a great multitude of different vitamins, minerals, and nutrients in order to survive. A lion can eat meat all day every day and never get scurvy. For humans, however, it makes sense from an evolutionary standpoint, given that we have such a long list of required nutrients, that we would evolve to crave variety in food. \n\nI'm not a scientist but off the top of my head I can think of quite a handful of dire repercussions associated with vitamin/mineral deficiency, such as: Vitamin C deficiency (scurvy), Vitamin B12 deficiency (anemia), calcium deficiency (osteoporosis), protein deficiency (too many to list, ask any vegetarian that doesn't understand nutrition what this is like), and so on. If you eat fruit exclusively you'll lack protein, if you eat meat exclusively you'll lack fiber and vitamins, etc.\n\n", "provenance": null }, { "answer": "To clear this up a little bit: The dry food we feed our pets contains a good amount of [varied nutrients, minerals and vitamins](_URL_0_) that animals need.\n\nIf the world were to go under and survival were the main priority, you'd be better off salvaging dried cat and dog foods than almost anything else.", "provenance": null }, { "answer": "Another point to be considered is the number of taste buds the animal has. The dog, for example, has about 1/6 the number of taste buds that a human has. Another interesting example is the chicken, which has somewhere around 16 taste buds. ", "provenance": null }, { "answer": "A 2006 paper titled \"Brain mechanisms underlying flavour and appetite\" by Edmund Rolls in the Philosophical Transactions of the Royal Society shows that non-human animals (in this case, macaques) definitely experience sensory satiation. I can post some of my notes on the paper, but I was focused on functional neuroanatomy at the time rather than the topic at hand. 
", "provenance": null }, { "answer": "I know that certain animals such as land hermit crabs preferentially choose foods that they have not eaten recently ([source](_URL_0_)). This seems to be an adaptation to both help ensure that the crab gets all the nutrients it can, and to avoid eating bad food too often (I would imagine this is a trait that may be found in other scavenging \"custodial\" species of animals).", "provenance": null }, { "answer": "Animals are able to detect deficiency in indispensable amino acids and adjust which foods they eat accordingly.\n\nHao, S. et al. Uncharged tRNA and sensing of amino acid deficiency in mammalian piriform cortex. Science 307, 1776–1778 (2005)\n\n_URL_0_", "provenance": null }, { "answer": "There is an episode of [radiolab](_URL_0_) that explores zoos, and there is a story about what animals are fed in zoos and what has changed over the last 20 years considering what animals like/don't like to eat. I would recommend giving it a listen; it's a great episode.", "provenance": null }, { "answer": "I did my doctorate in neuronal control of appetite/obesity. We would put mice on a high fat diet (bright pink, made up of ~60% fat, some carbs+protein) instead of their normal 'chow' (like dry biscuits, mostly carbs and protein). The mice would go mental, they loved having a new food. Wait a few weeks, switch back, same deal.", "provenance": null }, { "answer": "Dogs like what food other dogs like. \n\nThey're suckers for peer pressure. \n\nOr maybe it's just common sense: if one dog likes this food, then other dogs know it is safe to eat. \n\n*Source: Dogs acquire food preferences from interacting with recently fed conspecifics.*\n\nSocial transmission of food preferences has been documented in many species including humans, rodents, and birds. In the current experiment, 12 pairs of domestic dogs (Canis familiaris) were utilized. Within each pair, one dog (the demonstrator) was fed dry dog food flavored with either basil or thyme. 
The second dog (the observer) interacted with one demonstrator for 10 min before being given an equal amount of both flavored foods. Observers exhibited a significant preference for the flavored diet consumed by their demonstrators, indicating that dogs, like rats, prefer foods smelled on a conspecific's breath.\n\n[PMID: 17049752](_URL_0_)\n\n", "provenance": null }, { "answer": "Alright, here are a few points to answer your question and add a little onto it (I work in the pet food industry):\n\nYes, they do get tired of the same food. That's where table scraps come in handy: although they spoil the pet, they add a little bit of variety and interest into what they're eating. If you don't want to do that, canned food is the best way to go; just add a scoop on top every day of something different.\n\nThat being said: it's necessary to give animals different proteins so their bodies can create the different immune responses to those proteins. It gives them energy, and in puppies and kittens, helps with maintaining their general health later in life. It's a fallacy that dogs and cats should have the same food for their whole life. (It's also a fallacy that kibble helps their teeth, but I digress.) We recommend people change up proteins every bag, if possible. The more frequent, the better.\n\nAnswering a few posts below: my store only sells human-grade pet food. Many of the dog food companies owned by larger human food companies, such as Mars' Pedigree and Royal Canin, use the scraps that are indigestible to humans and put them into their dog food lines. (Personally, we avoid anything like that; anything in our store is going to be human grade and, heck, looks better than half the stuff I eat.) Avoid essentially everything you see on TV, and especially Science/Prescription Diet.\n\nI have many, many points that I'd love to get out there but it's too late and I'm tired. If anyone has any specific questions, feel free to ask. 
If mods need proof of some sort, PM me.", "provenance": null }, { "answer": "Dogs are much more likely to get tired of their food if you give them a lot of table scraps, things like pieces of fat or skin or bone are like candy to dogs, and you can spoil their taste for their normal kibble if you give in to the puppy dog eyes too often. I actually have this problem with my dog, for a good month or so he would refuse to eat his kibble unless we added things like dry sausage or turkey slices.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "728513", "title": "Laziness", "section": "Section::::Animals.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 1009, "text": "It is common for animals (even those like hummingbirds that have high energy needs) to forage for food until satiated, and then spend most of their time doing nothing, or at least nothing in particular. They seek to \"satisfice\" their needs rather than obtaining an optimal diet or habitat. Even diurnal animals, which have a limited amount of daylight in which to accomplish their tasks, follow this pattern. Social activity comes in a distant third to eating and resting for foraging animals. When more time must be spent foraging, animals are more likely to sacrifice time spent on aggressive behavior than time spent resting. Extremely efficient predators have more free time and thus often appear more lazy than relatively inept predators that have little free time. Beetles likewise seem to forage lazily due to a lack of foraging competitors. 
On the other hand, some animals, such as pigeons and rats, seem to prefer to respond for food rather than eat equally available \"free food\" in some conditions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14990054", "title": "Sleep in non-human animals", "section": "Section::::Mammals.:Duration.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 1397, "text": "As with birds, the main rule for mammals (with certain exceptions, see below) is that they have two essentially different stages of sleep: REM and NREM sleep (see above). Mammals' feeding habits are associated with their sleep length. The daily need for sleep is highest in carnivores, lower in omnivores and lowest in herbivores. Humans sleep less than many other omnivores but otherwise not unusually much or unusually little in comparison with other mammals. Many herbivores, like Ruminantia (such as cattle), spend much of their wake time in a state of drowsiness, which perhaps could partly explain their relatively low need for sleep. In herbivores, an inverse correlation is apparent between body mass and sleep length; big mammals sleep less than smaller ones. This correlation is thought to explain about 25% of the difference in sleep amount between different mammals. Also, the length of a particular sleep cycle is associated with the size of the animal; on average, bigger animals will have sleep cycles of longer durations than smaller animals. Sleep amount is also coupled to factors like basal metabolism, brain mass, and relative brain mass. The duration of sleep among species is also directly related to basal metabolic rate (BMR). 
Rats, which have a high BMR, sleep for up to 14 hours a day, whereas elephants and giraffes, which have lower BMRs, sleep only 3–4 hours per day.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1105264", "title": "Hermann's tortoise", "section": "Section::::Ecology.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 548, "text": "Early in the morning, the animals leave their nightly shelters, which are usually hollows protected by thick bushes or hedges, to bask in the sun and warm their bodies. They then roam about the Mediterranean meadows of their habitat in search of food. They determine which plants to eat by the sense of smell. (In captivity, they are known to eat dandelions, clover, and lettuce, as well as the leaves, flowers, and pods of almost all legumes.) In addition to leaves and flowers, the animals eat small amounts of fruits as supplementary nutrition.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5596641", "title": "Horse behavior", "section": "Section::::Eating patterns.\n", "start_paragraph_id": 53, "start_character": 0, "end_paragraph_id": 53, "end_character": 1033, "text": "Horses have a strong grazing instinct, preferring to spend most hours of the day eating forage. Horses and other equids evolved as grazing animals, adapted to eating small amounts of the same kind of food all day long. In the wild, the horse adapted to eating prairie grasses in semi-arid regions and traveling significant distances each day in order to obtain adequate nutrition. Thus, they are \"trickle eaters,\" meaning they have to have an almost constant supply of food to keep their digestive system working properly. Horses can become anxious or stressed if there are long periods of time between meals. When stabled, they do best when they are fed on a regular schedule; they are creatures of habit and easily upset by changes in routine. 
When horses are in a herd, their behavior is hierarchical; the higher-ranked animals in the herd eat and drink first. Low-status animals, that eat last, may not get enough food, and if there is little available feed, higher-ranking horses may keep lower-ranking ones from eating at all.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "198055", "title": "Common crane", "section": "Section::::Behaviour.:Diet.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 328, "text": "Animal foods become more important during the summer breeding season and may be the primary food source at that time of year, especially while regurgitating to young. Their animal foods are insects, especially dragonflies, and also snails, earthworms, crabs, spiders, millipedes, woodlice, amphibians, rodents, and small birds.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "861450", "title": "Scaly-sided merganser", "section": "Section::::Ecology.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 981, "text": "They spend most of the daylight time foraging, except around noon when they take some time to rest, preen and socialize at the river banks, where they also sleep. The food of \"M. squamatus\" consists of aquatic arthropods, frogs and small to medium-sized fish. Stonefly (Plecoptera) and Phryganeidae giant caddisfly larvae may constitute the bulk of its diet when available. Beetles and crustaceans are eaten less regularly, though the latter may be more important in autumn. As aquatic insect larvae hatch in the course of the summer, fish become more prominent in the diet. Favorite fish species include the dojo loach (\"Misgurnus anguillicaudatus\") and the lenok \"Brachymystax lenok\". More rarely eaten are such species as the lamprey \"Eudontomyzon morii\", the sculpin \"Mesocottus haitej\", or the Arctic grayling (\"Thymallus arcticus\"). 
Thus, they are opportunistic feeders; regarding fish, they will probably eat any species that has the correct elongated shape and small size.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10154337", "title": "Neuse River waterdog", "section": "Section::::Diet and feeding.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 317, "text": "Adults feed from the mouths of their retreats or move about in search of prey at night. Olfaction and sight play important roles in locating food. Animals are active away from cover at night, and also during raised water levels and increased turbidity. Individuals are inactive when stream temperatures exceed 18 °C.\n", "bleu_score": null, "meta": null } ] } ]
null
382www
why has the euro held its value for so many years, then all of the sudden dropped to almost the same value as the usd?
[ { "answer": "The US Fed has slowed down how much money they are putting into circulation (meaning interest rates are slowly going up).\n\nGreece and partially Spain have forced the EU to begin their own QE", "provenance": null }, { "answer": "I don't care how anyone ELI5s this question, but please do it using an analogy with candy or something. ", "provenance": null }, { "answer": "The comments about Quantitative Easing (QE) are somewhat correct. The central bank of Europe is an authority that regulates the Euro, while the U.S. has the Fed to regulate the dollar. One key difference with the Euro, however, is that countries in Europe are just that: still separate countries. This means that their economic policy needs are varied. Greece could use a massive devaluation in the Euro, while Germany is doing just fine where it is. Trying to operate under a common currency takes all control away from the central banks of these countries, and they must accept what the European Central Bank does. So Greece would like to see the Euro drop (their goods become cheaper to the rest of the world, comparing relative exchange rates). This increases tourism, exports, etc. The result is a boost in Greece's economy that they need. Therefore the ECB saw it necessary to provide this relief to the countries struggling in the EU, while the U.S. has started to raise interest rates.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "21553182", "title": "International status and usage of the euro", "section": "Section::::Reserve currency status.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 360, "text": "In the second term of 2007, euro as a reserve currency had reached a record level of 25.6% (a +0.8% increase from the year before) – at the expense of the US dollar, which dropped to 64.8% (a drop of 1.3% from the year before). 
By the end of 2007, shares of euro increased to 26.4% as the dollar slumped to its lowest level since records began in 1999, 63.8%.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21553182", "title": "International status and usage of the euro", "section": "Section::::Reserve currency status.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 372, "text": "Since its introduction, the euro has been the second most widely held international reserve currency after the US dollar. The euro inherited this status from the German mark, and since its introduction, it has increased its standing, mostly at the expense of the dollar. The increase of 4.4% in 2002 is due to the introduction of euro banknotes and coins in January 2002.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34201323", "title": "Brazilian real", "section": "Section::::History.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 710, "text": "Soon after its introduction, the real unexpectedly gained value against the U.S. dollar, due to large capital inflows in late 1994 and 1995. During that period it attained its maximum dollar value ever, about US$1.20. Between 1996 and 1998 the exchange rate was tightly controlled by the Central Bank of Brazil, so that the real depreciated slowly and smoothly in relation to the dollar, dropping from near 1:1 to about 1.2:1 by the end of 1998. In January 1999 the deterioration of the international markets, disrupted by the Russian default, forced the Central Bank, under its new president Arminio Fraga, to float the exchange rate. 
This decision produced a major devaluation, to a rate of almost R$2:US$1.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4044516", "title": "World currency", "section": "Section::::Historical and current world.:Euro.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 364, "text": ", the euro surpassed the dollar in the combined value of cash in circulation. The value of euro notes in circulation has risen to more than €610 billion, equivalent to US$800 billion at the exchange rates at the time. A 2016 report by the World Trade Organisation shows that the world's energy, food and services trade are made 60% with US dollar and 40% by euro.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9472", "title": "Euro", "section": "Section::::Exchange rates.:Against other major currencies.\n", "start_paragraph_id": 87, "start_character": 0, "end_paragraph_id": 87, "end_character": 902, "text": "The euro is the second-most widely held reserve currency after the U.S. dollar. After its introduction on 4 January 1999 its exchange rate against the other major currencies fell reaching its lowest exchange rates in 2000 (3 May vs Pound sterling, 25 October vs the U.S. dollar, 26 October vs Japanese yen). Afterwards it regained and its exchange rate reached its historical highest point in 2008 (15 July vs U.S. dollar, 23 July vs Japanese yen, 29 December vs Pound sterling). With the advent of the global financial crisis the euro initially fell, to regain later. Despite pressure due to the European sovereign-debt crisis the euro remained stable. 
In November 2011 the euro's exchange rate index – measured against currencies of the bloc's major trading partners – was trading almost two percent higher on the year, approximately at the same level as it was before the crisis kicked off in 2007.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "270673", "title": "Pound sterling", "section": "Section::::History.:Recent exchange rates.\n", "start_paragraph_id": 82, "start_character": 0, "end_paragraph_id": 82, "end_character": 809, "text": "The pound and the euro fluctuate in value against one another, although there may be correlation between movements in their respective exchange rates with other currencies such as the US dollar. Inflation concerns in the UK led the Bank of England to raise interest rates in late 2006 and 2007. This caused the pound to appreciate against other major currencies and, with the US dollar depreciating at the same time, the pound hit a 15-year high against the US dollar on 18 April 2007, reaching US$2 the day before, for the first time since 1992. The pound and many other currencies continued to appreciate against the dollar; sterling hit a 26-year high of US$2.1161 on 7 November 2007 as the dollar fell worldwide. From mid-2003 to mid-2007, the pound/euro rate remained within a narrow range (€1.45 ± 5%).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18717338", "title": "United States dollar", "section": "Section::::Value.\n", "start_paragraph_id": 117, "start_character": 0, "end_paragraph_id": 117, "end_character": 570, "text": "The value of the U.S. dollar was therefore no longer anchored to gold, and it fell upon the Federal Reserve to maintain the value of the U.S. currency. The Federal Reserve, however, continued to increase the money supply, resulting in stagflation and a rapidly declining value of the U.S. dollar in the 1970s. 
This was largely due to the prevailing economic view at the time that inflation and real economic growth were linked (the Phillips curve), and so inflation was regarded as relatively benign. Between 1965 and 1981, the U.S. dollar lost two thirds of its value.\n", "bleu_score": null, "meta": null } ] } ]
null
spssr
the modern "war on women"
[ { "answer": "Don't have time to respond fully right now; spend 7 minutes watching the video at the bottom. \n\n_URL_0_", "provenance": null }, { "answer": "There are a variety of issues that have come up that seem to relegate women to second class citizens, two of the biggest being birth control and abortion.\n\nViagra is covered by insurance, but some are fighting to keep birth control uncovered. Many in the media hinted that women who needed birth control were sluts. \n\nThere was a bill that would require women to undergo an invasive vaginal ultrasound (which many likened to being raped) before an abortion, the idea being that they would change their mind after seeing their unborn baby.\n\nIn addition, tax breaks for women who stay home and have children lead to an environment where women are encouraged to give up careers for children and their husbands. \n\nThere is nothing wrong with women or men staying home or being chaste, but when the government is supporting this, it creates an oppressive environment.\n", "provenance": null }, { "answer": "A couple of recent developments lead women to say this:\n\n* All across the nation, Planned Parenthood has been labeled an abortion factory and pure evil by Republicans. Efforts are underway to defund them. Supporters, on the other hand, say that abortions are just a very small (and not publicly funded) part of what PP does (breast cancer screenings, pap smears, birth control...)\n\n* The GOP is opposed to extending the Violence Against Women Act, which makes it easier for women to get help in domestic violence situations.\n\n* They are against abortions even if the mother's health is in danger. 
Or if she was raped.\n\n* GOP-controlled states want to pass/have already passed laws that force women who want to abort to look at the unborn child/see videos of other abortions/watch educational videos.\n\n* Some Republicans want to end/or did end equal-pay laws (which make it easier for women to sue if they are being discriminated against). \n\n* House Republicans wanted to pass a measure that would only cover abortions if a rape was \"forcible\", hence precluding, for example, statutory rape.\n\nThose are just a couple of points. There are many more. \n\nBut whether those things constitute a \"war on women\" is surely a very subjective matter.", "provenance": null }, { "answer": "In this election year, many conservatives, particularly Rick Santorum, have gone the \"traditional values\" route, which includes a number of issues that primarily affect women:\n\n* abortion - there have been many pro-life candidates, but Santorum takes it a step further, wanting to ban abortion in the case of rape\n* ultrasound guilt trips - requiring the patient to view an ultrasound, in some cases a penetrative vaginal ultrasound, before getting an abortion\n* birth control - Santorum opposes all forms of birth control, and supports a state's right to restrict or even ban birth control for adult women\n* stay at home moms - Ann Romney has been touting her choice to be a stay at home mom, when in fact her wealth makes her more of a stay at home nanny-cook-housekeeper supervisor.
The vision statement of the modern-day movement is that war is not the answer and that women can help to develop alternatives to violence.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8160567", "title": "Women in warfare and the military (1900–45)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 436, "text": "This timeline of women in warfare and the military (1900–45) deals with the role of women in the military around the world from 1900 through 1945. By the end of the 19th century, women in some countries were starting to serve in limited roles in various branches of the military. The two major events in this time period were World War I and World War II. Please see Women in World War I and Women in World War II for more information.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13433057", "title": "Feminism (international relations)", "section": "Section::::Gender & War.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 1055, "text": "A prominent basis for much of feminist scholarship on war is to emphasize the ways in which men are seen as the sole actors in war. Women, on the other hand, are commonly conceived of as acted upon throughout conflict and conflict resolutions. As asserted by Swati Parashar, they are documented as “grieving widows and mothers, selfless nurses and anti-war activists”. The reality is that women play various roles in war and for different reasons, depending on the conflict. It is noted that women have actively participated in war since the mid-nineteenth century. This process of eliminating women from war is a tool used to discredit women as agents in the international arena. A focal point for many feminist scholars is mass rape during wartime. These scholars will seek to explain why wartime sexual violence is so prevalent through history and today. 
Some scholars turn to explanations such as rape as a weapon or as a reward for soldiers during the war. Others see sexual violence as an inevitable consequence when social restraints are removed. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "59758191", "title": "List of women pacifists and peace activists", "section": "Section::::Introduction.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 702, "text": "Women have been active in peace movements since the 19th century. After the First World War broke out in 1914, many women's organizations became involved in peace activities. In 1915, the International Congress of Women in the Hague brought together representatives from women's associations in several countries, leading to the establishment of the Women's International League for Peace and Freedom. This in turn led to national chapters which continued their work in the 1920s and 1930s. After the Second World War, European women once again became involved in peace initiatives, mainly as a result of the Cold War, while from the 1960s the Vietnam War led to renewed interest in the United States.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "682382", "title": "Phage therapy", "section": "Section::::Cultural impact.\n", "start_paragraph_id": 61, "start_character": 0, "end_paragraph_id": 61, "end_character": 228, "text": "The 2012 collection of military history essays about the changing role of women in warfare, \"Women in War – from home front to front line\" includes a chapter featuring phage therapy: \"Chapter 17: Women who thawed the Cold War\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43831243", "title": "Timeline of women in war in the United States, Pre-1945", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 342, "text": "This is a timeline of women in warfare in the United States up until the end of World War II. 
It encompasses the colonial era and indigenous peoples, as well as the entire geographical modern United States, even though some of the areas mentioned were not incorporated into the United States during the time periods that they were mentioned.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41685926", "title": "Timeline of women in warfare and the military in the United States, 2000–2010", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 227, "text": "This article lists events involving women in warfare and the military in the United States from 2000 until 2010. For 2011 onward, please see Timeline of women in warfare and the military in the United States from 2011–present.\n", "bleu_score": null, "meta": null } ] } ]
null
3r0dsl
if one had an electric motor on a car that turned one wheel and three generators on the other three wheels, with two battery banks (one charging and one being used) could i run this car forever?
[ { "answer": "Nope! Any question that asks \"could I run such-and-such system forever?\" violates the first and second laws of thermodynamics.\n\nMore specifically, you would not be able to recover electricity at a fast enough rate with the three generators to refill the battery powering the motor on the first wheel. Eventually, it will lose power altogether depending on how efficient your generators and motor are.", "provenance": null }, { "answer": "No.\n\nYou're using energy to pull the car *and* using energy to turn the generating wheels to store again as energy. So that, alone, means your plan can't work. Let's say the car takes 50 watts to move, and each generator wheel needs, oh, 10 watts of power to turn and generate power. That means your motor wheel (*in an ideal universe without thermodynamics...we'll get to that later*) would need to put out 30 watts of power *just to turn the other wheels*. But it also has to drag the car, so the total the motor wheel has to use is 80 watts, but you're just getting 30, so no matter what you have a net loss of 50. Even if the car only took a fraction of a watt to move, it would still be a net loss of power.\n\nBut way more importantly, thermodynamics is a thing. Energy is *never* transferred without loss. The wires don't transfer the electricity from the batteries perfectly, so you lose a little. The motor wheel loses energy to friction against the road. The generator wheels also lose friction to the road. The wires lose energy to the batteries, etc.\n\nIf you just had a motor directly turning a generator, you would *still* lose energy. That's like asking if a crank can turn itself...\n\n*Those numbers are just made up off the top of my head and wildly inaccurate.", "provenance": null }, { "answer": "Nope, because the system is not perfectly efficient. 
There are losses due to heat (caused by friction of mechanical parts, and electrical losses in the wiring), noise, etc., so that the amount of power generated would always be less than the amount needed to move the car.\n\nWhat you are proposing is perpetual motion, and by all known laws of the universe it is impossible. Google **entropy**--the most powerful thing in the universe.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "42780793", "title": "Invalid carriage", "section": "Section::::Between the wars.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 359, "text": "From the 1930s to late 1940s, Nelco Industries made a three-wheeled battery powered vehicle. Steering was by means of a tiller connected to the front wheel. The tiller also provided speed control. Forward or reverse by a separate control. The 24 volt electric motor could act as a generator to recharge the battery when going downhill. The motor was 24 volt.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "951197", "title": "History of the electric vehicle", "section": "Section::::First practical electric cars.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 343, "text": "An early electric-powered two-wheel cycle was put on display at the 1867 World Exposition in Paris by the Austrian inventor Franz Kravogl, but it was regarded as a curiosity and could not drive reliably in the street. Another cycle, this time with three wheels, was tested along a Paris street in April 1881 by French inventor Gustave Trouvé \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51666499", "title": "Rutherford (rocket engine)", "section": "Section::::Description.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 204, "text": "Each engine has two small motors that generate while spinning at 40 000 rpm. 
The first-stage battery, which has to power the pumps of nine engines simultaneously, can provide over 1 MW of electric power.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30733", "title": "Tram", "section": "Section::::History.:Other power sources.:Battery.\n", "start_paragraph_id": 57, "start_character": 0, "end_paragraph_id": 57, "end_character": 248, "text": "As early as 1834, Thomas Davenport, a Vermont blacksmith, had invented a battery-powered electric motor which he later patented. The following year he used it to operate a small model electric car on a short section of track four feet in diameter.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11756603", "title": "History of trams", "section": "Section::::Electric.:Battery.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 248, "text": "As early as 1834, Thomas Davenport, a Vermont blacksmith, had invented a battery-powered electric motor which he later patented. The following year he used it to operate a small model electric car on a short section of track four feet in diameter.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "564559", "title": "Tinkertoy", "section": "Section::::Standard parts.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 285, "text": "Sets with battery-powered electric motors were available; these sets also typically included at least one wooden \"double pulley,\" with a single snug-fitting through-drilled center hole, and grooved rims at two diameters, allowing different moving parts to operate at different speeds.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52412490", "title": "Mercedes-Benz EQ", "section": "Section::::EQC.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 201, "text": "The vehicle has two electric motors, one on the front axle and one on the rear axle. 
It is all-wheel drive and has a power output of and . The battery is floor-mounted and has an estimated range of . \n", "bleu_score": null, "meta": null } ] } ]
null
7c79ka
Why were European states such as Britain, France, Germany and more, able and willing to colonize and conquer places like Africa, the Americas, and more?
[ { "answer": "Oh dear. I'll be frank with you: the reason why you haven't received an answer to this question is because the answer would be enormous. You could spend the rest of your life studying how these questions apply to but a single group of people - like the Maya - and never arrive at a definitive conclusion.\n\nI'm a bit occupied at the moment but I want to point you in the direction of [a series of posts I made a while back.](_URL_0_) A redditor asked why people in the United States do not say that native peoples were conquered, another user replied incorrectly, and I offered a brief overview of how Europe responded to the conquest. I think the linked reply will give you a taste of how complex the \"motivation\" question is. \n\nBut the real issue I want to discuss is the \"why couldn't indigenous peoples defend themselves\". Again, I want to point you to [a post](_URL_1_) I made on that thread which contextualizes the topic you are talking about. After you have read that material, why don't we see what questions you have and we can go from there?", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2164837", "title": "History of Virginia", "section": "Section::::Early European exploration.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 201, "text": "After their discovery of the New World in the 15th century, European states began trying to establish New World colonies. 
England, the Dutch Republic, France, Portugal, and Spain were the most active.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39353023", "title": "Settler colonialism", "section": "Section::::In early modern and modern times.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 243, "text": "During the early modern period, some European nation-states and their agents adopted policies of colonialism, competing with each other to establish colonies outside of Europe, at first in the Americas, and later in Asia, Africa, and Oceania.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7299", "title": "Colonialism", "section": "Section::::Impact of colonialism and colonisation.:Slavery and indentured servitude.\n", "start_paragraph_id": 501, "start_character": 0, "end_paragraph_id": 501, "end_character": 393, "text": "European nations entered their imperial projects with the goal of enriching the European metropole. Exploitation of non-Europeans and other Europeans to support imperial goals was acceptable to the colonisers. Two outgrowths of this imperial agenda were slavery and indentured servitude. In the 17th century, nearly two-thirds of English settlers came to North America as indentured servants.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2317810", "title": "Colbertism", "section": "Section::::History.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 432, "text": "In the 17th century, European powers had already successfully colonized some part of the world. England had a successful hold on North America and various other areas, including India, Spain had a large hold of South America and North America, and the Dutch had successful outposts in India. 
The French were beginning to colonize parts of North America, but did not have permanent settlements like the Spanish and British colonies.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4305070", "title": "History of Western civilization", "section": "Section::::Fall of the western empires: 1945–1999.\n", "start_paragraph_id": 221, "start_character": 0, "end_paragraph_id": 221, "end_character": 836, "text": "The loss of overseas colonies partly also led many Western nations, particularly in continental Europe, to focus more on European, rather than global, politics as the European Union rose as an important entity. Though gone, the colonial empires left a formidable cultural and political legacy, with English, French, Spanish, Portuguese, Russian and Dutch being spoken by peoples across far flung corners of the globe. European technologies were now global technologies – religions like Catholicism and Anglicanism, founded in the West, were booming in post colonial Africa and Asia. Parliamentary (or presidential) democracies, as well as rival Communist style one party states invented in the West had replaced traditional monarchies and tribal government models across the globe. Modernity, for many, was equated with Westernisation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9458703", "title": "Pre-modern human migration", "section": "Section::::Early Modern period.:Colonial empires.\n", "start_paragraph_id": 54, "start_character": 0, "end_paragraph_id": 54, "end_character": 701, "text": "European Colonialism from the 16th to the early 20th centuries led to an imposition of a European colonies in many regions of the world, particularly in the Americas, South Asia, Sub-Saharan Africa and Australia, where European languages remain either prevalent or in frequent use as administrative languages. Major human migration before the 18th century was largely state directed. 
For instance, Spanish emigration to the New World was limited to settlers from Castile who were intended to act as soldiers or administrators. Mass immigration was not encouraged due to a labour shortage in Europe (of which Spain was the worst affected by a depopulation of its core territories in the 17th century).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "106086", "title": "Guns, Germs, and Steel", "section": "Section::::Synopsis.:Outline of theory.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 1005, "text": "Diamond also proposes geographical explanations for why western European societies, rather than other Eurasian powers such as China, have been the dominant colonizers, claiming Europe's geography favored balkanization into smaller, closer nation-states, bordered by natural barriers of mountains, rivers, and coastline. Threats posed by immediate neighbours ensured governments that suppressed economic and technological progress soon corrected their mistakes or were outcompeted relatively quickly, whilst the region's leading powers changed over time. Other advanced cultures developed in areas whose geography was conducive to large, monolithic, isolated empires, without competitors that might have forced the nation to reverse mistaken policies such as China banning the building of ocean-going ships. Western Europe also benefited from a more temperate climate than Southwestern Asia where intense agriculture ultimately damaged the environment, encouraged desertification, and hurt soil fertility.\n", "bleu_score": null, "meta": null } ] } ]
null
1q3tuy
What was life like in Spain in the early 70's?
[ { "answer": "I don't have any sources at hand but I will tell you what I remember from my Spanish High School history lessons:\n\nFranco no longer is the dictator he was at the end of the Civil War, his aging and loosening of the executive power have made him a somewhat ceremonial figure, specially after he appointed his close confidant, admiral Luis Carrero Blanco as prime minister in june 1973. In this time ETA became an active threat, but they almost never made moves on civilians, and targeted mostly politicians and the military, Carrero Blanco himself being one of the casualties.\n\nWhen talking about political issues you have to take into account that all Spanish parties besides Francoist movements were not dead in any way and were active either very discreetly or abroad. People did discuss politics but no real protests or manifestations ocurred because a) censorship was strict and powerful all the way until Franco's death and b) Spain was ending a period of incredible economic and social prosperity. Almost everyone everywhere was happy, had a wealthy lifestyle and managed to do things that many other western nations had done 40 years before, such as buying their own cars, partying, going to concerts and travelling. Tourism BOOMED in Spain, and has not stopped since. It is rumoured also that Francoist authorities somehow managed to rig the Eurovision Song Contest in favor of Spain in 1968, making a singer called Massiel win with an awful song called 'La La La'.\n\nIt was only after Franco's death that the country was somewhat shaken up and had some economic troubles and challenges, but once democracy had been brought back, it would most certainly have seemed that the country had been a stable democratic state in appearance for some time. Cuturally, economically and diplomatically speaking, the country was the same as its neigbours except for the fact that it was a dictatorship. 
By 1972 people were just waiting for Franco to die.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "159557", "title": "Tourism in Spain", "section": "Section::::Nightlife.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 660, "text": "The nightlife in Spain is very attractive to both tourists and locals. Spain is known to have some of the best nightlife in the world. Big cities such as Madrid and Barcelona are favorites amongst the large and popular discothèques. For instance, Madrid is known as the number one party city for clubs such as Pacha and Kapital (seven floors), and Barcelona is famous for Opium and Sutton famous clubs. The discothèques in Spain are open until odd hours such as 7am. The Baleraric Islands, such as Ibiza and Mallorca, are known to be major party destinations, as well as favored summer resort and in Andalusia, Malaga, specially the area of the Costa del Sol.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "63624", "title": "Ibiza", "section": "Section::::Tourism.:Nightlife.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 831, "text": "Night life in Ibiza has undergone several changes since the island's opening to international tourism in the late 1950s. Origins of today's club culture may be traced back to the hippie gatherings held during the 1960s and 1970s. During these, people of various nationalities sharing the hippie ethos would regroup, talk, play music and occasionally take drugs. These would most often happen on beaches during the day, with nude bathing a common sight, and in rented fincas in the evenings or at nights. Apart from this confidential scene, which nevertheless attracted many foreigners to the island, local venues during the 1960s consisted mostly of bars, which would be the meeting points for Ibicencos, ex-pats, seafarers and tourists alike. 
The Estrella bar on the port and La Tierra in the old city of Eivissa were favourites.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "60164594", "title": "Women in Francoist Spain", "section": "Section::::Timeline.:1940s.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 464, "text": "The 1940s and 1950s were a dark period in Spanish history, where the country was still recovering from the effects of the Spanish Civil War, where the economy was poor and people suffered a huge number of deprivations as a result of the loss of life and the repressive nature of the regime which sought to vanquish any and all remaining Republican support by going after anyone who had been affiliated with or expressed any sympathies towards the Second Republic.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2045842", "title": "Costa Brava", "section": "Section::::History.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 421, "text": "A few years after the Spanish Civil War when some sort of order had been restored, the gradual breaking down of Spain's international isolation in the 1950s cleared the way for new options in tourism. The sea and the sun were drawing increasing numbers of people, which combined with the Côte d'Azur already being overcrowded in those days, enhanced the appeal of Costa Brava for holiday-makers who made their way there.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27393", "title": "Economy of Spain", "section": "Section::::Economic and financial crisis.:Employment crisis.:Youth crisis.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 1348, "text": "During the early 1990s, Spain experienced a period of economic crisis as a result of a larger, Europe-wide economic episode that led to a rise in unemployment rates. 
Many young adults in Spain found themselves trapped in a cycle of temporary jobs, which resulted in the creation of a secondary class of workers through reduced wages, job stability and advancement opportunities. As a result, many Spaniards, predominantly unmarried young adults, emigrated to other countries in order to pursue job opportunities and raise their standard of life, which left only a small amount of young adults living below the poverty line in Spain. Spain experienced another economic crisis during the 2000s, which also prompted a rise in Spanish citizens emigrating to neighboring countries with more job stability and better economic standings. Youth unemployment remains a concern in Spain, prompting researchers such as Anita Wölfl to suggest that Spain could decrease unemployment by making labor market programs and job-search assistance accessible to the most disadvantaged youth. She has also posited that this would improve Spain's weakened youth labor market, as issues with the school to work transition has made it difficult to find long-term employment. As a solution, Wölfl has suggested making improvements by matching their skills with businesses.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "577482", "title": "Cuenca, Spain", "section": "Section::::History.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 740, "text": "The first decades of the 20th century were as turbulent as in other regions of Spain. There was poverty in rural areas, and the Catholic Church was attacked, with monks, nuns, priests and a bishop of Cuenca, Cruz Laplana y Laguna, being murdered. During the Spanish Civil War Cuenca was part of the republican zone (\"Zona roja\" or: \"the red zone\"). It was taken in 1938 by General Franco's troops. 
During the post-war period the area suffered a major economic decline, causing many people to migrate to more prosperous regions, mainly the Basque Country and Catalonia, but also to other countries such as Germany. The city started to recover slowly from 1960 to 1970, and the town limits went far beyond the gorge to the flat surroundings.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5285217", "title": "Cervantes, Lugo", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 305, "text": "Until the second half of the 20th century this part of Spain was isolated. Many settlements had no road access or electricity and were cut off by snow in winter. Communities were small and self-sufficient. This way of life is shown in the small museum at Piornedo which is located in a preserved palloza.\n", "bleu_score": null, "meta": null } ] } ]
null
2ka6ey
What will I hear if I talk while breaking the sound barrier?
[ { "answer": "Depends, in a jet it eill sound like you talking, if your head is exposed it will sound like nothing (and you would be screaming neways). What you hear is transmitted through the air (sound is just waves in air). So if the air is contained ans traveling at the same velocity, no change. If it isnt contained all you will hear is the air, and the air friction would be increadibly painful ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "4049625", "title": "Noise barrier", "section": "Section::::Design.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 753, "text": "The acoustical science of noise barrier design is based upon treating an airway or railway as a line source. The theory is based upon blockage of sound ray travel toward a particular receptor; however, diffraction of sound must be addressed. Sound waves bend (downward) when they pass an edge, such as the apex of a noise barrier. Barriers that block line of sight of a highway or other source will therefore block more sound. Further complicating matters is the phenomenon of refraction, the bending of sound rays in the presence of an inhomogeneous atmosphere. Wind shear and thermocline produce such inhomogeneities. The sound sources modeled must include engine noise, tire noise, and aerodynamic noise, all of which vary by vehicle type and speed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30182396", "title": "Timeline of United States inventions (1946–1991)", "section": "Section::::Cold War (1946–1991).:Post-war and the late 1940s (1946–1949).\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 390, "text": "In aerodynamics, the sound barrier usually refers to the point at which an aircraft moves from transonic to supersonic speed. 
On October 14, 1947, just under a month after the United States Air Force had been created as a separate service, tests culminated in the first manned supersonic flight where the sound barrier was broken, piloted by Air Force Captain Chuck Yeager in the Bell X-1.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "40453154", "title": "2012 in Austria", "section": "Section::::Events.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 300, "text": "BULLET::::- October 14 – Austrian skydiver Felix Baumgartner becomes the first person to break the sound barrier without any machine assistance during a record space dive out of the \"Red Bull Stratos\" helium-filled balloon from 24 miles (39 kilometers) over Roswell, New Mexico in the United States.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4049625", "title": "Noise barrier", "section": "Section::::Design.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 281, "text": "The noise barrier may be constructed on private land, on a public right-of-way, or on other public land. 
Because sound levels are measured using a logarithmic scale, a reduction of nine decibels is equivalent to elimination of approximately 86 percent of the unwanted sound power.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47374", "title": "2012", "section": "Section::::Events.:October.\n", "start_paragraph_id": 49, "start_character": 0, "end_paragraph_id": 49, "end_character": 295, "text": "BULLET::::- October 14 – Austrian skydiver Felix Baumgartner becomes the first person to break the sound barrier without any machine assistance during a record space dive out of the \"Red Bull Stratos\" helium-filled balloon from 128,000 ft equaling over Roswell, New Mexico in the United States.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2072768", "title": "The Sound Barrier", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 510, "text": "The Sound Barrier (known in the United States, as Breaking Through the Sound Barrier and Breaking the Sound Barrier) is a 1952 British aviation film directed by David Lean. It is a fictional story about attempts by aircraft designers and test pilots to break the sound barrier. It was David Lean's third and final film with his wife Ann Todd, but it was his first for Alexander Korda's London Films, following the break-up of Cineguild. 
\"The Sound Barrier\" stars Ralph Richardson, Ann Todd, and Nigel Patrick.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34879651", "title": "2012 in Europe", "section": "Section::::Events.:October.\n", "start_paragraph_id": 122, "start_character": 0, "end_paragraph_id": 122, "end_character": 287, "text": "BULLET::::- October 14: Austrian skydiver Felix Baumgartner becomes the first person to break sound barrier without mechanical assistance during a record space dive out of the \"Red Bull Stratos\" helium-filled balloon from 39,045 kilometers over Roswell, New Mexico in the United States.\n", "bleu_score": null, "meta": null } ] } ]
null
bgdvnn
Is it possible that society actually needs wars as an engine for progress in technology? What does history say about this?
[ { "answer": "This question is so broad, the answer will depend pretty much entirely on what you want it to be. It would be easy to name many cases in which war produced technological innovations, but just as easy to cite many cases in which it didn't. Whichever point you want to prove, you can pick your examples to match. Someone who reads the question and thinks immediately of the World Wars will say \"competitive arms development, and the wartime challenges of logistics and medical care, contribute to technological invention and improvement in ways that might otherwise have taken longer, if they would have happened at all.\" But someone who thinks of, say, the Peloponnesian War might answer \"war is only a destructive force; the priorities and costs of war actually inhibit any technological development that might otherwise have received the necessary funding, manpower and thought.\" Whole swathes of human history attest to the fact that endemic warfare often produces anarchy and poverty, not innovation and technological change.\n\nThis is complicated by the question whether the particular technologies developed in wartime (and for the sake of fighting wars more effectively) actually matter outside of that context. Wars may make a society better at fighting wars, but does that help anyone in society at large? It's easy to point at technologies that were invented for a military purpose and have since made the leap into civilian life; but similarly, it's easy to point at technological innovations (like, say, siege towers or anti-tank shells) that serve only to solve military problems and don't contribute anything to the way people live.\n\nThe guide you offer into these hugely subjective topics is that you're asking whether society *needs* war as a way to propel technological change. The implied assumption is that without wars, such change might not happen at all, or at a much slower rate and in fewer ways. 
This framing would theoretically allow us to put all of the war-related technology of history onto a big pile and ask (passing by the question whether all of it has a use in society) whether war was needed to produce all that, or whether it would have been developed regardless. But the problem there is that there's no cut-and-dried distinction between \"war tech\" and \"civilian tech.\" They build on each other. For example, the steam engine was invented in an entirely civilian context and applied first in industries like mining and cloth making. But then it was adopted by navies to propel ships, which then kicked off a host of military innovations related to the new energy source. Modern tanks may be marvels of offensive and defensive technology, but the first tanks were designed around readily available agricultural tractor chassis. Does society need war to generate more advanced technology, or does military technology need society to produce things that allow it to develop?\n\nAny answer will inevitably devolve into a chicken-and-egg question. Who actually owes whom for what? Which innovations can militaries wholly claim (especially given that modern military technology is developed in a network of government contracting and liaison with civilian industry)? Can we isolate the improvements made during certain conflicts and can we assume they would not have been made without those conflicts? How do we define \"need\" when we say that society needs warfare to propel technological change?\n\nSuch questions can only be answered on a case-by-case basis. Technological improvements need to be seen in their historical context: not just the when and why of their development, but the origins of their parts and their principles. You don't just randomly come up with radar or nuclear fission to fight wars better. Similarly, society at large doesn't just sit there waiting for the boffins at the War Department to give them these things to play around with. 
The development of new technology is a process with different people contributing for different reasons - some in the military, some in universities, some in their shed or their study. It's impossible to say categorically which side needs the other to make any progress.\n\nJust as importantly, the mere existence of a military conflict does not speed up military innovation; there needs to be a context in which new technologies are available from other spheres and new technology is thought to offer opportunities for major tactical or strategic advantage. If these conditions are absent, wars will simply be fought in the tried-and-tested manner until one side wins. Many resources will be spent or destroyed in the process. It's not by definition an ideal environment for the development of different technology.\n\nIn short, we can't simply answer this question one way or the other. It is uncontroversial that research spurred on by war has contributed substantially to the improvement of existing technologies and the development of new ones (especially in recent times). After all, people invest ingenuity and resources in things that matter to them, and warfare has tended to matter a lot. On the other hand, it is also uncontroversial that technological innovation happens outside of the military sphere, and that militaries benefit substantially from this. It is also presumably uncontroversial that war is not primarily a creative force, but one mainly interested in enhancing its ability to destroy. 
Any attempt to resolve these contradictions in a single universal truism about war's influence on technology seems futile to me.", "provenance": null }, { "answer": "There's a bit of determinism wrapped up in this question that I'd like to tackle -- although u/Iphikrates and u/restricteddata have done a good job of delineating how we think of \"technology\" and how it's applied, and how our specific technological-collegiate-military-industrial research-driven era is very much a product of the postwar period -- there's also a bit of an assumption that technology wins wars, and the side that techs the best will win. \n\nI'd like to tackle this by relating the question to ships and shipbuilding, which seems like it's an obvious area where \"better technology\" will win the day. But the weird thing is that the early days of the English (later British) navy's real rise as a power started at least partly because they adopted an inferior technology. \n\nI was reading about cannons the other day, as one does, and I was reminded of a major change in procurement that happened starting around the 1540s in England. Henry VIII was a Renaissance prince in many senses, not least his admiration for and desire to own many large guns. At the time, large cannons were cast from bronze, an alloy of copper and tin; while England (or Cornwall, anyhow) was rich in tin, copper had to be imported or guns themselves imported entire, which was expensive even for someone who dissolved monasteries and seized church property. (Copper was not discovered in England until Elizabeth's reign.) In the 1540s, though, crown investment in gun foundries in the Weald of Kent started to pay off in the form of cast iron guns, the first of which was cast in 1543. \n\nIron is not nearly as good a material as bronze for casting guns. 
Iron melts at a much higher point than bronze; it is heavier (more dense) than bronze; it is prone to flaws in the casting, which can compromise its strength; the black powder used at the time will corrode the inside of the gun (sulphuric acid is one of its combustion products). \n\nIron guns will also burst without warning, whereas bronze guns will bulge around flaws and then split. This was a major drawback for gunners at the time, who had no way of calculating how much powder and shot a gun might be safely charged with, other than by loading it and firing it off. \n\nBut the crucial advantage iron guns had over bronze ones was that they were substantially cheaper -- medieval England was rich in iron, and iron guns and shot (bronze guns usually used stone shot) were between 10-20 percent of the cost of bronze guns. (The cost of iron guns actually fell by about 20-25 percent, from £10-12 per ton to £8-9 per ton, from 1565 to 1600, in a period of otherwise rapid inflation.) \n\nSo, the iron gun was heavy, of uncertain strength, and prone to bursting without warning; but you could also get 5 or 10 iron guns for the cost of one bronze gun. The Royal Navy was still armed mostly with bronze through Elizabeth's reign, but cheap iron guns were suddenly widely available for smaller ships, including many that straddled the late-Elizabethan line between private commerce, privateering, and piracy. \n\nSo the reason for this rather long-winded answer is to point out that the measures we're often drawn into for what technology is \"better\" don't always match with how the rubber meets the road. The Japanese Zero was a \"better\" aircraft than the Grumman Wildcat for values of \"better\" that value speed, range and dogfighting, but the Wildcat racked up something like a 6:1 kill ratio when pilots fought in pairs and took advantage of their planes' ruggedness. 
British-built ships in the period I study are [often argued to be \"worse\" than French and Spanish designs,](_URL_0_), yet Trafalgar was so decisive a British victory that it was the last fleet combat for 111 years. Warfighting is not just about the stuff, but also logistics, training, performance and a host of other things.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "562666", "title": "War economy", "section": "", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 490, "text": "On the supply side, it has been observed that wars sometimes have the effect of accelerating progress of technology to such an extent that an economy is greatly strengthened after the war, especially if it has avoided the war-related destruction. This was the case, for example, with the United States in World War I and World War II. Some economists (such as Seymour Melman) argue, however, that the wasteful nature of much of military spending eventually can hurt technological progress.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1265697", "title": "Lester Frank Ward", "section": "Section::::Theory of war and conflict.\n", "start_paragraph_id": 52, "start_character": 0, "end_paragraph_id": 52, "end_character": 985, "text": "In \"Pure Sociology: A Treatise on the Origin and Spontaneous Development of Society\" (1903) Ward theorizes that throughout human history conflict and war has been the force that is most responsible for human progress. It was through conflict that hominids gained dominance over animals. It was through conflict and war that Homo Sapiens wiped out the less advanced hominid species and it was through war that the more technologically advanced races and nations expanded their territory and spread civilization. 
Ward sees war as a natural evolutionary process and like all natural evolutionary processes war is capricious, slow, often ineffective and shows no regard for the pain inflicted on living creatures. One of the central tenets of Ward's world view is that the artificial is superior to the natural and thus one of the central goals of Applied Sociology is to replace war with a system that retains the progressive elements that war has provided but without the many downsides.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27129241", "title": "Ian Morris (historian)", "section": "Section::::\"War! What is it Good For?\".\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 1079, "text": "\"War! What is it Good For?: Conflict and the Progress of Civilization from Primates to Robots\" was published by Farrar, Straus & Giroux in the US and Profile Books in Britain in April 2014. Morris argues that there is enough evidence to trace the history of violence across many thousands of years and that a startling fact emerges. For all of its horrors, over the last 10,000 years, war has made the world safer and richer, as it is virtually the only way that people have found to create large, internally pacified societies that then drive down the rate of violent death. The lesson of the last 10,000 years of military history, he argues, is that the way to end war is by learning to manage it, not by trying to wish it out of existence. Morris also devotes a chapter to the 1974-1978 Gombe Chimpanzee War in Tanzania. The German translation of the book, \"Krieg: Wozu er gut ist\", was published by Campus Verlag in October 2013. A Dutch translation was published in 2014 by Spectrum (Houten/Antwerp): \"Verwoesting en vooruitgang\". 
Five more translations are being prepared.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "863", "title": "American Civil War", "section": "Section::::Memory and historiography.:Technological significance.\n", "start_paragraph_id": 271, "start_character": 0, "end_paragraph_id": 271, "end_character": 1016, "text": "Numerous technological innovations during the Civil War had a great impact on 19th-century science. The Civil War was one of the earliest examples of an \"industrial war\", in which technological might is used to achieve military supremacy in a war. New inventions, such as the train and telegraph, delivered soldiers, supplies and messages at a time when horses were considered to be the fastest way to travel. It was also in this war when countries first used aerial warfare, in the form of reconnaissance balloons, to a significant effect. It saw the first action involving steam-powered ironclad warships in naval warfare history. Repeating firearms such as the Henry rifle, Spencer rifle, Colt revolving rifle, Triplett & Scott carbine and others, first appeared during the Civil War; they were a revolutionary invention that would soon replace muzzle-loading and single-shot firearms in warfare, as well as the first appearances of rapid-firing weapons and machine guns such as the Agar gun and the Gatling gun.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47067336", "title": "Laudato si'", "section": "Section::::Content.:Technology.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 1244, "text": "Given these significant short-comings of technology, \"scientific and technological progress cannot be equated with progress of humanity and history,\" and we are deluded by the myth of progress to believe that \"ecological problems will solve themselves simply with the application of new technology and without need for ethical consideration or deep change.\" A profound redefinition of progress and 
\"liberation from the dominant technocratic paradigm\" are needed, i.e., \"we have the freedom needed to limit and direct technology; we can put it at the service of another type of progress, one which is healthier, more human, more social, more integral.\" More fundamentally, according to the pontiff, we need to recognize that \"technology severed from ethics will not easily be able to limit its own power,\" and that \"the most extraordinary scientific advances, the most amazing technical abilities, the most astonishing economic growth, unless they are accompanied by authentic social and moral progress, will definitively turn against man.\" Pope Francis adds that the environmental crisis can ultimately only be solved if our immense technological developments are accompanied by a \"development in human responsibility, values, and conscience.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24983621", "title": "The Unconquerable World", "section": "", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 636, "text": "The key political progress has been the idea of democracy - even the worst democracy - carries within it the principle of equality which is a deeply seated contradiction to an also deeply embedded practice of inequality - see Tocqueville. Ironically modern national democracy allowed for a new kind of army, in which it was possible to mobilise masses of men prepared to die - apparently in defence of their own national interest and the principle of democracy. 
The disaster of the modern war system was fed by an unholy confluence of democracy, science, industrial revolution, and imperialism which developed through the 19th century.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33183164", "title": "History of the Technion – Israel Institute of Technology", "section": "Section::::Overview.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 627, "text": "The extent to which technology determines history and the creation and destiny of nations is a question of historical scholarship, with the Technion – Israel Institute of Technology cited as a striking example. Initiated with the help of increasing Jewish unity made possible by the new communication technologies of the Second Industrial Revolution, the Technion was born 36 years before Israel declared independence. In that time it educated the engineers and brought the expertise to literally lay the infrastructure for a modern state. This included the fundamental infrastructure of electricity, water supplies and roads.\n", "bleu_score": null, "meta": null } ] } ]
null
3klkpd
What do you end up with if you tear something apart at a 'molecular level'?
[ { "answer": "Well first, that sounds like gobbledygook ad copy, so I wouldn't put much stock in the scientific value of that TV spot. \n\nAs for what breaking something at a \"molecular level\" would mean, that would depend on the nature of the substance. For crystalline materials, breaking a single crystal, you just get two smaller crystals. For many biological materials, breaking them would just separate some molecules into the respective parts. For large covalently-linked materials like rubber or cellulose, at some point chemical bonds would have to break, meaning you get new \"molecules\", but the nature of those molecules is poorly defined to begin with because the entire mass is connected by covalent bonds. Is that a single molecule? Not by most definitions. \n\nCould a blender break a single small molecule? No. Could it break a larger one like a molecule of nucleic acid or protein? Yes, the shear force could lead to breakage of a bond. Technically a \"different substance\" but with most of the same properties. 
The more important function of a blender is disrupting the arrangement of various molecules with respect to each other, not their action on individual molecules themselves.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "53030002", "title": "Damage", "section": "Section::::Physical damage.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 245, "text": "Although all damage at the atomic level manifests as broken atomic bonds, the manifestation of damage at the macroscopic level depends on the material, and can include cracks and deformation, as well as structural weakening that is not visible.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53030002", "title": "Damage", "section": "Section::::Physical damage.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 880, "text": "All physical damage begins on the atomic level, with the shifting or breaking of atomic bonds, and the rate at which damage to any physical thing occurs is therefore largely dependent on the elasticity of such bonds in the material being subjected to stress. Damage can occur where atomic bonds are not completely broken, but are shifted to create unstable pockets of concentration and diffusion of the material, which are more susceptible to later breakage. The effect of outside forces on a material depends on the relative elasticity or plasticity of the material; if a material tends towards elasticity, then changes to its consistency are reversible, and it can bounce back from potential damage. 
However, if the material tends towards plasticity, then such changes are permanent, and each such change increases the possibility of a crack or fault appearing in the material.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15368725", "title": "Bond cleavage", "section": "Section::::Applications.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 299, "text": "In biochemistry, the process of breaking down large molecules by splitting their internal bonds is catabolism. Enzymes which catalyse bond cleavage are known as lyases, unless they operate by hydrolysis or oxidoreduction, in which case they are known as hydrolases and oxidoreductases respectively.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56847880", "title": "Fragmentation (medicine)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 674, "text": "In medicine, fragmentation is an operation that breaks of solid matter in a body part into pieces. Physical force (e.g., manual force, ultrasonic force), applied directly or indirectly through intervening body parts, are used to break down the solid matter into pieces. The solid matter may be an abnormal by-product of a biological function, or a foreign body. The pieces of solid matter are not taken out, but are eliminated or absorbed through normal biological functions. Examples would be the fragmentation of kidney and urinary bladder stones (nephrolithiasis and urolithiasis, respectively) by shock-wave lithotripsy, laser lithotripsy, or transurethral lithotripsy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9213410", "title": "Hematoma block", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 605, "text": "When a bone is fractured as a result of an injury, the two fragments may be displaced relative to each other. 
If they are not, usually no treatment is required other than immobilisation in an appropriate cast. If displacement does occur, then the space separating the fragments fills with blood shed by the damaged blood vessels within the bone. This collection, or pool, of blood is known as a hematoma. Injection of a suitable local anesthetic by needle and syringe through the skin into this hematoma produces relief of the pain caused by the fracture, allowing the bones to be painlessly manipulated.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10356", "title": "Endothermic process", "section": "Section::::Details.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 945, "text": "All chemical reactions involve both the breaking of existing and the making of new chemical bonds. A reaction to break a bond always requires the input of energy and so such a process is always endothermic. When atoms come together to form new chemical bonds, the electrostatic forces bringing them together leave the bond with a large excess of energy (usually in the form of vibrations and rotations). If that energy is not dissipated, the new bond would quickly break apart again. Instead, the new bond can shed its excess energy - by radiation, by transfer to other motions in the molecule, or to other molecules through collisions - and then become a stable new bond. Shedding this excess energy is the exothermicity that leaves the molecular system. 
Whether a given overall reaction is exothermic or endothermic is determined by the relative contribution of these bond breaking endothermic steps and new bond stabilizing exothermic steps.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "927421", "title": "Molecular lesion", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 444, "text": "A molecular lesion or point lesion is damage to the structure of a biological molecule such as DNA, enzymes, or proteins that results in reduction or absence of normal function or, in rare cases, the gain of a new function. Lesions in DNA consist of breaks and other changes in the chemical structure of the helix (see \"types of DNA lesions\") while lesions in proteins consist of both broken bonds and improper folding of the amino acid chain.\n", "bleu_score": null, "meta": null } ] } ]
null
1cyoq4
why does money exist?
[ { "answer": "Because it's inconvenient to have to trade water buffalo for skittles.\n", "provenance": null }, { "answer": "Less flippantly, it exists because resources are scarce. Money helps us decide how to allocate scarce resources.\n\nFor example: there is a limited amount of beautiful, beachfront property on earth. How do we decide who should get to use that property? Money provides us the answer - whoever is willing to pay the most for use of the property, gets to use it.\n\nIt is important to note that money is just *one possible* system for allocating scarce resources. Money has some advantages, which I will get into. But there are other possibilities. For example, you could allocate scarce resources by decree, with a committee deciding who gets the beachfront property. Or, you could allocate scarce resources by whoever gets it first. Whoever first builds a house on the property gets it. \n\nThe advantage to using money to allocate scarce resources is that money can generally only be made by being a productive member of society. And that means making life better for people, because if you didn't, they wouldn't give you their money! That's the theory, anyway, and plenty of people would disagree with it, but the fact is that it's worked damn well for Western civilization so far, problems and all. Are there *better* ways to allocate resources than money? Maybe! Who knows! Money is certainly very well tested.\n\nSo if you want to do away with money, then you have to come up with an alternate system of allocating scarce resources. Because resources will *always* be scarce. Even if everyone on earth had all the food they wanted, and we somehow managed to make everyone an Xbox 360 and a nice car, beachfront property would still be in limited supply. As would Ferraris and 20 carat emeralds. All of these need to be allocated somehow. 
How do you propose to do that?", "provenance": null }, { "answer": "The problems you think start with money actually start with human vices.\n\nThe capitalist market system would actually work flawlessly, if not for human greed, carelessness, irrationality, gluttony, irresponsibility, lack of respect and care for their fellow humans, inability to think long term, be empathic and support equality.\n\nWell educated, egalitarian, enlightened and empathic cultures are significantly less prone to have economic problems, and even when they do, the negative impact on human lives is lessened, and spread equally, without significantly harming anyone.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "18915542", "title": "Nomisma", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 289, "text": "BULLET::::- \"\"...but money has become by convention a sort of representative of demand; and this is why it has the name 'money' (nomisma)-because it exists not by nature but by law (nomos) and it is in our power to change it and make it useless.\"\" Aristotle, Nicomachean Ethics [1133b 1].\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1029281", "title": "Monochrom", "section": "Section::::Main projects (in chronological order).\n", "start_paragraph_id": 72, "start_character": 0, "end_paragraph_id": 72, "end_character": 730, "text": "BULLET::::- To quote monochrom's press statement: \"Money is frozen desire. Thus it governs the world. Money is used for all forms of trade, from daily shopping at the supermarket to trafficking in human beings and drugs. In the course of all these transactions, our money wears out quickly, especially the smaller bank notes that are changing hands constantly. ... Money is dirty, and thus it is a living entity. This is something we take literally: money is an ideal environment for microscopic organisms and bacteria. We want to make your money grow. 
In a potent nutrient fluid under heat lamps we want to get as much life as we can out of your dollar bills.\" (\"Growing Money\" was part of the \"Experience The Experience\" tour.)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9223", "title": "Economics", "section": "Section::::Macroeconomics.:Inflation and monetary policy.\n", "start_paragraph_id": 138, "start_character": 0, "end_paragraph_id": 138, "end_character": 648, "text": "Money is a \"means of final payment\" for goods in most price system economies, and is the unit of account in which prices are typically stated. Money has general acceptability, relative consistency in value, divisibility, durability, portability, elasticity in supply, and longevity with mass public confidence. It includes currency held by the nonbank public and checkable deposits. It has been described as a social convention, like language, useful to one largely because it is useful to others. In the words of Francis Amasa Walker, a well-known 19th-century economist, \"Money is what money does\" (\"Money is \"that\" money does\" in the original).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2763667", "title": "History of money", "section": "Section::::Theories of money.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 313, "text": "Other theorists also note that the status of a particular form of money always depends on the status ascribed to it by humans and by society. For instance, gold may be seen as valuable in one society but not in another or that a bank note is merely a piece of paper until it is agreed that it has monetary value.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3944846", "title": "Philosophy of Søren Kierkegaard", "section": "Section::::Themes in his philosophy.:Abstraction.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 1080, "text": "How is money an abstraction? 
Money gives the illusion that it has a direct relationship to the work that is done. That is, the work one does is worth so much, equals so much money. In reality, however, the work one does is an expression of who one is as a person; it expresses one's goals in life and associated meaning. As a person, the work one performs is supposed to be an external realization of one's relationship to others and to the world. It is one's way of making the world a better place for oneself and for others. What reducing work to a monetary value does is to replace the concrete reality of one's everyday struggles with the world —to give it shape, form and meaning— with an abstraction. Kierkegaard lamented that \"a young man today would scarcely envy another his capacities or skill or the love of a beautiful girl or his fame, no, but he would envy him his money. Give me money, the young man will say, and I will be all right.\" But Kierkegaard thinks this emphasis on money leads to a denial of the gifts of the spirit to those who are poor and in misery. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22937795", "title": "Value-form", "section": "Section::::Genesis of the forms of value.:Money-form of value.\n", "start_paragraph_id": 76, "start_character": 0, "end_paragraph_id": 76, "end_character": 460, "text": "When money is generally used in trade, money becomes the general expression of the form of value of goods being traded; usually this is associated with the emergence of a state authority issuing legal currency. 
At that point the form of value appears to have acquired a fully independent, separate existence from any particular traded object (behind this autonomy, however, is the power of state authorities or private agencies to \"enforce\" financial claims).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10568255", "title": "Understanding (TV series)", "section": "Section::::Episodes.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 277, "text": "BULLET::::21. \"Money\": Money is the most powerful tool that Man has ever invented. It can build and destroy empires, and make people to go to war. Some people even believe that money is the key to happiness. What makes an object money? Where does it come from and who decides?\n", "bleu_score": null, "meta": null } ] } ]
null
24chw0
how do intangible currencies like bitcoin and dogecoin have value?
[ { "answer": "People will accept it for goods and services. Therefore it has value.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "55886", "title": "Local currency", "section": "Section::::Benefits.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 999, "text": "5. While most of these currencies are restricted to a small geographic area or a country, through the Internet electronic forms of complementary currency can be used to stimulate transactions on a global basis. In China, Tencent's QQ coins are a virtual form of currency that has gained wide circulation. QQ coins can be bought for Renminbi and used to buy virtual products and services such as ringtones and on-line video game time. They can also be obtained through on-line exchange for goods and services at about twice the Renminbi price, by which additional 'money' is being directly created. Though virtual currencies are not 'local' in the tradition sense, they do cater to the specific needs of a particular community, a virtual community. Once in circulation, they add to the total effective purchasing power of the on-line population as in the case of local currencies. The Chinese government has begun to tax the coins as they are exchanged from virtual currency to actual hard currency.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39596725", "title": "Coinbase", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 290, "text": "Coinbase is a digital currency exchange headquartered in San Francisco, California. They broker exchanges of Bitcoin, Bitcoin Cash, Ethereum, Ethereum Classic, and Litecoin with fiat currencies in approximately 32 countries, and bitcoin transactions and storage in 190 countries worldwide.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36662188", "title": "Cryptocurrency", "section": "Section::::Legality.:U.S. 
tax status.\n", "start_paragraph_id": 74, "start_character": 0, "end_paragraph_id": 74, "end_character": 255, "text": "In a paper published by researchers from Oxford and Warwick, it was shown that bitcoin has some characteristics more like the precious metals market than traditional currencies, hence in agreement with the IRS decision even if based on different reasons.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6796824", "title": "Virtual currency", "section": "Section::::Limits on being currency.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 473, "text": "The IRS decided in March 2014, to treat bitcoin and other virtual currencies as property for tax purposes, not as currency. Some have suggested that this makes bitcoins not fungible—that is one bitcoin is not identical to another bitcoin, unlike one gallon of crude oil being identical to another gallon of crude oil—making bitcoin unworkable as a currency. Others have stated that a measure like accounting on average cost basis would restore fungibility to the currency.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "54143900", "title": "Economics of bitcoin", "section": "Section::::Classification.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 686, "text": "The question whether bitcoin is a currency or not is disputed. Bitcoins have three useful qualities in a currency, according to \"The Economist\" in January 2015: they are \"hard to earn, limited in supply and easy to verify\". Economists define money as a store of value, a medium of exchange and a unit of account, and agree that bitcoin has some way to go to meet all these criteria. It does best as a medium of exchange, the number of merchants accepting bitcoin has passed 100,000. 
However, the bitcoin market suffered from volatility, limiting the ability of bitcoin to act as a stable store of value, and retailers accepting bitcoin use other currencies as their principal unit of account.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49895034", "title": "Coincheck", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 331, "text": "Coincheck is a bitcoin wallet and exchange service headquartered in Tokyo, Japan, founded by Koichiro Wada and Yusuke Otsuka. It operates exchanges between bitcoin, ether and fiat currencies in Japan, and bitcoin transactions and storage in some countries. In April 2018, Coincheck was acquired by Monex Group for 3.6 billion yen.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "222427", "title": "Crypto-anarchism", "section": "Section::::Anonymous trading.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 611, "text": "Bitcoin is a currency generated and secured by peer-to-peer networked devices that maintain a communal record of all transactions within the system that can be used in a crypto-anarchic context. The idea behind bitcoin can be traced to \"The Crypto Anarchist Manifesto\". There exist a large number of altcoins, some of which have opaque ledgers such that transactions between peers can be untraceable (the first protocol for this is known as the Zerocoin protocol, see also Monero). Some altcoin currencies also act as decentralized autonomous organizations, or act as platforms for enabling such organizations.\n", "bleu_score": null, "meta": null } ] } ]
null
r6pma
Evolution Debate
[ { "answer": "[Nothing in Biology Makes Sense Except in the Light of Evolution - Theodosius Dobzhansky](_URL_0_)", "provenance": null }, { "answer": "The best \"proof\" for evolution that I can come up with is the fact that we get sick every year. All viruses mutate and \"evolve\" in order to become better resistant to our treatment methods. Every year, a new strain of cold appears that is more resistant to last years medicine.\n\nThis is essentially natural selection in it's purest form. Every year when you take medicine to combat your cold, there is a small amount of the virus that resists the treatment. This small amount is not enough to keep you sick and therefore you recover. However, this small remaining amount then breeds and multiplies until it is the new dominant strain of virus. To my knowledge, this process is one of the easiest ways to validate natural selection.\n\n_URL_0_", "provenance": null }, { "answer": "[Evidence of common descent](_URL_1_) \n[20+ evidences for evolution](_URL_0_) \n[A website with a load of links supporting evolution](_URL_2_)", "provenance": null }, { "answer": "[Laboratory Observed Speciation](_URL_3_)\n\n[A starting point for objections to evolution](_URL_1_)\n\n[Nylon-eating bacteria](_URL_2_) The latter being interesting in that it has apparently evolved enzymes only useful for digesting nylon products, which are not only different from its relatives, but also which would have also obviously been useless before humans invented nylon in the 30's.\n\n[Discussion on the misuse of the scientific terms fact and theory in the evolution debate](_URL_0_)", "provenance": null }, { "answer": "You may be presented with arguments, presented as *fact* that you may not be able to counter if you do not prepare for them.\n\nFor example, it is sometimes claimed there has been insufficient time, given a particular rate of random mutation, to produce the observed complexity in the natural world.\nYou can question the assumptions, such as how 
many mutations would be required to produce a given complexity, but you won't likely have a better basis to estimate this made-up quantity. You can, however, point to natural selection as a way to amplify the rate of beneficial mutation propagation. That is, the process is not entirely random; it is more of a go-with-what-works \"survival of the fittest\".\n\nAnother example is the claim that very complex systems could never evolve because intermediate stages are not viable and would not produce survival benefits.\nI have heard eyes used in this line of reasoning, and point out that there are existing examples of photo-receptive organisms all along the spectrum of \"eyes\", such as deep sea hydrothermal shrimp, copepods, etc.\n\n\n[Here](_URL_0_) is one *point*. And, [here](_URL_1_) is its *counterpoint*.\n\nedit: fixed link (1)", "provenance": null }, { "answer": "[The peppered moth](_URL_0_) is a great and easy-to-understand example of observable evolution that has been studied in depth.\n\nThe moth was originally light grey & splotchy to help it camouflage on the lichen growing on trees. During the industrial revolution the pollution slowly killed off the lichen. Moths born with mutations that made them a slightly darker shade had a better chance of survival because they could hide from predators better (the trees were black with pollution from coal burning). 
Eventually most of the moths born were a dark grey, almost black shade, because the trait of having dark color allowed them to survive and reproduce to pass that trait on.\nNow that the pollution has been cleaned up and less coal is burned, the numbers of light-colored moths are starting to increase again.\n", "provenance": null }, { "answer": "This will not be a debate in the sense that both parties do not follow the same rules of debate, nor share the same basis of assumptions.", "provenance": null }, { "answer": "Point out the fact that the theory of evolution is a serious scientific endeavor with multitudes of observations and evidence that all point to the truth of that theory. Then point out that Christianity, Judaism and Islam all tell completely different stories about the exact same God, and how many sects within those religions disagree completely (sometimes violently) about the interpretation of just one source of evidence (i.e. whichever holy text). It's not a matter of believing in evolution; evolution is an observable scientific fact. Saying you believe in evolution is like saying you believe in physics or the water cycle.", "provenance": null }, { "answer": "There's an avalanche of information supporting evolution, but I think the most helpful thing would be to know the weak points of counterarguments like Intelligent Design.\n\nRead Behe's book that lays out the concept of 'Irreducible Complexity,' and figure out (or simply do a quick Google search for) why that concept does not work.\n\nGood counterarguments to Behe include 'Arch' theory, which is a good place to start!", "provenance": null }, { "answer": "If you're looking for books, try Why Evolution is True, by Jerry Coyne. [There's an accompanying blog that might be worth reading too](_URL_0_).", "provenance": null }, { "answer": "1) Concrete examples: Lenski's work on the long-term evolution experiment and nylon-eating bacteria are great examples. Google them; they will show up.\n\n2) Read [this](_URL_0_). 
I link that all the time; it's a fantastic article, and is readable at the educated layman level. It's not that fact-packed, but it will help you understand the concept a bit better.\n\n3) To counter their arguments, make them prove their opinion at least to the point that evolution has been proven to them; similar levels of rigor. I'm sure your bio textbook has some examples in it, so they need to present data supporting their proposed mechanism at at least the same level of rigor. Introduce them to this concept before you start any discussion.\n\n4) \"God did it because I believe God did it\" doesn't count because you can counter with a statement of equal rigor: \"God didn't do it because I don't believe God did it.\" The way you do this is important, though; when they say that, casually comment that their statement is a bit flippant. When they counter that it isn't, argue that if you used the opposite statement (God didn't do it because I believe he didn't), you would feel uncomfortable as the comment would seem flippant to you. They will disagree to support their stance. Now you can counter with that exact statement.\n\n5) Talkorigins is a great website. It's about as old as the internet, I think.\n\n\n", "provenance": null }, { "answer": "If you explain how evolution works you'll see that it is almost a tautology. Also I just read [this](_URL_0_) which might give you some pointers if you want to convince even those who won't listen to your other arguments.", "provenance": null }, { "answer": "OP said not to focus on just religious counter-points to evolution, but who else doesn't believe in evolution? Or at least who doesn't believe in evolution and has their own point of view on the matter other than \"I don't know.\"\n\nIt would probably be best to focus on the religious points, like irreducible complexity, and it \"violating\" the second law of thermodynamics. 
\n\nRead through this wiki\n\n_URL_0_\n\nIt should tackle just about everything that you could possibly encounter. \n\nFor points of your own, I find very simple and tangible examples work best. Like how different antibiotics on a petri dish of the same bacteria will kill different-sized circles of the bacterial lawn, and how that can be analogous to how far those bacteria have evolved resistance to that antibiotic. ", "provenance": null }, { "answer": "One type of argument I find very compelling against Intelligent Design is pointing out examples of poor design in some living beings, like the recurrent laryngeal nerve in mammals, particularly in giraffes. \n\n_URL_0_\n\nThe left laryngeal nerve could make a very straight path right into the larynx, but instead it descends down to the thorax, passes through the aortic arch and comes up again to reach the larynx. Imagine this path in an animal like a giraffe: this means that instead of making a straight path of some 20-30 centimeters from the vagus nerve to the larynx, it goes down and up again a two-meter-long neck!! And this introduces some problems: material waste, delays in the nervous impulse reaching the larynx, a need for greater energy expenditure for a sufficiently strong nervous impulse to reach the destination (because of dissipation), etc.\n\nIt's easy to explain this in an evolutionary setting. Fishes don't have necks, so the equivalent of the laryngeal nerve makes a straight path to its destination through the various blood vessels in the fish thorax. Evolution is constrained to generate new body plans based on the existing ones. It can't just invent something entirely new. New bodies evolve in small steps from the previously existing bodies. And all intermediate states must be viable! So, as mammal necks start growing, the nerve is already stuck with its path through the thorax. 
Selection would favor an animal with a shorter nerve, but there are not many ways to get from long nerve to short nerve in small viable steps. \n\nAs biologists often put it: evolution is not a global optimizer. It must work through small, viable steps on what already exists. \n\nAn intelligent all-knowing designer is not constrained by this! He can just make the perfect animal from scratch. So... how do you explain poor design by a perfect designer? Deliberate sloppiness? \n\n\n\n\n\n", "provenance": null }, { "answer": "When are they going to have the Gravity debate?", "provenance": null }, { "answer": "Evolution is nothing but the change in allele frequency over time, so I'd start with that simple definition. I'd also bring up all the examples of antibiotic resistance such as MRSA.", "provenance": null }, { "answer": "[Index to Creationist Claims](_URL_0_)", "provenance": null }, { "answer": "1. There is no debate; this isn't a popularity contest, so unfortunately the less intelligent don't get to participate this time\n2. HxNx, x = different numbers, flu strains, different receptors show up each year\n3. Viruses like herpes and HIV change their surface proteins to evade the immune system, aka evolving\n4. Historical evidence of humans, *Homo* species developing bigger brains and growing a bigger skeletal system\n5. The HLA B53 gene is more frequently found in West African populations, which helps against malaria\n6. CCR5-Δ32 _URL_0_ ", "provenance": null }, { "answer": "Depending on the structure of the debate you may want to try to take a more objective view. 
Your clear bias compromises you...just a thought.", "provenance": null }, { "answer": "Youtube user [CDK007 has done a lot of videos explaining evolution and countering creationist/ID attacks on evolution](_URL_3_).\n\nA few picks:\n\n* [The basics of how evolution works](_URL_1_) / [Part 2](_URL_6_)\n\n* [Clock evolution / argument against a common straw man attack](_URL_4_)\n\n* [Irreducible complexity](_URL_0_)\n\n* [Evolution of the Flagellum](_URL_5_)\n\n* [Why Intelligent Design is wrong](_URL_2_) / [Part 2](_URL_7_)\n\nAlso remember that evolution does not cover where life originated from, only what happens when you already have self-replicating organisms. How life started is covered by a separate theory called abiogenesis.", "provenance": null }, { "answer": "However controversial the original post was, and however likely it was to be completely opposed by the scientific community, deleting it was unnecessary. Now we have no context or second party.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "8787159", "title": "Objections to evolution", "section": "Section::::Defining evolution.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 423, "text": "One of the main sources of confusion and ambiguity in the creation–evolution debate is the definition of \"evolution\" itself. In the context of biology, evolution is genetic changes in populations of organisms over successive generations. 
The word also has a number of different meanings in different fields, from evolutionary computation to molecular evolution to sociocultural evolution to stellar and galactic evolution.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7422172", "title": "Evolution: A Theory in Crisis", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 294, "text": "Evolution: A Theory in Crisis is a 1985 book by Michael Denton, in which the author argues that the scientific theory of evolution by natural selection is a \"theory in crisis\". Reviews by scientists say that the book distorts and misrepresents evolutionary theory and contains numerous errors.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1571390", "title": "Sociocultural evolution", "section": "Section::::Modern theories.:Sociobiology.\n", "start_paragraph_id": 82, "start_character": 0, "end_paragraph_id": 82, "end_character": 487, "text": "The current theory of evolution, the modern evolutionary synthesis (or neo-darwinism), explains that evolution of species occurs through a combination of Darwin’s mechanism of natural selection and Gregor Mendel’s theory of genetics as the basis for biological inheritance and mathematical population genetics. Essentially, the modern synthesis introduced the connection between two important discoveries; the units of evolution (genes) with the main mechanism of evolution (selection).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9127632", "title": "Biology", "section": "Section::::Foundations of modern biology.:Evolution.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 437, "text": "The term \"evolution\" was introduced into the scientific lexicon by Jean-Baptiste de Lamarck in 1809, and fifty years later Charles Darwin posited a scientific model of natural selection as evolution's driving force. 
(Alfred Russel Wallace is recognized as the co-discoverer of this concept as he helped research and experiment with the concept of evolution.) Evolution is now used to explain the great variations of life found on Earth.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "423727", "title": "Scientific consensus", "section": "Section::::Politicization of science.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 452, "text": "The theory of evolution through natural selection is also supported by an overwhelming scientific consensus; it is one of the most reliable and empirically tested theories in science. Opponents of evolution claim that there is significant dissent on evolution within the scientific community. The wedge strategy, a plan to promote intelligent design, depended greatly on seeding and building on public perceptions of absence of consensus on evolution.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19919849", "title": "Precambrian rabbit", "section": "Section::::Theoretical background.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 840, "text": "Further confusion arose in 1980–1981, when there was a long debate in the pages of \"Nature\" about the scientific status of the theory of evolution. Specifically, the argument was on the factors influencing and nature of the unit of selection in the genome, with one side positing natural selection, and the other, neutral mutation. Neither of the parties seriously doubted that the theory was both scientific and, according to current scientific knowledge, true. Some participants objected to statements that appeared to present the theory of evolution as an absolute dogma, however, rather than as a hypothesis that so far has performed very well, and both sides quoted Popper in support of their positions. Evolution critics such as Phillip E. 
Johnson took this as an opportunity to declare that the theory of evolution was unscientific.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "57068222", "title": "Ladybird Expert", "section": "Section::::Selected Titles.:Evolution.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 336, "text": "Evolution is a 2017 study guide to evolution written by Steve Jones and illustrated by Rowan Clifford. The volume, according to the publisher's website, explores the extraordinary diversity of life on our planet through the complex interactions of one very simple theory, and, according to its author, goes from foxes to human frailty.\n", "bleu_score": null, "meta": null } ] } ]
null
1mi7am
in u.s., "math". in u.k., "maths". why?
[ { "answer": "They're both shortened forms of the word \"mathematics\". Americans just shorten the whole word indiscriminately, while the rest of the world keeps the plural *s* and shortens the root of the word. ", "provenance": null }, { "answer": "Linguists discuss the math vs. maths wording: _URL_0_", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "27684253", "title": "Common Core State Standards Initiative", "section": "Section::::Reception and criticism.\n", "start_paragraph_id": 85, "start_character": 0, "end_paragraph_id": 85, "end_character": 605, "text": "The mathematicians Edward Frenkel and Hung-Hsi Wu wrote in 2013 that the mathematical education in the United States is in \"deep crisis\" caused by the way math is currently taught in schools. Both agree that math textbooks, which are widely adopted across the states, already create \"mediocre de facto national standards\". The texts, they say, \"are often incomprehensible and irrelevant\". The Common Core State Standards address these issues and \"level the playing field\" for students. They point out that adoption of the Common Core State Standards and how best to test students are two separate issues.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1109879", "title": "Math League", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 557, "text": "Math League is a Math competition for elementary, middle, and high school students in the United States, Canada, and other countries. The Math League was founded in 1977 by two high school mathematics teachers, Steven R. Conrad and Daniel Flegler. Math Leagues, Inc. publishes old contests through a series of books entitled \"Math League Press\". 
The purpose of the Math League Contests is to provide students \"an enriching opportunity to participate in an academically-oriented activity\" and to let students \"gain recognition for mathematical achievement\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18831", "title": "Mathematics", "section": "Section::::History.:Etymology.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 488, "text": "The word \"mathematics\" comes from Ancient Greek μάθημα (\"máthēma\"), meaning \"that which is learnt\", \"what one gets to know\", hence also \"study\" and \"science\". The word for \"mathematics\" came to have the narrower and more technical meaning \"mathematical study\" even in Classical times. Its adjective is (\"mathēmatikós\"), meaning \"related to learning\" or \"studious\", which likewise further came to mean \"mathematical\". In particular, (\"mathēmatikḗ tékhnē\"), , meant \"the mathematical art\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2436129", "title": "Matha", "section": "Section::::Etymology.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 232, "text": "A \"matha\" (Sanskrit: मठ) refers to \"cloister, institute or college\", and in some contexts refers to \"hut of an ascetic, monk or renunciate\" or temple for studies. The root of the word is \"math\", which means \"inhabit\" or \"to grind\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5570773", "title": "Mathematics education in Australia", "section": "Section::::Queensland.:Mathematics A.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 1003, "text": "Maths A covers more practical topics than Maths B and C, but it is still OP eligible. 
There are considerably fewer algebraic concepts in this subject, and it is suitable for students who either struggled with mathematics in Year 10, or who do not require a knowledge of abstract mathematics in the future. Maths A is designed to help students to develop an appreciation of the value of Mathematics to humanity. Students learn how mathematical concepts may be applied to a variety of life situations including business and recreational activities. The skills encountered are relevant to a vast array of careers (trade, technical, business etc.). Assessments in the subject include both formative and summative written tests, assignments and practical work. It is assessed in the categories: Knowledge & Procedures (KAPS); Modelling & Problem Solving (MAPS); Communication & Justification (CAJ). Although Maths A is not a pre-requisite subject, it is sufficient for entrance to many tertiary courses.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14740035", "title": "Mathematics (UIL)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 310, "text": "Mathematics (sometimes referred to as General Math, to distinguish it from other mathematics-related events) is one of several academic events sanctioned by the University Interscholastic League. 
It is also a competition held by the Texas Math and Science Coaches Association, using the same rules as the UIL.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8002340", "title": "Teen Talk Barbie", "section": "Section::::Controversy.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 1026, "text": "Educators including the National Council of Teachers of Mathematics objected to the \"Math class is tough\" phrase as detrimental to the effort to encourage girls to study math and science, and particularly in association with the phrases about shopping; the American Association of University Women criticized it in a report about girls receiving a relatively poor education in math and science. Mattel initially offered to exchange dolls for nonspeaking ones on request, and later apologized to the American Association of University Women, withdrew the math class phrase from those to be used in future dolls, and offered an exchange to purchasers who had a doll with that phrase. The criticism gave rise to the 1994 \"Lisa vs. Malibu Stacy\" episode of \"The Simpsons\", in which Lisa Simpson objects to sexist utterances by a \"Malibu Stacy\" doll such as \"Thinking too much gives you wrinkles.\" , the collector's price for one of the estimated 3,500 Teen Talk Barbies including the phrase \"Math class is tough\" was around $500.\n", "bleu_score": null, "meta": null } ] } ]
null
229yxj
if number of offspring in mammals roughly correlates with mammary glands, why are twins not the predominant birth in humans?
[ { "answer": "Due to bilateral symmetry (animals being roughly symmetrical down the middle) all mammals, as far as I know, have an even number of nipples, even ones where only one offspring at a time is the norm.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "7860110", "title": "Sexual differentiation in humans", "section": "Section::::Sex determination.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 698, "text": "Most mammals, including humans, have an XY sex-determination system: the Y chromosome carries factors responsible for triggering male development. In the absence of a Y chromosome, the fetus will undergo female development. This is because of the presence of the sex-determining region of the Y chromosome, also known as the SRY gene. Thus, male mammals typically have an X and a Y chromosome (XY), while female mammals typically have two X chromosomes (XX). In humans, biological sex is determined by five factors present at birth: the presence or absence of a Y chromosome, the type of gonads, the sex hormones, the internal genitalia (such as the uterus in females), and the external genitalia.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21304415", "title": "Sexual reproduction", "section": "Section::::Animals.:Mammals.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 677, "text": "There are three extant kinds of mammals: monotremes, placentals and marsupials, all with internal fertilization. In placental mammals, offspring are born as juveniles: complete animals with the sex organs present although not reproductively functional. After several months or years, depending on the species, the sex organs develop further to maturity and the animal becomes sexually mature. Most female mammals are only fertile during certain periods during their estrous cycle, at which point they are ready to mate. 
Individual male and female mammals meet and carry out copulation. For most mammals, males and females exchange sexual partners throughout their adult lives.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "246891", "title": "Y chromosome", "section": "Section::::Overview.:Variations.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 367, "text": "Most therian mammals have only one pair of sex chromosomes in each cell. Males have one Y chromosome and one X chromosome, while females have two X chromosomes. In mammals, the Y chromosome contains a gene, SRY, which triggers embryonic development as a male. The Y chromosomes of humans and other mammals also contain other genes needed for normal sperm production.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "311440", "title": "Mammary gland", "section": "Section::::Other mammals.:General.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 687, "text": "The breasts of the adult human female vary from most other mammals that tend to have less conspicuous mammary glands. The number and positioning of mammary glands varies widely in different mammals. The protruding teats and accompanying glands can be located anywhere along the two milk lines. In general most mammals develop mammary glands in pairs along these lines, with a number approximating the number of young typically birthed at a time. The number of teats varies from 2 (in most primates) to 18 (in pigs). The Virginia opossum has 13, one of the few mammals with an odd number. 
The following table lists the number and position of teats and glands found in a range of mammals:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15822899", "title": "Man", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 651, "text": "Like most other male mammals, a man's genome typically inherits an X chromosome from his mother and a Y chromosome from his father. The male fetus produces larger amounts of androgens and smaller amounts of estrogens than a female fetus. This difference in the relative amounts of these sex steroids is largely responsible for the physiological differences that distinguish men from women. During puberty, hormones which stimulate androgen production result in the development of secondary sexual characteristics, thus exhibiting greater differences between the sexes. However, there are exceptions to the above for some transgender and intersex men.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "46764", "title": "Even-toed ungulate", "section": "Section::::Anatomy.:Other.\n", "start_paragraph_id": 152, "start_character": 0, "end_paragraph_id": 152, "end_character": 366, "text": "The number of mammary glands is variable and correlates, as in all mammals, with litter size. Pigs, which have the largest litter size of all even-toed ungulates, have two rows of teats lined from the armpit to the groin area. In most cases, however, even-toed ungulates have only one or two pairs of teats. In some species, these form an udder in the groin region.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36695798", "title": "Mammalian reproduction", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 344, "text": "Most mammals are viviparous, giving birth to live young. However, the five species of monotreme, the platypuses and the echidnas, lay eggs. 
The monotremes have a sex determination system different from that of most other mammals. In particular, the sex chromosomes of a platypus are more like those of a chicken than those of a therian mammal.\n", "bleu_score": null, "meta": null } ] } ]
null
3ltm0c
How did the combatant nations of WW2 disarm their soldiers when fighting ended? Did lots of soldiers hold on to their weapons, and/or take sidearms home with them? How much military hardware was unaccounted for?
[ { "answer": "It depended on the country. I can shed some light on the American and Soviet methods of disarmament.\n\nWhen the Red Army began to demobilize large formations of its troops, the Soviet officials told them to hand over any firearms (government issue or enemy capture) or face potentially being sent to a labor camp. The Soviet government was very careful about how it went about demobilizing its vast army. Its troops had been exposed to capitalist societies, the shortcomings of the government during wartime, and many had witnessed/participated in atrocities that would tarnish the image of the Red Army soldier which was so essential to Russian post-war propaganda. Though the government bestowed numerous gifts upon discharged troops, every soldier who was demobilized also had his/her bag searched on the train before arriving back home. Most did not put up too much resistance to this action because there were still so many weapons and explosives laying about the numerous battlefields back home in Russia. (Source: Ivan's War by Catherine Merridale, pp. 356-357)\n\nIn the American military, soldiers were officially allowed to take home one souvenir firearm, and they had to register it before departing back home. The lines to do so were pretty long at places like Camp Lucky Strike in Le Havre, and many soldiers sold off what extra weapons they had. I don't know what the policy was for service weapons or if a service member could send weapons through the mail before being sent to demobilization camps. It wouldn't surprise me if they did.", "provenance": null }, { "answer": "I can help a bit with Finland. [EDIT: improved sources.]\n\nThe Finnish army was disarmed essentially in two phases: the interim peace treaty signed 19 September 1944 stipulated that the army must be demobilized from its peak of over 500 000 troops to peacetime strength of 43 000 in 2.5 months. 
The latter part, mostly comprised of the youngest troops, still fought in the Lapland War against the retreating Germans. Additionally, the paramilitary Civil Guard had to be dismantled, and with it its own depots and stored weapons.[1]\n\nPlanning for demobilization began as soon as the Continuation War began in 1941 [1], as the chaos following the First World War was still within memory and everyone wanted to avoid that. There were fears of widespread unemployment, housing crisis and problems when men returned to their workplaces after years of absence. In principle, they had a legal right to return to their pre-war places of work, but as you can imagine many of those posts had been filled during the war for example.[2] \n\nIn practice, Finnish units first moved on foot and by rail to the local area where they had been raised, and demobilized there. At designated demobilization points, the men turned in their weapons and other gear; they could keep their uniforms (with epaulettes cut off) and shoes, but had to turn in hats and belts.[1] (After the war, probably the most common menswear was the old uniform jacket, or something made from it.)\n\nIn principle, every weapon had to be turned in, including those captured from the enemy. In practice, quite a few men had already, during the war, smuggled captured trophies back home on leave for example; these trophy guns (mostly pistols) still turn up from old homes and estates. Hiding war trophies was illegal, but many officers seemed to turn a blind eye to that as long as it wasn't overtly conspicuous; on the other hand, I've heard of one sergeant for example who was court martialed, fined severely and demoted to private for hiding captured binoculars. \n\nHowever, there was also semi-official squirreling away happening: the so-called Weapons Cache Case.[3] Afraid of Soviet designs for Finland, high-ranking officers selected trusted men and began secretly caching equipment for guerrilla war all around Finland. 
The goal was to gather enough weapons, equipment and supplies for 8000 men, but after careful planning, in a few days enough equipment and food had been cached for about 35 000 men in over 1300 caches all around the country. Most of these weapons were obtained from demobilization depots and had been marked as \"damaged beyond repair\" or \"lost\" in official accounts. The operation came to light in spring 1945, and most of the weapons were returned to depots. However, it seems that as the caches were being cleared, at least some of the weapons went missing entirely - taken by men involved, as final \"life insurance.\" There have been cases where an SMG or even a light machine gun turns up during renovation, although these are rare nowadays. \n\nAccording to the Finnish Police, a substantial but unknown percentage of illegal firearms in Finland - estimated to be some tens of thousands in total - are thought to be old war trophies, even though thousands have been collected through amnesty legislation. As said, pistols such as Nagant feature prominently.\n\nSources: \n\n[1] master's thesis of officer cadet Ratinen, J. (2009) *Suomen Puolustusvoimien liikekannallepanokyky sodan jälkeisinä vuosina.* National Defence College, Helsinki. Particularly pp. 19-23, also pp. 31-34 for Weapons Cache Case.\n\n[2] Holmila, A. and Mikkonen, S. (2015) *Suomi sodan jälkeen: Pelon, katkeruuden ja toivon vuodet 1944-1949.* Jyväskylä: Atena. Chapter 1 in particular. \n\n[3] Lukkari, M (1992): Asekätkentä (3. täydennetty painos). Helsinki: Otava.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "3378101", "title": "Captured German equipment in Soviet use on the Eastern front", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 365, "text": "During World War II, losses of major items of equipment were substantial in many battles all throughout the war, with no exception on the Eastern Front. 
Due to the expense of producing such equipment as replacements, many armies made an effort to recover and re-use enemy equipment that fell into their hands, applicable to both Nazi Germany and the Soviet Union.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "898110", "title": "Table of organization and equipment", "section": "Section::::Soviet Union and Russia.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 568, "text": "The commissar, Tkachenko, went on to urgently request vehicles (including ambulances, of which there were none), small arms and support weapons, draught horses, and a closer supply base. After the first day of fighting he further reported that the lack of high-explosive shells forced the artillery to fire armor-piercing rounds at enemy firing points and troops; there were no cartridges for the submachine guns; many of the men's uniforms and footwear were worn out; and it was impossible to commit the replacements into the fighting because of the lack of weapons.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "353000", "title": "Polish–Soviet War", "section": "Section::::Course.:1920.:Logistics and plans.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 647, "text": "Logistics, nonetheless, were very bad for both armies, supported by whatever equipment was left over from World War I or could be captured. The Polish Army, for example, employed guns made in five countries, and rifles manufactured in six, each using different ammunition. The Soviets had many military depots at their disposal, left by withdrawing German armies in 1918–1919, and modern French armaments captured in great numbers from the White Russians and the Allied expeditionary forces in the Russian Civil War. 
Still, they suffered a shortage of arms; both the Red Army and the Polish forces were grossly underequipped by Western standards.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1248450", "title": "Lesley J. McNair", "section": "Section::::World War II.:Army Ground Forces.:Personnel recruiting and training.:Individual replacement system.\n", "start_paragraph_id": 81, "start_character": 0, "end_paragraph_id": 81, "end_character": 540, "text": "These initiatives were not always successful; by late 1944 and early 1945, the number of units fighting continuously or nearly continuously caused the replacement system to break down. As a result, rear echelon soldiers were often pulled from their duties to fill vacancies in front line combat units, and training for some replacement soldiers and units was cut short so they could be rushed into combat. Some units were worn down to the point of combat ineffectiveness. In others, low morale, fatigue, and sickness became more prevalent.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4031593", "title": "1 Service Battalion", "section": "Section::::History.:Royal Canadian Ordnance Corps.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 432, "text": "In the Second World War, the RCOC had a strength of 35,000 military personnel, not including the thousands of civilian personnel employed at RCOC installations. They procured all the material goods required by the Army, from clothing to weapons. Up until 1944, the RCOC was responsible for maintenance and repair. 
Ordnance Field Parks, that carried everything from spare parts to spare artillery, supported the Divisions and Corps.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "40429270", "title": "5 Service Battalion", "section": "Section::::History.:The Royal Canadian Ordnance Corps.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 432, "text": "In the Second World War, the RCOC had a strength of 35,000 military personnel, not including the thousands of civilian personnel employed at RCOC installations. They procured all the material goods required by the Army, from clothing to weapons. Up until 1944, the RCOC was responsible for maintenance and repair. Ordnance Field Parks, that carried everything from spare parts to spare artillery, supported the Divisions and Corps.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1761990", "title": "Kamenets-Podolsky pocket", "section": "Section::::2nd phase of the operation. 21 March- 17 April 1944.:Hube organizes move west.\n", "start_paragraph_id": 83, "start_character": 0, "end_paragraph_id": 83, "end_character": 205, "text": "Though supplies were still being brought in, they were insufficient to maintain the Army's fighting strength. Zhukov sent a terse ultimatum: Surrender, or every German soldier in the pocket would be shot.\n", "bleu_score": null, "meta": null } ] } ]
null
5a8de6
why is a fan higher pitched at higher speeds?
[ { "answer": "Sounds are based on frequency. The greater the frequency, the higher the pitch of the sound. As the fan is spinning faster, its frequency is greater and therefore the sound is higher.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "4045710", "title": "Computer fan", "section": "Section::::Physical characteristics.:Rotational speed.\n", "start_paragraph_id": 53, "start_character": 0, "end_paragraph_id": 53, "end_character": 476, "text": "The speed of rotation (specified in revolutions per minute, RPM) together with the static pressure determine the airflow for a given fan. Where noise is an issue, larger, slower-turning fans are quieter than smaller, faster fans that can move the same airflow. Fan noise has been found to be roughly proportional to the fifth power of fan speed; halving the speed reduces the noise by about 15 dB. Axial fans may rotate at speeds of up to around 23,000 rpm for smaller sizes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "175973", "title": "Overclocking", "section": "Section::::Disadvantages.:General.\n", "start_paragraph_id": 60, "start_character": 0, "end_paragraph_id": 60, "end_character": 878, "text": "BULLET::::- Fan noise: High-performance fans running at maximum speed used for the required degree of cooling of an overclocked machine can be noisy, some producing 50 dB or more of noise. When maximum cooling is not required, in any equipment, fan speeds can be reduced below the maximum: fan noise has been found to be roughly proportional to the fifth power of fan speed; halving speed reduces noise by about 15 dB. Fan noise can be reduced by design improvements, e.g. with aerodynamically optimized blades for smoother airflow, reducing noise to around 20 dB at approximately 1 metre or larger fans rotating more slowly, which produce less noise than smaller, faster fans with the same airflow. Acoustical insulation inside the case e.g. acoustic foam can reduce noise. 
Additional cooling methods which do not use fans can be used, such as liquid and phase-change cooling.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "644397", "title": "Hush kit", "section": "Section::::Design.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 259, "text": "This kind of high-pitched noise is much less of an issue on modern high-bypass turbofan engines as the significantly larger front fans they employ are designed to spin at much lower speeds than those found in older turbojet, and low-bypass turbofan, engines.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12877572", "title": "Fan (machine)", "section": "Section::::Noise.\n", "start_paragraph_id": 63, "start_character": 0, "end_paragraph_id": 63, "end_character": 242, "text": "Fans generate noise from the rapid flow of air around blades and obstacles causing vortexes, and from the motor. Fan noise has been found to be roughly proportional to the fifth power of fan speed; halving speed reduces noise by about 15 dB.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27883135", "title": "Airbreathing jet engine", "section": "Section::::Types of airbreathing jet engines.:Turbofan engine.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 770, "text": "The comparatively large frontal fan has several effects. Compared to a turbojet of identical thrust, a turbofan has a much larger air mass flow rate and the flow through the bypass duct generates a significant fraction of the thrust. Because the additional duct air has not been ignited, it has a slow speed, but no extra fuel is needed to provide this thrust. Instead, the energy is taken from the central core, which also gives it a reduced exhaust speed. The average velocity of the mixed exhaust air is thus reduced (low specific thrust) which is less wasteful of energy but reduces the top speed. 
Overall, a turbofan can be much more fuel efficient and quieter, and it turns out that the fan also allows greater net thrust to be available at slow speeds.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "103077", "title": "Turbofan", "section": "Section::::Principles.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 640, "text": "Because the turbine has to additionally drive the fan, the turbine is larger and has larger pressure and temperature drops, and so the nozzles are smaller. This means that the exhaust velocity of the core is reduced. The fan also has lower exhaust velocity, giving much more thrust per unit energy (lower specific thrust). The overall effective exhaust velocity of the two exhaust jets can be made closer to a normal subsonic aircraft's flight speed. In effect, a turbofan emits a large amount of air more slowly, whereas a turbojet emits a smaller amount of air quickly, which is a far less efficient way to generate the same thrust (see \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "238601", "title": "Quiet PC", "section": "Section::::Individual components in a quiet PC.:Cooling systems.:Fan.\n", "start_paragraph_id": 90, "start_character": 0, "end_paragraph_id": 90, "end_character": 819, "text": "Fan noise is often proportional to fan speed, so fan controllers can be used to slow down fans and to precisely choose fan speed. Fan controllers can produce a fixed fan speed using an inline resistor or diode; or a variable speed using a potentiometer to supply a lower voltage. Fan speed can also be reduced more crudely by plugging them into the power supply's 5 volt line instead of the 12 volt line (or between the two for a potential difference of 7 volts, although this cripples the fan's speed sensing). Most fans will run at 5 volts once they are spinning, but may not start reliably at less than 7 V. 
Some simple fan controllers will only vary the fans' supply voltage between 8 V and 12 V to avoid this problem entirely. Some fan controllers start the fan at 12 V, then drop the voltage after a few seconds.\n", "bleu_score": null, "meta": null } ] } ]
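The fan passages above state two quantitative rules: the pitch you hear tracks the blade-pass frequency (blades passing a fixed point per second, so doubling RPM doubles the tone), and noise is roughly proportional to the fifth power of fan speed, so halving the speed cuts noise by about 10·log10(2^5) ≈ 15 dB. A minimal sketch checking both numbers (Python; the function names are illustrative, not from any source):

```python
import math

def blade_pass_frequency_hz(rpm: float, n_blades: int) -> float:
    """Dominant tone of a fan: blades passing a fixed point per second.
    Higher RPM means higher frequency, heard as higher pitch."""
    return rpm / 60.0 * n_blades

def noise_change_db(speed_ratio: float, exponent: float = 5.0) -> float:
    """dB change when fan speed is scaled by speed_ratio, assuming the
    empirical rule that noise power scales as speed**exponent."""
    return 10.0 * math.log10(speed_ratio ** exponent)

# A 3-blade fan at 1200 RPM hums at 60 Hz; at 2400 RPM the tone is 120 Hz.
print(blade_pass_frequency_hz(1200, 3))  # 60.0
print(blade_pass_frequency_hz(2400, 3))  # 120.0
# Halving the speed: about -15 dB, matching the rule of thumb quoted above.
print(round(noise_change_db(0.5), 1))    # -15.1
```

In practice a fan also radiates broadband flow noise, but the blade-pass tone is what shifts audibly in pitch as speed changes.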
null
8s7i6p
why, during medical trials, are both the control group and subject group told they are receiving the experimental drug, instead of both being told they receive a placebo?
[ { "answer": "Telling people you're feeding them sugar pills when in fact they're taking an experimental drug with possibly disastrous side effects is considered unethical, as it will make them more likely to shrug off bleeding from their ears and eyes as \"probably just allergies or something.\" \n\nTelling people you're feeding them an experimental drug when in fact you're just giving them sugar pills is less likely to do any harm. ", "provenance": null }, { "answer": "Adding to the other comment, when it eventually goes public, the people will be told they are receiving the actual drug and not just a placebo. It helps both groups stay in the correct state of mind", "provenance": null }, { "answer": "AFAIK during clinical trials participants are not told anything about which of the two they're getting. As far as they know, they could either be getting the real drug or the placebo, and they won't be told which it is/was until after the trial is completed. Telling people they're getting the real drug when they're not (which would be true for the control group) would be very unethical. Did you hear/read somewhere that this was how it worked? ", "provenance": null }, { "answer": "I’ve never heard of a study where the subjects are told they definitively are going to get one or the other, and possibly take the other. This would be considered unethical. An ethical study is one where the subjects are told they *may* get the placebo or drug beforehand. Thereon the best kind of subject is where both the subjects and researchers don’t know who got what until the end. Now having said all that I can answer the question. The power of suggestion is quite strong and the psychosomatic effects (mind effecting the body) of either a placebo or nocebo (opposite of placebo - where you perceive a negative effect from something which shouldn’t normally), can really make or break a study. In that sense it wouldn’t be a good control if you said everyone was told it was the drug or not. 
Because you couldn’t tell whether it was actually the drug or not, or a placebo or nocebo effect. Again, this is why the best studies are when subjects and researchers only find out who was given what after the study. In that sense you will only be looking at raw data in all possible circumstances. What you’re looking at then, is a subject who only reports what they feel and what effects they notice, with the treatment completely unknown to them. Sure there will be people who make up their own minds, but because it’s not already based on a prejudiced expectation of what they ‘believe’ the pill to be, the data is much more valuable. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2221807", "title": "Clinical control group", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 327, "text": "If a drug is being tested, the control group will frequently be given a placebo. This is done as a double blind test, as neither the healthcare professional nor the patient knows if they are receiving the drug under test or a placebo, and don't find out which substance was administered until after the experiment is concluded.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1243554", "title": "Pharmacovigilance", "section": "Section::::Risk management.:Risk/benefit profile of drugs.\n", "start_paragraph_id": 77, "start_character": 0, "end_paragraph_id": 77, "end_character": 863, "text": "The variables in a clinical trial are specified and controlled, but a clinical trial can never tell you the whole story of the effects of a drug in all situations. In fact, nothing could tell you the whole story, but a clinical trial must tell you enough; \"enough\" being determined by legislation and by contemporary judgements about the acceptable balance of benefit and harm. 
Ultimately, when a drug is marketed it may be used in patient populations that were not studied during clinical trials (children, the elderly, pregnant women, patients with co-morbidities not found in the clinical trial population, etc.) and a different set of warnings, precautions or contraindications (where the drug should not be used at all) for the product's labeling may be necessary in order to maintain a positive risk/benefit profile in all known populations using the drug.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17537", "title": "Lysergic acid diethylamide", "section": "Section::::Research.:Psychedelic therapy.\n", "start_paragraph_id": 124, "start_character": 0, "end_paragraph_id": 124, "end_character": 401, "text": "Two recent reviews concluded that conclusions drawn from most of these early trials are unreliable due to serious methodological flaws. These include the absence of adequate control groups, lack of followup, and vague criteria for therapeutic outcome. In many cases studies failed to convincingly demonstrate whether the drug or the therapeutic interaction was responsible for any beneficial effects.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4402358", "title": "Theralizumab", "section": "Section::::Medicines and Healthcare products Regulatory Agency view.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 530, "text": "In December 2006, the final report of the Expert Group on Phase One Clinical Trials was published. It found that the trial had not considered what constituted a safe dose in humans, and that then-current law had not required it. 
It made 22 recommendations, including the need for independent expert advice before a high-risk study was allowed, testing only one volunteer at a time (sequential inclusion of participants) in case there were rapid ill effects, and administering drugs slowly by infusion rather than as an injection.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24931", "title": "Psychotherapy", "section": "Section::::Effects.:Evaluation.\n", "start_paragraph_id": 92, "start_character": 0, "end_paragraph_id": 92, "end_character": 789, "text": "One issue with trials is what to use as a placebo treatment group or non-treatment control group. Often, this group includes patients on a waiting list, or those receiving some kind of regular non-specific contact or support. Researchers must consider how best to match the use of inert tablets or sham treatments in placebo-controlled studies in pharmaceutical trials. Several interpretations and differing assumptions and language remain. Another issue is the attempt to standardize and manualize therapies and link them to specific symptoms of diagnostic categories, making them more amenable to research. Some report that this may reduce efficacy or gloss over individual needs. Fonagy and Roth's opinion is that the benefits of the evidence-based approach outweighs the difficulties.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7096313", "title": "Controlling for a variable", "section": "Section::::Experiments.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 272, "text": "In controlled experiments of medical treatment options on humans, researchers randomly assign individuals to a treatment group or control group. This is done to reduce the confounding effect of irrelevant variables that are not being studied, such as the placebo effect. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "46232572", "title": "In silico clinical trials", "section": "Section::::Rationale.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 708, "text": "Predicting low-frequency side effects has been difficult, because such side effects need not become apparent until the treatment is adopted by many patients. The appearance of severe side-effects in phase three often causes development to stop, for ethical and economic reasons. Also, in recent years many candidate drugs failed in phase 3 trials because of lack of efficacy rather than for safety reasons. One reason for failure is that traditional trials aim to establish efficacy and safety for most subjects, rather than for individual subjects, and so efficacy is determined by a statistic of central tendency for the trial. Traditional trials do not adapt the treatment to the covariates of subjects: \n", "bleu_score": null, "meta": null } ] } ]
null
2bhon2
why do most honour killings involve murdering the victim? why not kill the rapist instead?
[ { "answer": "In such cultures women are viewed as property, to be bought, sold, or traded. The honor killing is in retribution for the perceived dishonor of allowing themselves to be raped, as it damages or destroys their value to their male owner.\n\nIt is fucked up.", "provenance": null }, { "answer": "If someone smashes the windows in your car, takes a shit in it, slashes the tires, you would get a new car, since it was just property damage. Maybe you'll find the dude who did this but either way that car is useless. \n\nMost of these cultures view women as property so not only should it be replaced but it's broken goods, so broken you have to set the car on fire so your neighbors don't give you shit for having a smashed up wreck in your driveway. ", "provenance": null }, { "answer": "Because it's not about the act, or the property being damaged, or the people involved. \n\nIt's about maintaining family honour, and removing the mark against the family. The girl doesn't factor into it at all, because she was supposed to protect herself, and the family was supposed to help her do that. They failed, and so it has brought shame ~~too~~ to the family. To rectify, they destroy the evidence of shame to remove the mark on the family. \n\nMost Muslims do not believe in honour killings, by the way, nor do any sects therein publicly accept them as part of their faith.", "provenance": null }, { "answer": "Because even if you are the victim and you got raped, you still had sex outside of (or before) your marriage, which means you dishonored your family. This dishonor sticks to the family, and you, for the rest of your lifetime. Thus they significantly shorten your lifetime.\n\nAlso because the people that thought up these religious rules were like superhigh on acid.", "provenance": null }, { "answer": "The real issues here are the twin concepts of dishonor and atonement. \n\nWhat is dishonor? Dishonor is a state of extremely depreciated social value within your community. 
Each society treats their 'dishonored' differently. Some societies have legal and social rules to protect their dishonored from too much abuse, while other societies essentially revoke their rights and encourage everyone to treat them as an open target. \n\nWho determines what is dishonorable in a society? Well, every member of a society who expresses their opinions on dishonor 'votes' for what their society defines as dishonorable. When enough of a society are in agreement, their beliefs gain momentum and trample any competing beliefs to establish a cultural norm that is difficult to change. Different societies come up with widely different beliefs about dishonor.\n\nAtonement is an act that a society has collectively agreed will alleviate or remove someone's status of dishonor. Societies with especially draconian beliefs about dishonor often develop strong concepts of atonement. Different societies come up with widely different beliefs about atonement.\n\nCultures with honor killings tend to have a couple things in common:\n* they collectively treat their dishonored very very poorly\n* dishonor is transmissible through association. A dishonored person automatically passes their dishonor onto their family.\n* women are poorly protected in both legal rules and social rules.\n\nAs for the ELI5, why do most honor killings involve murdering the victim? Why not kill the rapist instead?\n\nThe areas where honor killings developed have well established legal rules and social rules that make it very hard for women to pursue justice against male perpetrators. These societies have long-held and sacred beliefs that men are inherently honorable and women aren't, and they have written their laws as if it were a fact that women were inherently untrustworthy. Because men are deemed to be more honorable than women, any evidence they produce in court will be weighted as more significant. The evidence demanded from women is overwhelming and often impossible to supply. 
This makes it almost impossible for women to get justice against men in the courts. On top of that, because society views women as being innately less honorable than men, the general population is quick to condemn any woman who visibly seeks justice in the courts. They are almost automatically assumed to be a liar who is seeking to ruin the reputation of an innocent man, and they will be treated as such. This makes it painful for women to even pursue the justice that they probably won't get.\n\nOn top of all this, women in these societies aren't really considered free people, they are considered property of men and are owned by either a father or a husband. Rape, for all intents and purposes, is treated as a property crime in these cultures. The victim of rape isn't the woman, it is the man who owns her. A sexually impure woman is considered dishonored in these cultures, which ultimately means she has lost her trade value and will be treated worse by society. And if that weren't enough injustice, these cultures also believe that her dishonor is transmitted directly into her family. Her father suffers from her dishonor, her husband, her children.\n\nKnowing all this, imagine your daughter has been raped. You know that you can't get justice in the courts, and that if the word gets out both her and yourself will be dishonored. The obvious decision is to hush it all up, but it is too late, your daughter already confided in her friends and now the word is out on the street. People are starting to talk about your liar daughter, calling her a whore and saying that she probably tempted that nice boy to destroy his life. The word gets out further and you are now in the discussion as the disgraced father of a lying whore. Suddenly no one seems to be buying anything from your shop and none of your friends are calling to invite you out to dinner. You can't go after the man, he is utterly protected by the law, besides all of this is probably because of her flirtations. 
How do you even know she was even raped? She could have made it all up. For all you know she might be a lying whore who would destroy your life for a fast fuck, then come to you afterwards with a fake story and crocodile tears in her eyes. Everyone knows that women can't be trusted, hell the whole of society is grounded on the knowledge that you can't trust women. This wretched whore has made a mockery of your trust and humiliated you in front of the whole town. This dishonor is all her fault, but only you have the power to atone for it. Your society has collectively agreed that you can atone for this dishonor by showing the strength and courage to destroy the thing that ultimately dishonored you; your daughter. \n\ntl;dr Daughters are replaceable in exactly the way that honor isn't.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "22379014", "title": "Honour killing in Pakistan", "section": "Section::::Background.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 443, "text": "Honour killing is an act of murder, in which a person is killed for his or her actual or perceived immoral behavior. Such \"immoral behavior\" may take the form of alleged marital infidelity, refusal to submit to an arranged marriage, demanding a divorce, perceived flirtatious behaviour and being raped. 
Suspicion and accusations alone are many times enough to defile a family's honour and therefore enough to warrant the killing of the woman.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "80332", "title": "Human sacrifice", "section": "Section::::Ritual murder.\n", "start_paragraph_id": 158, "start_character": 0, "end_paragraph_id": 158, "end_character": 261, "text": "Ritual killings perpetrated by individuals or small groups within a society that denounces them as simple murder are difficult to classify as either \"human sacrifice\" or mere pathological homicide because they lack the societal integration of sacrifice proper.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23742139", "title": "Shafia family murders", "section": "Section::::Media coverage of the Shafia murder trial.:\"Montreal Gazette\".\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 617, "text": "The \"Montreal Gazette\" published a column in which it said that labelling the murders as honour killing is a mistake because domestic violence against women is ubiquitous and framing it into a particular category would mean distancing oneself from a crime that is all too common. The authors argue that premeditation is put forth as a core component to differentiate honour killings from other types of murders, such as crimes of convenience or crimes of passion. However, recent studies indicate that premeditation is as much a component in other cases of domestic violence and murder as it is in \"honour killings.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9437868", "title": "Honor killing", "section": "Section::::By region.:South Asia.:India.\n", "start_paragraph_id": 196, "start_character": 0, "end_paragraph_id": 196, "end_character": 440, "text": "Honor killings take place in Rajasthan, too. In June 2012, a man chopped off his 20-year-old daughter's head with a sword in Rajasthan after learning that she was dating men. 
According to police officer, \"Omkar Singh told the police that his daughter Manju had relations with several men. He had asked her to mend her ways several times in the past. However, she did not pay heed. Out of pure rage, he chopped off her head with the sword\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "42199068", "title": "Violence against women in India", "section": "Section::::Murders.:Honor killings.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 649, "text": "An honor killing is a murder of a family member who has been considered to have brought dishonour and shame upon the family. Examples of reasons for honor killings include the refusal to enter an arranged marriage, committing adultery, choosing a partner that the family disapproves of, and becoming a victim of rape.Village caste councils or \"khap panchayats\" in certain regions of India regularly pass death sentences for persons who do not follow their diktats on caste or gotra. The volunteer group known as Love Commandos from Delhi, runs a helpline dedicated to rescuing couples who are afraid of violence for marrying outside of caste lines.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35613274", "title": "Domestic violence in India", "section": "Section::::Forms.:Honor killing.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 387, "text": "An honour killing is the practice wherein an individual is killed by one or more family member(s), because he or she is believed to have brought shame on the family. 
The shame may range from refusing to enter an arranged marriage, having sex outside marriage, being in a relationship that is disapproved by the family, starting a divorce proceeding, or engaging in homosexual relations.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31564756", "title": "Human rights in Liberia", "section": "Section::::Basic rights.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 298, "text": "Ritualistic killings, which involve the removal from the victim's corpse of body parts used in tribal rituals, and which are often described in police reports as accidents or suicides, are a common occurrence. Protests against these killings are also common, and sometimes lead to injury and death\n", "bleu_score": null, "meta": null } ] } ]
null