Columns:
id: string (10 characters)
question: string (18 to 294 characters)
comment: string (28 to 6.89k characters)
passages: sequence
presuppositions: sequence
corrections: sequence
labels: sequence
raw_presuppositions: sequence
raw_labels: sequence
raw_corrections: sequence
2018-12577
Why do governments and companies destroy hard drives for security instead of just writing over all of the data 100% and why does it take multiple passes to make sure the data is gone?
It takes a lot of time to overwrite drives like that; it consumes electricity and ties up a computer and an employee that could be doing something else. It also requires that the drive be *working properly*, and that's not a sure bet with old equipment. And why not destroy it? You could get some money by selling it, but then you need to *take time* (assign a paid employee) to the task of selling those drives in an attempt to make some money back. Seems counterproductive. It's sometimes claimed that after a drive is overwritten, the "strength" of the magnetization can be used to find out what was written before. But my understanding is that this doesn't work, *or at least hasn't been demonstrated* in practice, and that writing random data (rather than all 0s) more than once would seriously hinder this process, if it was ever real to begin with. I think the main point is economics, speed, and ease.
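To make the overwrite-plus-verify idea concrete, here is a minimal Python sketch of a single-pass random overwrite followed by a verify pass. It is a toy illustration of the general technique, not any particular sanitization tool: the file name, seed, and chunk size are invented for the example, and a real eraser would also need to reach hidden areas such as the HPA/DCO and remapped sectors, which this sketch cannot.

```python
import os
import random

CHUNK = 1 << 20  # write/verify in 1 MiB chunks

def overwrite_and_verify(path: str, seed: int = 0xC0FFEE) -> bool:
    """One pass of pseudo-random data over every byte of `path`, then a
    verify pass that regenerates the same stream and compares. Seeding the
    PRNG means the pattern never has to be held in memory or stored."""
    size = os.path.getsize(path)

    # Write pass.
    rng = random.Random(seed)
    with open(path, "r+b") as f:
        remaining = size
        while remaining:
            n = min(CHUNK, remaining)
            f.write(rng.randbytes(n))
            remaining -= n
        f.flush()
        os.fsync(f.fileno())  # push the data past the OS cache

    # Verify pass: re-seed, regenerate the same stream, compare chunk by chunk.
    rng = random.Random(seed)
    with open(path, "rb") as f:
        remaining = size
        while remaining:
            n = min(CHUNK, remaining)
            if f.read(n) != rng.randbytes(n):
                return False
            remaining -= n
    return True

if __name__ == "__main__":
    # Demo on an ordinary scratch file. A real tool would open the raw
    # device (e.g. /dev/sdX) and get its capacity from the OS instead.
    with open("scratch.bin", "wb") as f:
        f.write(os.urandom(3 * CHUNK + 123))
    print(overwrite_and_verify("scratch.bin"))  # True if every byte matches
```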
[ "BULLET::::- Further analysis by Wright et al. seems to also indicate that one overwrite is all that is generally required.\n\nSection::::E-waste and information security.\n\nE-waste presents a potential security threat to individuals and exporting countries. Hard drives that are not properly erased before the computer is disposed of can be reopened, exposing sensitive information. Credit card numbers, private financial data, account information and records of online transactions can be accessed by most willing individuals. Organized criminals in Ghana commonly search the drives for information to use in local scams.\n", "One challenge with an overwrite is that some areas of the disk may be inaccessible, due to media degradation or other errors. Software overwrite may also be problematic in high-security environments which require stronger controls on data commingling than can be provided by the software in use. The use of advanced storage technologies may also make file-based overwrite ineffective (see the discussion below under \"Complications\").\n", "Data erasure offers an alternative to physical destruction and degaussing for secure removal of all the disk data. Physical destruction and degaussing destroy the digital media, requiring disposal and contributing to electronic waste while negatively impacting the carbon footprint of individuals and companies. Hard drives are nearly 100% recyclable and can be collected at no charge from a variety of hard drive recyclers after they have been sanitized.\n\nSection::::Importance.:Limitations.\n", "Section::::Differentiators.:Full disk overwriting.\n\nWhile there are many overwriting programs, only those capable of complete data erasure offer full security by destroying the data on all areas of a hard drive. Disk overwriting programs that cannot access the entire hard drive, including hidden/locked areas like the host protected area (HPA), device configuration overlay (DCO), and remapped sectors, perform an incomplete erasure, leaving some of the data intact. By accessing the entire hard drive, data erasure eliminates the risk of data remanence.\n", "Software-based data erasure uses a disk accessible application to write a combination of ones, zeroes and any other alpha numeric character also known as the \"mask\" onto each hard disk drive sector. The level of security when using software data destruction tools are increased dramatically by pre-testing hard drives for sector abnormalities and ensuring that the drive is 100% in working order. The number of wipes has become obsolete with the more recent inclusion of a \"verify pass\" which scans all sectors of the disk and checks against what character should be there i.e.; 1 Pass of AA has to fill every writable sector of the hard disk. This makes any more than 1 Pass an unnecessary and certainly a more damaging act especially as drives have passed the 1TB mark.\n", "Section::::Importance.\n\nInformation technology assets commonly hold large volumes of confidential data. Social security numbers, credit card numbers, bank details, medical history and classified information are often stored on computer hard drives or servers. These can inadvertently or intentionally make their way onto other media such as printers, USB, flash, Zip, Jaz, and REV drives.\n\nSection::::Importance.:Data breach.\n", "Section::::Complications.:Advanced storage systems.\n\nData storage systems with more sophisticated features may make overwrite ineffective, especially on a per-file basis. 
\n", "One study done in 2016 had researchers drop 297 USB drives around the campus of the University of Illinois. The drives contained files on them that linked to webpages owned by the researchers. The researchers were able to see how many of the drives had files on them opened, but not how many were inserted into a computer without having a file opened. Of the 297 drives that were dropped, 290 (98%) of them were picked up and 135 (45%) of them \"called home\".\n\nSection::::Other Concepts.:Quid pro quo.\n\nQuid pro quo means \"something for something\":\n", "If data erasure does not occur when a disk is retired or lost, an organization or user faces a possibility that the data will be stolen and compromised, leading to identity theft, loss of corporate reputation, threats to regulatory compliance and financial impacts. Companies spend large amounts of money to make sure their data is erased when they discard disks. High-profile incidents of data theft include:\n\nBULLET::::- CardSystems Solutions (2005-06-19): Credit card breach exposes 40 million accounts.\n\nBULLET::::- Lifeblood (2008-02-13): Missing laptops contain personal information including dates of birth and some Social Security numbers of 321,000.\n", "Data erasure through overwriting only works on hard drives that are functioning and writing to all sectors. Bad sectors cannot usually be overwritten, but may contain recoverable information. Bad sectors, however, may be invisible to the host system and thus to the erasing software. Disk encryption before use prevents this problem. Software-driven data erasure could also be compromised by malicious code.\n\nSection::::Differentiators.\n", "Government contracts have been discovered on hard drives found in Agbogbloshie, Ghana. Multimillion-dollar agreements from United States security institutions such as the Defense Intelligence Agency (DIA), the Transportation Security Administration and Homeland Security have all resurfaced in Agbogbloshie.\n\nSection::::Data security.:Reasons to destroy and recycle securely.\n", "Storage media may have areas which become inaccessible by normal means. For example, magnetic disks may develop new bad sectors after data has been written, and tapes require inter-record gaps. Modern hard disks often feature reallocation of marginal sectors or tracks, automated in a way that the operating system would not need to work with it. The problem is especially significant in solid state drives (SSDs) that rely on relatively large relocated bad block tables. Attempts to counter data remanence by overwriting may not be successful in such situations, as data remnants may persist in such nominally inaccessible areas.\n", "BULLET::::- Enhanced overwriting involves three passes; each sector is overwritten first with 1s, then with 0s, and then with randomly generated 1s and 0s.\n\nRegardless of which level is used, verification is needed to ensure that overwriting was successful.\n\nApart from overwriting, other methods could be used, such as degaussing, or physical destruction of the media. With some inexpensive media, destruction and replacement may be cheaper than sanitisation followed by reuse. ATA Secure Erase is not approved. Different methods apply to different media, ranging from paper to CDs to mobile phones.\n", "Daniel Feenberg, an economist at the private National Bureau of Economic Research, claims that the chances of overwritten data being recovered from a modern hard drive amount to \"urban legend\". 
He also points to the \"-minute gap\" Rose Mary Woods created on a tape of Richard Nixon discussing the Watergate break-in. Erased information in the gap has not been recovered, and Feenberg claims doing so would be an easy task compared to recovery of a modern high density digital signal.\n", "Literally shredding paper documents prior to their disposal is a commonplace physical information security control, intended to prevent the information content - if not the media - from falling into the wrong hands. Digital data can also be shredded in a figurative sense, either by being strongly encrypted or by being repeatedly overwritten until there is no realistic probability of the information ever being retrieved, even using sophisticated forensic analysis: this too constitutes a physical information security control since the purged computer storage media can be freely discarded or sold without compromising the original information content. The two techniques may be combined in high-security situations, where digital shredding of the data content is followed by physical shredding and incineration to destroy the storage media.\n", "The problem of data proliferation is affecting all areas of commerce as the result of the availability of relatively inexpensive data storage devices. This has made it very easy to dump data into secondary storage immediately after its window of usability has passed. This masks problems that could gravely affect the profitability of businesses and the efficient functioning of health services, police and security forces, local and national governments, and many other types of organizations. Data proliferation is problematic for several reasons:\n", "Threats to data in use can come in the form of cold boot attacks, malicious hardware devices, [rootkits] and bootkits. The monkey macro Master report generator partially addresses these concerns, through advanced VBA and Pivot table slicers.\n\nSection::::Concerns.:Full memory encryption.\n\nEncryption, which prevents data visibility in the event of its unauthorized access or theft, is commonly used to protect Data in Motion and Data at Rest and increasingly recognized as an optimal method for protecting Data in Use.\n", "As of November 2007, the United States Department of Defense considers overwriting acceptable for clearing magnetic media within the same security area/zone, but not as a sanitization method. Only degaussing or physical destruction is acceptable for the latter.\n", "Data on floppy disks can sometimes be recovered by forensic analysis even after the disks have been overwritten once with zeros (or random zeros and ones). This is not the case with modern hard drives:\n\nBULLET::::- According to the 2014 NIST Special Publication 800-88 Rev. 1, Section 2.4 (p. 7): \"For storage devices containing magnetic media, a single overwrite pass with a fixed pattern such as binary zeros typically hinders recovery of data even if state of the art laboratory techniques are applied to attempt to retrieve the data.\" It recommends cryptographic erase as a more general mechanism.\n", "Since hard disk drives are among the components of a server computer that are the most likely to fail, there has always been demand for the ability to replace a faulty drive without having to shut down the whole system. This technique is called hot-swapping and is one of the main motivations behind the development of SCA. 
In connection with RAID, for example, this allows for seamless replacement of failed drives.\n", "Some government agencies (e.g., US NSA) require that personnel physically pulverize discarded disk drives and, in some cases, treat them with chemical corrosives. This practice is not widespread outside government, however. Garfinkel and Shelat (2003) analyzed 158 second-hand hard drives they acquired at garage sales and the like, and found that less than 10% had been sufficiently sanitized. The others contained a wide variety of readable personal and confidential information. See data remanence. \n", "The inability to \"read\" some sectors is not always an indication that a drive is about to fail. One way that unreadable sectors may be created, even when the drive is functioning within specification, is through a sudden power failure while the drive is writing. Also, even if the physical disk is damaged at one location, such that a certain sector is unreadable, the disk may be able to use spare space to replace the bad area, so that the sector can be overwritten.\n", "The data must be accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that data which are inaccurate or incomplete, having regard to the purposes for which they were collected or for which they are further processed, are erased or rectified;\n", "Most disk, disk controller and higher-level systems are subject to a slight chance of unrecoverable failure. With ever-growing disk capacities, file sizes, and increases in the amount of data stored on a disk, the likelihood of the occurrence of data decay and other forms of uncorrected and undetected data corruption increases.\n", "BULLET::::- Because of the very nature of RAID1, both disks will be subjected to the same workload and very closely similar access patterns, stressing them in the same way.\n\nAlso, if the events of failure of two components are maximally statistically dependent, the probability of the joint failure of both is identical to the probability of failure of them individually. In such a case, the advantages of redundancy are negated. Strategies for the avoidance of common mode failures include keeping redundant components physically isolated.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-02787
Unintended holes in airplanes are bad. But people can safely stand in open cargo planes and helicopters?
Pressurization. A hole in a pressurized cabin at 38,000 feet will cause massive damage as the pressure tries to equalize as quickly as possible. An aircraft that flies with its doors open from the ground up never pressurizes; it stays at the same pressure as the outside air for the entire flight, so there is no pressure difference to force anything through the opening.
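A back-of-the-envelope calculation makes the difference concrete. The sketch below uses the standard-atmosphere barometric formula (strictly valid to about 11 km, close enough to 38,000 ft for a rough estimate); the 8,000 ft cabin altitude is a typical airliner setting, and the 1 m² panel area is an arbitrary illustrative choice.

```python
def pressure_pa(altitude_m: float) -> float:
    """Rough ISA troposphere pressure model; fine for a ballpark figure."""
    return 101_325 * (1 - 2.25577e-5 * altitude_m) ** 5.25588

FT = 0.3048  # metres per foot
cabin = pressure_pa(8_000 * FT)     # cabin held at ~8,000 ft equivalent
outside = pressure_pa(38_000 * FT)  # ambient pressure at cruise
dp = cabin - outside                # differential across the hull

area = 1.0  # m^2, roughly a door-sized panel
print(f"delta-p ~ {dp / 1000:.1f} kPa")
print(f"force on {area} m^2 ~ {dp * area / 1000:.0f} kN "
      f"(~{dp * area / 9810:.1f} tonnes-force)")
# An unpressurized cargo plane or helicopter with the door open has
# dp ~ 0 at every altitude, so there is nothing pushing anyone out.
```

That works out to roughly 55 kPa, or on the order of 5.6 tonnes of force on a door-sized panel, which is why a breach in a pressurized cabin is violent while an open door on an unpressurized aircraft is not.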
[ "BULLET::::- Underground hangars: Several air forces have used tunnels dug into a mountainside as underground hangars.\n\nSection::::Alternatives.:Sweden - Bas 60 & Bas 90.\n", "Reason's model, commonly referred to as the Swiss cheese model, was based on Reason's approach that all organizations should work together to ensure a safe and efficient operation. From the pilot's perspective, in order to maintain a safe flight operation, all human and mechanical elements must co-operate effectively in the system. In Reason's model, the holes represent weakness or failure. These holes will not lead to accident directly, because of the existence of defense layers. However, once all the holes line up, an accident will occur.\n", "BULLET::::- Weapons, including nuclear weapons, can be stored in the HAS, sometimes in a vault under the aircraft; e.g., the United States Air Force Weapons Storage and Security System (WS3).\n\nSection::::Disadvantages.\n\nBULLET::::- They are in a fixed known position.\n\nBULLET::::- Hardened shelters are expensive.\n\nBULLET::::- Hardened shelters are usually too small to easily accommodate large aircraft such as strategic transport aircraft and large surveillance aircraft.\n", "In 2004, the TV show \"MythBusters\" examined if explosive decompression occurs when a bullet is fired through the fuselage of an airplane by informally using a pressurised aircraft and several scale tests. The results of these tests suggested that the fuselage design does not allow people to be blown out. Professional pilot David Lombardo states that a bullet hole would have no perceived effect on cabin pressure as the hole would be smaller than the opening of the aircraft's outflow valve. NASA scientist Geoffrey A. Landis points out though that the impact depends on the size of the hole, which can be expanded by debris that is blown through it. Landis went on to say that \"it would take about 100 seconds for pressure to equalise through a roughly hole in the fuselage of a Boeing 747.\" He then stated that anyone sitting next to the hole would have half a ton of force pulling them in the direction of it. On April 17, 2018 a seat-belted woman on Southwest Airlines Flight 1380 was partially blown through an airplane window that had been broken due to debris from an engine failure. Although the other passengers were able to pull her back inside, she later died from her injuries.\n", "The training goes further with rapid decompression profiles, where the chamber is very rapidly ascended from 8,000 ft to 22,000 ft within 10 to 20 seconds, to simulate the loss of a cabin door. For fighter pilots this is done from an altitude of 25,000 ft to 43,000 ft within 5 seconds which simulates the loss of a fighter aircraft's canopy.\n", "Doors which lead from interior, pressurized, sections of an aircraft to exterior or unpressurized areas can pose extreme risk if they are inadvertently opened during flight. This can be mitigated by having doors that open inwardly and are designed to be forced into their door frames by the internal cabin pressure – most cabin doors are of this type. However, an outward opening door is often advantageous for cargo doors to maximise available space, and these need to be secured by hefty locking mechanisms to overcome internal pressure.\n", "Section::::Aviation safety hazards.:Human factors.:Other human factors.\n\nHuman factors incidents are not limited to errors by pilots. 
Failure to close a cargo door properly on Turkish Airlines Flight 981 in 1974 caused the loss of the aircraft – however, design of the cargo door latch was also a major factor in the accident. In the case of Japan Airlines Flight 123 in 1985, improper repair of previous damage led to explosive decompression of the cabin, which in turn destroyed the vertical stabilizer and damaged all four hydraulic systems which powered all the flight controls.\n\nSection::::Aviation safety hazards.:Human factors.:Controlled flight into terrain.\n", "BULLET::::- Hailemedhin Abera Tegegn, the co-pilot of Ethiopian Airlines Flight 702 – a Boeing 767-3BGER with 202 people on board bound from Addis Ababa, Ethiopia, to Rome, Italy – locks the cockpit door while the pilot is out of the cockpit to use the toilet and flies the plane to Geneva, Switzerland, where he uses a rope to climb from the cockpit window to the runway. He surrenders to authorities and requests asylum.\n", "BULLET::::- On 7 February 2018, a Dana Air flight landed in Abuja and was taxiing on the runway when one of the emergency exit doors fell off. No casualties resulted from the incident. Passengers claimed that the door rattled throughout the flight. A spokesperson for the airline, however claimed that the door could not have fallen off without a conscious effort by a passenger. The Nigerian Civil Aviation Authority instituted an investigation into the incident to determine what exactly happened.\n", "Cabin doors are designed to make it nearly impossible to lose pressurization through opening a cabin door in flight, either accidentally or intentionally. The plug door design ensures that when the pressure inside the cabin exceeds the pressure outside the doors are forced shut and will not open until the pressure is equalised. Cabin doors, including the emergency exits, but not all cargo doors, open inwards, or must first be pulled inwards and then rotated before they can be pushed out through the door frame because at least one dimension of the door is larger than the door frame. Pressurization prevented the doors of Saudia Flight 163 from being opened on the ground after the aircraft made a successful emergency landing, resulting in the deaths of all 287 passengers and 14 crew members from fire and smoke.\n", "The cartoon series \"Channel Umptee-3\" features a character named Holey Moley, who carries several portable holes and uses at least one in every episode.\n\nIn the Disney series The New Adventures of Winnie-the-Pooh, Gopher searches for a hole blown off on an overly windy day.\n\nSection::::Literature.\n\nIn Rajiv Joseph's play, \"Guards at the Taj\", one of the characters, Humayun, invents a \"transportable hole\". Humayun describes it:\n\nIn the novelization of \"E.T. the Extra-Terrestrial\", Elliott uses a portable hole when the lead characters are playing Dungeons & Dragons.\n", "In February 2010, a cargo door came unlatched on an airborne Alpine Air Express Beech 99 carrying mail from Billings, to Kalispell, Montana, at about 1:30 a.m. The plane was about 40 miles north of Lewistown, Montana, when the pilot noted a light on the instrument panel had come on, indicating the door was unlatched. Because there was about 3,000 pounds of mail cargo in between the pilot and the door, he couldn’t close it. 
Because the door is located below the plane’s airstream, even when open it wouldn’t compromise the ability to fly and land the plane.\n", "With regard to the obstructions that the airplane collided with during the accident sequence, the NTSB ordered the modification or replacement of \"all pump houses adjacent to Runway 13/31 so that they are not obstructions to airplanes\". They also ordered a study on the \"feasibility of building a frangible ILS antenna array for LaGuardia Airport\" Further, they ordered a review of Fokker F28-4000 passenger safety briefing cards \"to ensure that they clearly and accurately depict the operation of the two types of forward cabin doors in both their normal and emergency modes and that they describe clearly and accurately how to remove the overwing emergency exit and cover.\"\n", "BULLET::::- On August 17, 1996, a U.S. Air Force C-130 Hercules aircraft assigned to the 317th Airlift Group at Dyess AFB, Texas was unable to clear Sheep Mountain, crashing into it and killing all nine aboard. The aircraft was supporting the United States Secret Service as part of a POTUS visit to the area.\n\nBULLET::::- On December 20, 2000, actress Sandra Bullock survived the crash of a chartered business jet at Jackson Hole Airport. The aircraft hit a snowbank instead of the runway, shearing off the nose gear and nose cone and damaging the wings.\n", "BULLET::::- March 10 – The Pan American World Airways Boeing 747-121 \"Clipper Ocean Pearl\", operating as Flight 125 with 245 people on board, experiences pressurization problems during climbout from London Heathrow Airport in the United Kingdom, and returns to the airport. An investigation finds that latching problems had allowed the forward cargo door to come ajar. A similar door problem will lead to a fatal accident aboard United Airlines Flight 811 in February 1989.\n", "Prior to 1996, approximately 6,000 large commercial transport airplanes were type certified to fly up to , without being required to meet special conditions related to flight at high altitude. In 1996, the FAA adopted Amendment 25–87, which imposed additional high-altitude cabin-pressure specifications, for new designs of aircraft types. For aircraft certified to operate above 25,000 feet (FL 250; 7,600 m), it \"must be designed so that occupants will not be exposed to cabin pressure altitudes in excess of after any probable failure condition in the pressurization system.\" In the event of a decompression which results from \"any failure condition not shown to be extremely improbable,\" the aircraft must be designed so that occupants will not be exposed to a cabin altitude exceeding for more than 2 minutes, nor exceeding an altitude of at any time. In practice, that new FAR amendment imposes an operational ceiling of 40,000 feet on the majority of newly designed commercial aircraft.\n", "BULLET::::- On January 6, 2008, Servant Air Flight 109 crashed just short of Kodiak Airport shortly after take off, en route to Homer, Alaska. Of the 9 passengers and the pilot aboard the Piper Navajo Chieftain, there were 4 survivors. According to the NTSB, the failure of the nose baggage door latching mechanism resulted in an inadvertent opening of the nose baggage door in flight. 
Contributing to the accident were the lack of information and guidance available to the operator and pilot regarding procedures to follow should a baggage door open in flight and an inadvertent aerodynamic stall.\n", "Before 1996, approximately 6,000 large commercial transport airplanes were assigned a type certificate to fly up to without having to meet high-altitude special conditions. In 1996, the FAA adopted Amendment 25-87, which imposed additional high-altitude cabin pressure specifications for new-type aircraft designs. Aircraft certified to operate above \"must be designed so that occupants will not be exposed to cabin pressure altitudes in excess of after any probable failure condition in the pressurization system\". In the event of a decompression that results from \"any failure condition not shown to be extremely improbable\", the plane must be designed such that occupants will not be exposed to a cabin altitude exceeding for more than 2 minutes, nor to an altitude exceeding at any time. In practice, that new Federal Aviation Regulations amendment imposes an operational ceiling of on the majority of newly designed commercial aircraft. Aircraft manufacturers can apply for a relaxation of this rule if the circumstances warrant it. In 2004, Airbus acquired an FAA exemption to allow the cabin altitude of the A380 to reach in the event of a decompression incident and to exceed for one minute. This allows the A380 to operate at a higher altitude than other newly designed civilian aircraft.\n", "Outdoor vertical wind tunnels can either be portable or stationary. Portable vertical wind tunnels are often used in movies and demonstrations, and are often rented for large events such as conventions and state fairs. Portable units offer a dramatic effect for the flying person and the spectators, because there are no walls around the flight area. These vertical wind tunnels allow people to fly with a full or partial outdoor/sky view. Outdoor vertical wind tunnels may also have walls or netting around the wind column, to keep beginner tunnel flyers from falling out of the tunnel.\n", "Pressurization presents design and construction challenges to maintain the structural integrity and sealing of the cabin and hull and to prevent rapid decompression. Some of the consequences include small round windows, doors that open inwards and are larger than the door hole, and an emergency oxygen system.\n", "Section::::Incidents and accidents.:Cargo door problem and other major accidents.\n\nThe DC-10 was designed with cargo doors that opened outward instead of conventional inward-opening \"plug-type\" doors. Using outward-opening doors allowed the DC-10's cargo area to be completely filled, since the door was not occupying usable interior space when open. To secure the door against the outward force from the pressurization of the fuselage at high altitudes, outward-opening doors must use heavy locking mechanisms. In the event of a door lock malfunction, there is great potential for explosive decompression.\n\nSection::::Incidents and accidents.:Cargo door problem and other major accidents.:American Airlines Flight 96.\n", "BULLET::::- 7 March:An Indian Air Force Antonov An-32 crashes upon landing in New Delhi, India during poor weather. 
All 19 people on board are killed.\n\nBULLET::::- 27 March:\n\nBULLET::::- 7 April:A Boeing KC-135R Stratotanker, \"57-1418\", of the 153rd Air Refueling Squadron, Mississippi Air National Guard, is written off while undergoing maintenance at the Oklahoma ALC, Tinker AFB, Oklahoma, when the cabin is over-pressurized during a test and ruptures, tearing a 35 foot (10.6 m) hole in the aft fuselage, allowing tail section to drop to the ground.\n", "Section::::By country.\n\nSection::::By country.:Sweden.\n", "Jackson Hole Airport is one of 16 airports that uses private screeners under contract with the Transportation Security Administration's Screening Partnership Program. Security screeners are employed by the Jackson Hole Airport Board rather than the TSA.\n\nThe largest aircraft seen regularly is the Boeing 757-200 operated by United Airlines and Delta Air Lines. Other aircraft typically seen include the Airbus A319 and A320, Boeing 737-700, Embraer 175, and the Bombardier CRJ-700 regional jet. Due to its high altitude and short runway, Jackson Hole Airport does not typically see stretched versions of aircraft such as the Airbus A321 or Boeing 737-900ER.\n", "BULLET::::- The worker can access from a specialist type of mobile elevating work platform (MEWP) termed an insulating aerial device (IAD) which has a boom of insulating material and which all conductive parts at the platform end are bonded together. There are other requirements for safe working such as gradient control devices, a means of preventing a vacuum in the hydraulic lines, etc.\n\nBULLET::::- The worker can stand on an insulating ladder which is maneuvered to the line by means of non-conductive rope.\n\nBULLET::::- The worker is lowered from a helicopter and transfers himself to the line.\n" ]
[ "Holes in airplanes are bad." ]
[ "Holes in airplanes where there is a pressure difference between in the inside and outside are bad." ]
[ "false presupposition" ]
[ "Holes in airplanes are bad." ]
[ "false presupposition" ]
[ "Holes in airplanes where there is a pressure difference between in the inside and outside are bad." ]
2018-03964
Why can't astronomers say with certainty if a specific asteroid will or won't strike the earth?
Because of the Sun, Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, Pluto, the asteroid belt, the Kuiper Belt, and the Oort cloud. Most people assume that asteroids travel in a straight line. But what many of us forget is that the gravity of nearby planets affects the trajectory of an asteroid. Even the pull of a body as small as the Moon can measurably alter the direction and speed of an asteroid. Nearby stars also have a minuscule, but still measurable, gravitational effect. These effects might only pull an asteroid a couple of millimeters in another direction, but over time, those changes add up. Humans don't have enough computational power (yet) to compute all these different unknowns precisely. Predicting the trajectory of an asteroid is an instance of the [N-body problem]( URL_0 ).
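To show what this looks like numerically, here is a toy planar N-body integrator (a leapfrog scheme) with just the Sun, Jupiter, and a test asteroid; every starting value is invented for the demo. Real ephemeris work tracks many more bodies with higher-order integrators plus relativistic and non-gravitational effects, but even this toy shows how a tiny uncertainty in an asteroid's starting position grows into a large position difference.

```python
import math

G = 4 * math.pi ** 2  # units: AU, years, solar masses, so G*M_sun = 4*pi^2

def accelerations(pos, masses):
    """Pairwise Newtonian gravity on each body."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def run(asteroid_r, years=20.0, dt=0.001):
    """Sun, Jupiter (~1/1050 solar mass at 5.2 AU), massless asteroid,
    all started on circular orbits. Returns the asteroid's final position."""
    masses = [1.0, 9.54e-4, 0.0]
    pos = [[0.0, 0.0], [5.2, 0.0], [asteroid_r, 0.0]]
    vel = [[0.0, 0.0],
           [0.0, math.sqrt(G / 5.2)],
           [0.0, math.sqrt(G / asteroid_r)]]
    acc = accelerations(pos, masses)
    for _ in range(int(years / dt)):
        for b in range(3):  # leapfrog: half kick, then drift...
            vel[b][0] += 0.5 * dt * acc[b][0]
            vel[b][1] += 0.5 * dt * acc[b][1]
            pos[b][0] += dt * vel[b][0]
            pos[b][1] += dt * vel[b][1]
        acc = accelerations(pos, masses)
        for b in range(3):  # ...then the second half kick
            vel[b][0] += 0.5 * dt * acc[b][0]
            vel[b][1] += 0.5 * dt * acc[b][1]
    return pos[2]

a = run(3.0)         # nominal asteroid orbit at 3 AU
b = run(3.0 + 1e-7)  # same asteroid, starting position off by ~15 km
print("separation after 20 years (AU):",
      math.hypot(a[0] - b[0], a[1] - b[1]))  # grows to hundreds of km
```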
[ "For asteroids that are actually on track to hit Earth the predicted probability of impact continues to increase as more observations are made. This initially very similar pattern makes it difficult to quickly differentiate between asteroids which will be millions of kilometres from Earth and those which will actually hit it. This in turn makes it difficult to decide when to raise an alarm as gaining more certainty takes time, which reduces the time available to react to a predicted impact. However raising the alarm too soon has the danger of causing a false alarm and creating a Boy Who Cried Wolf effect if the asteroid in fact misses Earth.\n", "For asteroids that are actually on track to hit Earth the predicted probability of impact continues to increase as more observations are made. This similar pattern makes it difficult to differentiate between asteroids that will only come close to Earth and those that will actually hit it. This in turn makes it difficult to decide when to raise an alarm as gaining more certainty takes time, which reduces time available to react to a predicted impact. However raising the alarm too soon has the danger of causing a false alarm and creating a Boy Who Cried Wolf effect if the asteroid in fact misses Earth.\n", "Smaller near-Earth objects are far more numerous (millions) and therefore impact Earth much more often, and the vast majority remain undiscovered. They seldom pass close enough to Earth on a previous approach that they become bright enough to observe, and so most can only be observed on final approach. They therefore cannot usually be cataloged well in advance and can only be warned about, a few weeks to days in advance. This is much too late to deflect them away from Earth, but is enough time to mitigate the consequences of the impact by evacuating and otherwise preparing the affected area.\n", "The existing asteroid surveys have a fairly clear-cut division between 'cataloging surveys' which use larger telescopes to mostly identify larger asteroids well before they come very close to Earth, and 'warning surveys' which use smaller telescopes to mostly look for smaller asteroids on their final approach. Cataloging systems focus on finding larger asteroids years in advance and they scan the sky slowly (of the order of once per month), but deeply. Warning systems focus on scanning the sky relatively quickly (of the order of once per night). They typically cannot detect objects that are as faint as cataloging systems but will not miss an asteroid that brightens for just a few days when it passes very close to Earth. Some systems compromise and scan the sky say once per week.\n", "A few near misses have been predicted, years in advance, with a tiny chance of actually striking Earth. A handful of actual impactors have been detected hours in advance. All were small, struck wilderness or ocean, and hurt nobody. Current systems only detect an arriving object when several factors are just right, mainly the direction of approach relative to the Sun, weather, and phase of the Moon. The result is a low rate of success. Performance is improving as existing systems are upgraded and new ones come on line, but the blind spot issue which the current systems face around the Sun can only be overcome by a dedicated space based system.\n", "Current mechanisms for detecting asteroids on final approach rely on ground based telescopes with wide fields of view. 
Those currently can monitor the sky at most every second night, and therefore miss most of the smaller asteroids which are bright enough to detect for less than two days. Such very small asteroids much more commonly impact Earth than larger ones, but they make little damage. Missing them therefore has limited consequences. Much more importantly, ground-based telescopes are blind to most of the asteroids which impact the day side of the planet and would miss even large ones. These and other problems mean very few impacts are successfully predicted (see §Effectiveness of the current system and §Improving impact prediction).\n", "In December 2004 when Apophis was estimated to have a 2.7% chance of impacting Earth on 13 April 2029, the uncertainty region for this asteroid had shrunk to 83000 km.\n\nSection::::Response to predicted impact.\n\nOnce an impact has been predicted the potential severity needs to be assessed, and a response plan formed. Depending on the time to impact and the predicted severity this may be as simple as giving a warning to citizens. For example, although unpredicted, the 2013 \n", "Based on how well constrained the orbit calculations of identified NEOs are, two scales, the Torino scale and the more complex Palermo scale, rate a risk. Some NEOs have had temporarily positive Torino or Palermo scale ratings after their discovery, but , more precise calculations based on longer observation arcs led to a reduction of the rating to or below 0 in all cases.\n", "Another way to assess the effectiveness of the current system is to look at warning times for asteroids which did not impact Earth, but came reasonably close. Looking at asteroids which came closer than the Moon, the below diagram shows how far in advance of closest approach the asteroids were first detected. Unlike actual asteroid impacts where, by using infrasound sensors, it is possible to assess how many were undetected, there is no ground truth for close approaches. The below chart therefore does not include any statistics for asteroids which were undetected. It can be seen however that the number of detections is increasing as more survey sites come on line (for example ATLAS in 2016 and ZTF in 2018), and that approximately half of the detections are made after the asteroid passes the Earth. \n", "If a more severe impact is predicted, the response may require evacuation of the area, or with sufficient lead time available, an avoidance mission to repel the asteroid. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched.\n\nSection::::Effectiveness of the current system.\n", "The ellipses in the diagram on the right show the predicted position of an example asteroid at closest Earth approach. At first, with only a few asteroid observations, the error ellipse is very large and includes the Earth. Further observations shrink the error ellipse, but it still includes the Earth. This raises the predicted impact probability, since the Earth now covers a larger fraction of the error region. Finally, yet more observations (often radar observations, or discovery of a previous sighting of the same asteroid on archival images) shrink the ellipse revealing that the Earth is outside the error region, and the impact probability is near zero.\n", "Sub-150m impacting asteroids would not cause global damage but are still locally catastrophic. 
They can, by contrast to larger ones, only be detected when they come very close to the Earth, which in most cases only happens during their final approach. Those impacts therefore will always need a constant watch and typically cannot be identified earlier than a few weeks in advance, far too late for interception. According to expert testimony in the United States Congress in 2013, NASA would presently require at least five years of preparation before a mission to intercept an asteroid could be launched. Meeting the asteroid and deflecting it by least the diameter of the Earth after its interception would then needs several additional years.\n", "One final statistic which casts some light on the effectiveness of the current system is the average warning time for an asteroid impact. Based on the few successfully predicted asteroid impacts, the average time between initial detection and impact is currently around 14 hours. Note however that there is some delay between the initial observation of the asteroid, data submission, and the follow up observations and calculations which lead to an impact prediction being made. \n\nSection::::Improving impact prediction.\n", "For larger asteroids ( 100m to 1 km across), prediction is based on cataloging the asteroid, years to centuries before it could impact. This technique is possible as they can be seen from a long distance due to their large size. Their orbits therefore can be measured and any future impacts predicted long before they are on their final approach to Earth. This long period of warning is important as an impact from a 1 km object would cause worldwide damage and a long lead time would be needed to deflect it away from Earth. As of 2018, the inventory is nearly complete for the kilometer-size objects (around 900) which would cause global damage, and approximately one third complete for 140 meter objects (around 8500) which would cause major regional damage.\n", "Larger asteroids can be detected even while far from the Earth, and their orbits can therefore be determined very precisely years in advance of any close approach. Thanks largely to Spaceguard cataloging initiated by a 2005 mandate of the United States Congress to NASA, the inventory of the Near Earth Objects with diameters above 1 kilometer is for instance now 97% complete, while the estimated completeness for 140 meter objects is around one third, and improving. Any impact by one of these known asteroids would be predicted decades to centuries in advance, long enough to consider deflecting them away from Earth. None of them will impact Earth for at least the next century. We are therefore largely safe from kilometer-size impacts for the mid-term future, but sub-km impacts remain a possibility at this point in time.\n", "Follow up observations are important because once a sky survey has reported a discovery it may not return to observe the object again for days or weeks. By this time it may be too faint for it to detect, and in danger of becoming a lost asteroid. The more observations and the longer the observation arc, the greater the accuracy of the orbit model. This is important for two reasons:\n\nBULLET::::1. for imminent impacts it helps to make a better prediction of where the impact will occur and whether there is any danger of hitting a populated area.\n", "NASA's Sentry System continually scans the MPC catalog of known asteroids, analyzing their orbits for any possible future impacts. 
Like ESA's NEODyS, it gives a MOID for each near-Earth object, and a list of possible future impacts, along with the probability of each. It uses a slightly different algorithm to NEODyS, and so provides a useful cross-check and corroboration.\n\nCurrently, no impacts are predicted (the single highest probability impact currently listed is ~7 m asteroid , which is due to pass Earth in September 2095 with only a 5% predicted chance of impacting).\n\nSection::::Impact calculation.:Impact probability calculation pattern.\n", "In addition, although not strictly part of the prediction process, once an impact has been predicted, an appropriate response needs to be made.\n", "The ellipses in the diagram on the right show the predicted position of an example asteroid at closest Earth approach. At first, with only a few asteroid observations, the error ellipse is very large and includes the Earth. Further observations shrink the error ellipse, but it still includes the Earth. This raises the predicted impact probability, since the Earth now covers a larger fraction of the error region. Finally, yet more observations (often radar observations, or discovery of a previous sighting of the same asteroid on archival images) shrink the ellipse revealing that the Earth is outside the error region, and the impact probability is near zero.\n", "Currently prediction is mainly based on cataloging asteroids years before they are due to impact. This works well for larger asteroids ( 1 km across) as they are easily seen from a long distance. Over 95% of them are already known, so their orbits can be measured and any future impacts predicted long before they are on their final approach to Earth. Smaller objects are too faint to observe except when they come very close and so most can only be observed on final approach. Current mechanisms for detecting asteroids on final approach rely on wide-field ground based telescopes, such as the ATLAS system. However current telescopes only cover part of the Earth and even more importantly cannot detect asteroids on the day-side of our planet, which is why so few of the smaller asteroids that commonly impact Earth are detected during the few hours that they would be visible.\n", "BULLET::::2. for asteroids that will miss Earth this time round, the more accurate the orbit model is, the further into the future its position can be predicted. This allows recovery of the asteroid on its subsequent approaches, and impacts to be predicted years in advance.\n\nSection::::Follow up observations.:Estimating size and impact severity.\n", "The diagram below illustrates the number of successfully predicted impacts each year compared to the number of unpredicted asteroid impacts recorded by infrasound sensors designed to detect detonation of nuclear devices. It shows that the vast majority are still missed due to their small size and dimness, and because approximately half occur on the day side of the Earth.\n", "In addition to the already funded telescopes mentioned above, two separate approaches have been suggested by NASA to improve impact prediction. Both approaches focus on the first step in impact prediction (discovering near-Earth asteroids) as this is the largest weakness in the current system. The first approach uses more powerful ground-based telescopes similar to the LSST. Being ground-based, such telescopes will still only observe part of the sky around Earth. 
In particular, all ground-based telescopes have a large blind spot for any asteroids coming from the direction of the Sun. In addition, they are affected by weather conditions, airglow and the phase of the Moon.\n", "Section::::List of successfully predicted asteroid impacts.\n\nBelow is the list of all near-Earth objects which may have impacted the Earth and which were successfully predicted beforehand. This list would also include any objects identified as having greater than 50% chance of impacting in the future, but no such future impacts is predicted at this time. As asteroid detection ability increases it is expected that prediction will become more successful in the future.\n\nSection::::See also.\n\nBULLET::::- Asteroid impact avoidance\n\nBULLET::::- Earth-grazing fireball\n\nBULLET::::- Impact event\n\nBULLET::::- List of asteroid close approaches to Earth\n", "As long as their radiant is not too close to the Sun, and for the current Hawaii-based system not too far into the Southern hemisphere, the automated system provides a one-week warning for a diameter asteroid, and a three-week warning for a one. By comparison, the February 2013 Chelyabinsk meteor impact was from an object estimated at diameter. Its arrival direction happened to be close to the Sun and it therefore was in the blind spot of any ATLAS-like system. A similar object arriving from a dark direction would be detected by ATLAS a couple of days in advance.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-03132
Why does producing more units of a product cost less than producing a small amount of it?
It will cost less per book, not less overall, unless there's something particularly unusual at stake. Basically, there's some amount of work required to print the first book: design, layout, and so on. The cost of that is split among all the books printed, since it only needs to happen once. Then each book has its own per-copy costs: paper, ink, and so on. So the initial costs make up much of the price if the print run is small, but become a smaller and smaller share as the run expands.
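The reasoning above is one line of arithmetic: average cost per copy = (one-time setup cost) / (copies printed) + (per-copy cost). A tiny sketch, with all prices invented for illustration:

```python
def cost_per_book(n_copies: int,
                  setup: float = 2_000.0,      # one-time design/layout (invented)
                  unit: float = 3.50) -> float:  # paper/ink per copy (invented)
    """Average cost per copy: fixed setup spread over the run,
    plus the per-copy cost that never goes away."""
    return setup / n_copies + unit

for n in (100, 1_000, 10_000):
    print(f"{n:>6} copies -> ${cost_per_book(n):.2f} per copy")
# 100 -> $23.50, 1,000 -> $5.50, 10,000 -> $3.70: the per-copy price falls
# toward the $3.50 unit cost, but the total bill still rises with run size.
```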
[ "Economies of scale must be distinguished by economies stemming from an increase in the production of a given plant. When a plant is used below its optimal production capacity, increases of its degree of utilization bring about decreases in the total average cost of production. As noticed, among the others, by Nicholas Georgescu-Roegen (1966) and Nicholas Kaldor (1972) these economies cost are not economies of scale. \n\nSection::::Overview.\n", "Many manufacturing facilities, especially those making bulk materials like chemicals, refined petroleum products, cement and paper, have labor requirements that are not greatly influenced by changes in plant capacity. This is because labor requirements of automated processes tend to be based on the complexity of the operation rather than production rate, and many manufacturing facilities have nearly the same basic number of processing steps and pieces of equipment, regardless of production capacity.\n\nSection::::The determinants of economies of scale.:Economical use of byproducts.\n", "The economies of mass production come from several sources. The primary cause is a reduction of non-productive effort of all types. In craft production, the craftsman must bustle about a shop, getting parts and assembling them. He must locate and use many tools many times for varying tasks. In mass production, each worker repeats one or a few related tasks that use the same tool to perform identical or near-identical operations on a stream of products. The exact tool and parts are always at hand, having been moved down the assembly line consecutively. The worker spends little or no time retrieving and/or preparing materials and tools, and so the time taken to manufacture a product using mass production is shorter than when using traditional methods.\n", "Some economies of scale, such as capital cost of manufacturing facilities and friction loss of transportation and industrial equipment, have a physical or engineering basis.\n\nAnother source of scale economies is the possibility of purchasing inputs at a lower per-unit cost when they are purchased in large quantities.\n\nThe economic concept dates back to Adam Smith and the idea of obtaining larger production returns through the use of division of labor. Diseconomies of scale are the opposite.\n", "Economies of increased dimension are often misinterpreted because of the confusion between indivisibility and three-dimensionality of space. This confusion arises from the fact that three-dimensional production elements, such as pipes and ovens, once installed and operating, are always technically indivisible. However, the economies of scale due to the increase in size do not depend on indivisibility but exclusively on the three-dimensionality of space. Indeed, indivisibility only entails the existence of economies of scale produced by the balancing of productive capacities, considered above; or of increasing returns in te utilisation of a single plant, due to its more efficient use as the quantity produced increases. 
However, this latter phenomenon has nothing to do with the economies of scale which, by definition, are linked to the use of a larger plant.\n", "Whereas economies of scale for a firm involve reductions in the average cost (cost per unit) arising from increasing the scale of production for a single product type, economies of scope involve lowering average cost by producing more types of products.\n", "Section::::The determinants of economies of scale.\n\nSection::::The determinants of economies of scale.:Physical and engineering basis: economies of increased dimension.\n\nSome of the economies of scale recognized in engineering have a physical basis, such as the square-cube law, by which the surface of a vessel increases by the square of the dimensions while the volume increases by the cube. This law has a direct effect on the capital cost of such things as buildings, factories, pipelines, ships and airplanes.\n\nIn structural engineering, the strength of beams increases with the cube of the thickness.\n", "Economies of scale\n\nIn microeconomics, economies of scale are the cost advantages that enterprises obtain due to their scale of operation (typically measured by amount of output produced), with cost per unit of output decreasing with increasing scale. At the basis of economies of scale there may be technical, statistical, organizational or related factors to the degree of market control.\n\nEconomies of scale apply to a variety of organizational and business situations and at various levels, such as a production, plant or an entire enterprise. When average costs start falling as output increases, then economies of scale are occurring.\n", "Marginal cost is not the cost of producing the \"next\" or \"last\" unit. The cost of the last unit is the same as the cost of the first unit and every other unit. In the short run, increasing production requires using more of the variable input — conventionally assumed to be labor. Adding more labor to a fixed capital stock reduces the marginal product of labor because of the diminishing marginal returns. This reduction in productivity is not limited to the additional labor needed to produce the marginal unit – the productivity of every unit of labor is reduced. Thus the cost of producing the marginal unit of output has two components: the cost associated with producing the marginal unit and the increase in average costs for all units produced due to the \"damage\" to the entire productive process. The first component is the per-unit or average cost. The second component is the small increase in cost due to the law of diminishing marginal returns which increases the costs of all units of sold.\n", "But are they so? Consider margin per VA, (money earned on work done) for both products, for A it is 1.25 while for B it is 5. In above method, VA for first part is 300% more than in second part and still company is charging same margin. In simpler words, A spends 4 days in manufacturing, eats-up resources and generates same amount of money as that of B, which spends only 1 day in company (assuming 10 VA is equal to one day).\n", "As a process becomes larger, more product can be produced per unit time, so when a process technology becomes established or mature, and operates consistently without upsets or “downtime”, more economic efficiency can be gained from scale-up. Given a fixed price for the feedstock (e.g. 
the price per barrel of crude oil), the product cost can be decreased using a larger scale process because the capital investment and operational costs do not normally increase linearly with scale. For example, the capacity or volume of a cylindrical vessel used to produce a product increases in proportion to the square of the radius of the cylinder, so the cost of materials per unit volume decreases. But the costs to design and fabricate the vessel have traditionally been less sensitive to scale. In other words, one can design a small vessel and fabricate it for about the same cost as the larger vessel. In addition, the cost to control and operate a process (or a process unit component) does not change substantially with the scale. For example, if it takes one operator to operate a small process, that same operator can probably operate the larger process.\n", "Section::::The determinants of economies of scale.:Economies resulting from the division of labour and the use of superior techniques.\n\nA larger scale allows for a more efficient division of labour. The economies of division of labour derive from the increase in production speed, from the possibility of using specialized personnel and adopting more efficient techniques. An increase in the division of labour inevitably leads to changes in the quality of inputs and outputs.\n\nSection::::The determinants of economies of scale.:Managerial Economics.\n", "The probability of human error and variation is also reduced, as tasks are predominantly carried out by machinery; error in operating such machinery, however, has more far-reaching consequences. A reduction in labour costs, as well as an increased rate of production, enables a company to produce a larger quantity of one product at a lower cost than using traditional, non-linear methods.\n", "The ability of a Major Power to produce military units is abstracted by three factors. Production points from on-map factory complexes are combined with resource points from on-map natural resources. For each pair of points successfully combined (by railing or shipping a resource-point to a factory), the nation gains a number of build points equal to its production multiplier. This multiplier represents how well geared the nation's industry is towards war-time production. As the war proceeds, all production-multipliers go up. The build points can be spent to purchase units for land, sea and air warfare.\n\nSection::::Turn length, initiative and impulses.\n", "A larger scale generally determines greater bargaining power over input prices and therefore benefits from pecuniary economies in terms of purchasing raw materials and intermediate goods compared to companies that make orders for smaller amounts. In this case we speak of pecuniary economies, to highlight the fact that nothing changes from the \"physical\" point of view of the returns to scale. Furthermore, supply contracts entail fixed costs which lead to decreasing average costs if the scale of production increases.\n\nSection::::The determinants of economies of scale.:Economies deriving from the balancing of production capacity.\n", "For example, consider two countries that both make cards and pencils and use the same amount of time to make one unit of either item (see table). Country one can make 4 pencils if it specializes just in pencils at the expense of one card, but this country can also make ¼ of a card at the expense of one pencil. The same logic goes for country two: if country two makes only pencils, it will make 2 pencils at the expense of 1 card.\n", "The simple meaning of economies of scale is doing things more efficiently with increasing size. Common sources of economies of scale are purchasing (bulk buying of materials through long-term contracts), managerial (increasing the specialization of managers), financial (obtaining lower-interest charges when borrowing from banks and having access to a greater range of financial instruments), marketing (spreading the cost of advertising over a greater range of output in media markets), and technological (taking advantage of returns to scale in the production function). Each of these factors reduces the long run average costs (LRAC) of production by shifting the short-run average total cost (SRATC) curve down and to the right.\n", "Manufacturers incur many costs in the production process. It is the cost accountant's job to trace these costs back to a certain product or process (cost object) during production. Some costs cannot be traced back to a single cost object. Some costs benefit more than one product or process in the manufacturing process. These costs are called \"joint costs\". Almost all manufacturers incur joint costs at some level of the manufacturing process. It can also be defined as the cost to operate joint-product processes including the disposal of waste. With regard to joint costs, it is essential to allocate the joint cost for the different joint products for determining individual product costs. Several methods are used to allocate joint cost. These methods are mainly classified into engineering and non-engineering methods. Non-engineering methods are mainly based on the market share of the product; the higher the market share, the higher the proportion assigned to it, e.g. net realizable value. In this method, the proportions are determined based on the sales value proportions. In the engineering-based method, proportions are found based on physical quantities and measurements such as volume, weight, etc.\n", "Assume a business produces clothing. A variable cost of this product would be the direct material, i.e., cloth, and the direct labor. If it takes one laborer 6 yards of cloth and 8 hours to make a shirt, then the cost of labor and cloth increases if two shirts are produced.\n\nThe amount of materials and labor that goes into each shirt increases in direct proportion to the number of shirts produced. In this sense, the cost \"varies\" as production varies.\n\nSection::::Explanation.:Example 2.\n", "Semi-variable cost\n\nThe concept of semi-variable cost (also referred to as semi-fixed cost) is often used to project financial performance at various scales of production, where it is an expense which contains both a fixed-cost component and a variable-cost component. It is related to the scale of production within the business where there is a fixed cost which remains constant across all scales of production whilst the variable cost increases proportionally to production levels.\n", "Economies of productive capacity balancing derive from the possibility that a larger scale of production involves a more efficient use of the production capacities of the individual phases of the production process. If the inputs are indivisible and complementary, a small scale may be subject to idle times or to the underutilization of the productive capacity of some sub-processes. A higher production scale can make the different production capacities compatible. The reduction in machinery idle times is crucial when the cost of machinery is high.\n", "They found Simanis theorised these characteristics were often missing, with him concluding that “because the high costs of doing business among the very poor demand a high contribution per transaction, companies must embrace the reality that high margins aren’t just a top-of-the-pyramid phenomenon; they’re also a necessity for ensuring sustainable businesses at the bottom of the pyramid.”\n\nSimanis's three solutions for generating higher values are\n\nBULLET::::- a localised base product with final processing prior to sale as close to the target market as possible, saving on labour costs;\n", "However, as the percentages of indirect or overhead costs rose, this technique became increasingly inaccurate, because indirect costs were not caused equally by all products. For example, one product might take more time in one expensive machine than another product—but since the amount of direct labor and materials might be the same, additional cost for use of the machine is not being recognized when the same broad 'on-cost' percentage is added to all products. Consequently, when multiple products share common costs, there is a danger of one product subsidizing another.\n", "As can be seen, such an increase in prices would induce a certain substitution for our hypothetical firm; in fact, 200 fewer units will be sold. This may be because some consumers have started to buy a substitute product, because the same consumers have bought a smaller quantity of the product given its price increase, or because they have stopped buying that type of product altogether.\n\nIf we want to know whether such a price increase has been profitable, we should solve the following equation:\n\nformula_3\n", "An ideal cost curve assumes technical efficiency, since a firm always has an incentive to be as technically efficient as possible. In general, firms have a variety of methods of using various amounts of inputs, and they select the lowest total cost method for any given amount of output (quantity produced). For example, if you wanted to make a few pins, the cheapest way to accomplish this might be by hiring one jack-of-all-trades and buying a scrap of metal and having him work on it in your home. But if you wished to produce thousands of pins, the lowest total cost might be achieved by renting a factory, buying specialized equipment and hiring an assembly line of factory workers to perform specialized actions at each stage of producing the pin. In the short run, you might not have time to rent a factory, or buy specialized tools, or hire factory workers. Then you would be able to achieve short run minimum costs, but you know the long run costs would be much less. The increase in choices available of how to produce in the long run means that long run costs are always equal to or less than the short run costs, ceteris paribus.\n" ]
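To make the vessel-scaling argument in the passages above concrete, here is a minimal numeric sketch. All prices and dimensions are invented for illustration; only the square-of-the-radius scaling comes from the text.

```python
import math

def cost_per_unit_volume(radius_m, height_m=5.0, steel_cost_per_m2=100.0):
    """Toy model: a cylinder's capacity grows with r^2 (height held fixed),
    while its side wall, and hence its material cost, grows only with r."""
    volume = math.pi * radius_m ** 2 * height_m      # capacity, m^3
    wall_area = 2 * math.pi * radius_m * height_m    # side wall only, m^2
    return wall_area * steel_cost_per_m2 / volume    # $ per m^3 of capacity

for r in (1, 2, 4, 8):
    print(f"radius {r} m -> ${cost_per_unit_volume(r):.2f} per m^3 of capacity")
# Each doubling of the radius quadruples capacity but only doubles the wall
# material, so material cost per unit of capacity halves: 200, 100, 50, 25.
```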
[ "Producing more units of a product costs less than producing a small amount of product." ]
[ "Producing more units of a product costs less per product than producing a small amount of product." ]
[ "false presupposition" ]
[ "Producing more units of a product costs less than producing a small amount of product." ]
[ "false presupposition" ]
[ "Producing more units of a product costs less per product than producing a small amount of product." ]
2018-02611
Why do oil prices fluctuate so much?
Price is determined by supply and demand, so anything in the world that changes either of these will cause the price to fluctuate. Much of the traditional oil drilling has been done by OPEC nations; that is, representatives from a group of oil-producing countries collaborate to control supply and thus control price. However, technology improvements have allowed non-OPEC nations such as the US and Canada to harvest oil from shale. It's a more expensive process, but if the price of oil goes high enough, shale oil becomes profitable too. So you have OPEC nations trying to manage their supply to make shale oil unprofitable, while shale companies try to improve their technology and techniques to keep going. On the demand side, you have developing nations that are starting to use more oil as their populations adopt more technology. But you also have awareness of climate change, as well as a general desire not to be reliant on fossil fuels, incentivizing countries to seek alternative solutions, which reduces demand. On top of all this, there are oil futures, where investors speculate on what the price of oil will be and buy or sell oil at future prices. So even news of a change that might affect oil can move the price, even if that change never comes to pass.
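The OPEC-versus-shale mechanism sketched in this answer can be illustrated with a toy equilibrium model in which shale capacity only comes online above an assumed breakeven price. Every number here is invented:

```python
def market_price(demand, opec_supply, shale_capacity, shale_breakeven=60.0):
    """Find the lowest price (in $1 steps) at which total supply meets demand.
    Shale producers only pump when the price covers their higher costs."""
    for price in range(1, 201):
        shale = shale_capacity if price >= shale_breakeven else 0.0
        # Assume demand falls slightly as price rises (toy elasticity).
        quantity_demanded = demand - 0.1 * price
        if opec_supply + shale >= quantity_demanded:
            return price
    return None

print(market_price(demand=100, opec_supply=80, shale_capacity=15))  # 60: shale needed
print(market_price(demand=100, opec_supply=95, shale_capacity=15))  # 50: OPEC floods market
```

The second call shows the strategy the answer describes: if OPEC supplies enough on its own, the market clears below the shale breakeven and shale stays shut out.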
[ "Bouchouev argued that traditionally there was always more producer hedging than consumer hedging in oil markets. The oil market now attracts investor money which currently far exceeds the gap between producer and consumer. Contango used to be the 'normal' for the oil market. Since c. 2008-9, investors are hedging against \"inflation, US dollar weakness and possible geopolitical events,\" instead of investing in the front end of the oil market. Bouchouev applied the changes in investor behaviour to \"the classical Keynes-Hicks theory of normal backwardation, and the Kaldor-Working-Brennan theory of storage, and looked at how calendar spread options (CSOs) became an increasingly popular risk management tool.\"\n", "OPEC holds meetings with its members at least twice per year. During these meetings, the members usually examine market conditions and fix quotas for themselves. The members divide the shares of oil production among the nation states. Based on the quota system, the percentages of oil production are divided according to the stated reserves of each country. Consequently, in order to obtain larger percentage share, member nations often overstate their reserves. Following this system, OPEC endeavours to change the market. This is an indication that the OPEC petroleum traders exercise enough market power to influence oil prices around the world. They might increase oil prices much higher than the true economic price of the resources and reshape the Megacorpstate.\n", "The formation of OPEC marked a turning point toward national sovereignty over natural resources, and OPEC decisions have come to play a prominent role in the global oil market and international relations. The effect can be particularly strong when wars or civil disorders lead to extended interruptions in supply. In the 1970s, restrictions in oil production led to a dramatic rise in oil prices and in the revenue and wealth of OPEC, with long-lasting and far-reaching consequences for the global economy. In the 1980s, OPEC began setting production targets for its member nations; generally, when the targets are reduced, oil prices increase. This has occurred most recently from the organization's 2008 and 2016 decisions to trim oversupply.\n", "2010s oil glut\n\nThe 2010s oil glut is a considerable surplus of crude oil that started in 2014–2015 and accelerated in 2016, with multiple causes. They include general oversupply as US and Canadian tight oil (shale oil) production reached critical volumes, geopolitical rivalries amongst oil-producing nations, falling demand across commodities markets due to the deceleration of the Chinese economy, and possible restraint of long-term demand as environmental policy promotes fuel efficiency and steers an increasing share of energy consumption away from fossil fuels.\n", "The tight oil (shale oil) boom in the USA starting in the early 2000s through 2010s (as well as increased production capacity in many other countries) greatly limited OPEC's ability to control oil prices. 
Consequently, due to a drastic fall in the Nymex crude oil price to as low as $35.35 per barrel in 2015, many oil-exporting countries have had severe problems in balancing their budgets.\n\nBy 2016, many oil exporting countries had been adversely affected by low oil prices including Russia, Saudi Arabia, Azerbaijan, Venezuela and Nigeria.\n\nSection::::Currencies used to trade oil.:Venezuela.\n", "Section::::Supply.:Control over supply.\n\nEntities such as governments or cartels can reduce supply to the world market by limiting access to the supply through nationalizing oil, cutting back on production, limiting drilling rights, imposing taxes, etc. International sanctions, corruption, and military conflicts can also reduce supply.\n\nSection::::Supply.:Control over supply.:Nationalization of oil supplies.\n", "The volume of oil production on tight oil formations in the US depends significantly on the dynamics of the WTI oil price. About six months after the price change, drilling activity changes, and with it the volume of production. These changes and their expectations are so significant that they themselves affect the price of oil and hence the volume of production in the future. \n\nThese regularities are described in mathematical language by a differential extraction equation with a retarded argument.\n\nSection::::See also.\n\nBULLET::::- Tight gas\n\nBULLET::::- Shale gas\n\nBULLET::::- Shale oil extraction\n\nSection::::External links.\n", "Research shows that declining oil prices make oil-rich states less bellicose. Low oil prices could also make oil-rich states engage more in international cooperation, as they become more dependent on foreign investments. The influence of the United States reportedly increases as oil prices decline, at least judging by the fact that \"both oil importers and exporters vote more often with the United States in the United Nations General Assembly\" during oil slumps.\n", "Another factor affecting global oil supply is the nationalization of oil reserves by producing nations. The nationalization of oil occurs as countries begin to deprivatize oil production and withhold exports. Kate Dourian, Platts' Middle East editor, points out that while estimates of oil reserves may vary, politics have now entered the equation of oil supply. \"Some countries are becoming off limits. Major oil companies operating in Venezuela find themselves in a difficult position because of the growing nationalization of that resource. These countries are now reluctant to share their reserves.\"\n", "Rather counterintuitively, the world economy has had to deal with the unforeseen consequences of the 2015-2016 oil glut also known as 2010s oil glut, a major energy crisis that took many experts by surprise. This oversupply crisis started with a considerable time-lag, more than six years after the beginning of the Great Recession: \"\"the price of oil\" [had] \"stabilized at a relatively high level (around $100 a barrel) unlike all previous recessionary cycles since 1980 (start of First Persian Gulf War). But nothing guarantee[d] such price levels in perpetuity\"\".\n\nSection::::Social and economic effects.\n", "BULLET::::- Demand for oil is and will be growing as new markets (e.g. India and China) gain in strength and local consumers start to demand a higher standard of living.\n\nBULLET::::- 3/4 of the world's oil discoveries are located in the Middle East; a similar ratio applies to the volume of oil the United States needs to import.\n", "Many factors have resulted in possible and/or actual concerns about the reduced supply of oil. The post-9/11 war on terror, labor strikes, hurricane threats to oil platforms, fires and terrorist threats at refineries, and other short-lived problems are not solely responsible for the higher prices. Such problems do push prices higher temporarily, but have not historically been fundamental to long-term price increases.\n\nSection::::Possible causes.:Investment/speculation demand.\n", "Steve Briese, a commodity analyst who had forecast in March 2014 a decline in the world price to $75 from $100 based on 30 years of extra supply, in early December 2014 projected a low of $35 a barrel. On Jan 8, 2015 commodity hedge fund manager Andrew J. Hall suggested that $40-a-barrel is close to “an absolute price floor,” adding that a significant amount of U.S. and Canadian production can’t cover the cash costs of operating at that price.\n", "At the 5th annual World Pensions Forum in 2015, Jeffrey Sachs advised institutional investors to divest from carbon-reliant oil industry firms in their pension fund's portfolio.\n\nSection::::Hedging as risk management.:2015–16 prices: The lows of January 2016.\n", "Many reasons have been given for this widening divergence ranging from a speculative change away from WTI trading (although not supported by trading volumes), dollar currency movements, regional demand variations, and even politics. The depletion of the North Sea oil fields is one explanation for the divergence in forward prices.\n", "The supply of oil is dependent on geological discovery, the legal and tax framework for oil extraction, the cost of extraction, the availability and cost of technology for extraction, and the political situation in oil-producing countries. Both domestic political instability in oil producing countries and conflicts with other countries can destabilise the oil price. In 2008 the \"New York Times\" reported, for example, in the 1940s the price of oil was about $17 rising to just over $20 during the Korean War (1951–1953). During the Vietnam War (1950s–1970s) the price of oil slowly declined to under $20. During the Arab oil embargo of 1973—the first oil shock—the price of oil rapidly rose to double in price. During the 1979 Iranian Revolution the price of oil rose. During the second oil shock the price of oil peaked in April 1980 at $103.76. During the 1980s there was a period of \"conservation and insulation efforts\" and the price of oil dropped slowly to . It again reached a peak of c. $65 during the 1990 Persian Gulf crisis and war. Following that, there was a period of global recessions and the price of oil hit a low of before it peaked at a high of $45 on September 11, 2001 only to drop again to a low of $26 on May 8, 2003. The price rose to $80 with the U.S.-led invasion of Iraq. By March 3, 2008 the price of oil reached $103.95 a barrel on the New York Mercantile Exchange.\n", "The shape of the production curve of an oil well can also be affected by a number of nongeologic factors:\n\nSection::::Production decline models.:Oil field production decline.\n", "From 2005 onwards, the price elasticity of the crude oil market changed significantly. Before 2005, a small increase in the oil price led to a noticeable expansion of production volume. Later price rises increased production only slightly. This is why 2005 has been called a tipping point.\n", "The report stated that as a result of the imbalance and low price elasticity, very large price increases occurred as the market attempted to balance scarce supply against growing demand, particularly in the last three years. The report forecast that this imbalance would persist in the future, leading to continued upward pressure on oil prices, and that large or rapid movements in oil prices are likely to occur even in the absence of activity by speculators. The task force continues to analyze commodity markets and intends to issue further findings later in the year.\n\nSection::::Oil-storage trade (contango).\n", "Section::::Supply.:Control over supply.:OPEC influence on supply.\n", "There are two views dominating the oil market discourse. There are those who strongly believe that the market has undergone structural changes and that low oil prices are here to stay for a prolonged period. At the other end of the spectrum, there are those who think that this is yet another cycle and oil prices will recover sooner rather than later.\n", "In December 2015, \"The Telegraph\" quoted a major oil broker stating: \"The world is floating in oil. The numbers we are facing now are dreadful\" – and \"Forbes\" magazine stated: \"The ongoing oil price slump has more or less morphed into a complete rout, with profound long-term implications for the industry as a whole.\"\n\nAs 2016 continued, the price gradually rose back into the $40s, with the world waiting to see if and when and how the market would return to balance.\n", "The nationalization of oil supplies and the emergence of the OPEC market caused the spot market to change in both orientation and size. The spot market changed in orientation because it started to deal not only with crude oil but also with refined products. The spot market changed in size because as the OPEC market declined the number of spot market transactions increased.\n", "This price drop has placed many US tight oil producers under considerable financial pressure. As a result, there has been a reduction by oil companies in capital expenditure of over $US400 billion. It is anticipated that this will have effects on global production in the longer term, leading to statements of concern by the International Energy Agency that governments should not be complacent about energy security. Energy Information Agency projections anticipate market oversupply and prices below $US50 until late 2017.\n\nSection::::Possible consequences.:Oil prices.:Effects of historical oil price rises.\n", "Oil price increases were partially fueled by reports that petroleum production is at or near full capacity. In June 2005, OPEC stated that they would 'struggle' to pump enough oil to meet pricing pressures for the fourth quarter of that year. From 2007 to 2008, the decline in the U.S. dollar against other significant currencies was also considered as a significant reason for the oil price increases, as the dollar lost approximately 14% of its value against the Euro from May 2007 to May 2008.\n" ]
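The "tipping point" passage above is essentially a claim about the price elasticity of oil supply. A quick calculation with hypothetical percentages shows what a collapse in elasticity looks like:

```python
def price_elasticity_of_supply(pct_change_quantity, pct_change_price):
    # Elasticity = % change in quantity supplied / % change in price.
    return pct_change_quantity / pct_change_price

# Invented figures only: before 2005 a 10% price rise might have drawn out
# 8% more production; afterwards perhaps only 1% more.
print(price_elasticity_of_supply(8.0, 10.0))   # 0.8  (fairly elastic)
print(price_elasticity_of_supply(1.0, 10.0))   # 0.1  (nearly inelastic)
# When supply barely responds, balancing the market requires much larger
# price swings, one reason oil prices can move so violently.
```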
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-07146
What is in Almond Milk that causes it to go bad after 7-10 days?
For the same reason milk does: bacteria and fungi eat the sugars and fats in it, and once their populations build up to a certain level, it is spoiled.
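The spoilage mechanism in this answer is exponential microbial growth up to a threshold, which is why shelf life is measured in days. A minimal sketch; the starting count, threshold, and doubling times are assumptions, not measured values:

```python
import math

def days_until_spoiled(initial_cfu_per_ml=10.0,
                       spoilage_threshold=1e7,
                       doubling_time_hours=8.0):
    """Days for a population that doubles every `doubling_time_hours`
    to grow from the initial count to the spoilage threshold."""
    doublings = math.log2(spoilage_threshold / initial_cfu_per_ml)
    return doublings * doubling_time_hours / 24.0

# Refrigeration slows the doubling time; all numbers here are illustrative.
print(f"{days_until_spoiled(doubling_time_hours=8):.1f} days")   # ~6.6 days
print(f"{days_until_spoiled(doubling_time_hours=12):.1f} days")  # ~10.0 days
```

With these made-up inputs the result lands in the 7-10 day range of the question, but real shelf life also depends on pasteurization, packaging, and storage temperature.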
[ "The Almond Board of California states: “PPO residue dissipates after treatment.” The U.S. EPA has reported: “Propylene oxide has been detected in fumigated food products; consumption of contaminated food is another possible route of exposure.” PPO is classified as Group 2B (\"possibly carcinogenic to humans\").\n", "Section::::Almonds.:Green fruit rot.\n\nGreen fruit rot can be found throughout virtually all almond-producing regions of California. Green rot is typically controlled by fungicides applied to control other fungal diseases that occur during blooming. It is only when cooler temperatures and heavy moisture is present that almond growers are recommended to make fungicide applications specifically for the disease. When left untreated, green fruit rot can cause up to a 10% yield loss.\n\nSection::::Almonds.:Leaf blight.\n", "All commercially grown almonds sold as food in the United States are sweet cultivars. The US Food and Drug Administration reported in 2010 that some fractions of imported sweet almonds were contaminated with bitter almonds. Eating such almonds could result in vertigo and other typical bitter almond (cyanide) poisoning effects.\n\nSection::::Culinary uses.\n", "Certain natural food stores sell \"bitter almonds\" or \"apricot kernels\" labeled as such, requiring significant caution by consumers for how to prepare and eat these products.\n\nSection::::Culinary uses.:Almond milk.\n", "The USDA approved a proposal by the Almond Board of California to pasteurize almonds sold to the public, after tracing cases of salmonellosis to almonds. The almond pasteurization program became mandatory for California companies in 2007. Raw, untreated California almonds have not been available in the U.S. since then.\n\nCalifornia almonds labeled \"raw\" must be steam-pasteurized or chemically treated with propylene oxide (PPO). This does not apply to imported almonds or almonds sold from the grower directly to the consumer in small quantities. The treatment also is not required for raw almonds sold for export outside of North America.\n", "The growth in consumer demand for almond milk in the early 21st century accounted for one-quarter of the US almond supply, and its use in almond butter manufacturing tripled since 2011.\n\nSection::::Sustainability.\n", "Crown and root rot of almonds is caused by at least 14 different \"Phytophthora\" species. The risk of root or crown infection is greatest during cool to moderate temperatures with prolonged or frequent soil saturation. A tree infected with \"Phytophthora\" can either undergo a period of slow decline that may last years or it can suddenly collapse and die in spring with the advent of warm weather. Eventually, leaves drop, terminal shoots die back, and death of the tree follows. Once in the root or crown the infection may extend into the crown, trunk, or branches. Currently, crown and root rot are a problem affecting 20% of California's almond orchards with potential yield losses of 50%.\n", "Some countries have strict limits on allowable levels of aflatoxin contamination of almonds and require adequate testing before the nuts can be marketed to their citizens. The European Union, for example, introduced a requirement since 2007 that all almond shipments to EU be tested for aflatoxin. 
If aflatoxin does not meet the strict safety regulations, the entire consignment may be reprocessed to eliminate the aflatoxin or it must be destroyed.\n\nSection::::Mandatory pasteurization in California.\n", "If unfortified, almond milk has less vitamin D than fortified cows' milk; in North America cows' milk must be fortified with vitamin D, but vitamins are added to plant milks on a voluntary basis. Because of its low protein content, almond milk is not a suitable replacement for breast milk, cows' milk, or hydrolyzed formulas for children under two years of age.\n\nSection::::Production.\n", "Sustainability strategies implemented by the Almond Board of California and almond farmers include:\n\nBULLET::::- tree and soil health, and other farming practices\n\nBULLET::::- minimizing dust production during the harvest\n\nBULLET::::- bee health\n\nBULLET::::- irrigation guidelines for farmers\n\nBULLET::::- food safety\n\nBULLET::::- use of waste biomass as coproducts with a goal to achieve zero waste\n\nBULLET::::- use of solar energy during processing\n\nBULLET::::- job development\n\nBULLET::::- support of scientific research to investigate potential health benefits of consuming almonds\n\nBULLET::::- international education about sustainability practices\n\nSection::::Production.\n", "Almond milk is produced from almonds by grinding almonds with water, then straining the pulp from the liquid. This procedure can be done at home. Almond milk is low in saturated fat and calories.\n", "The basic method of modern domestic almond milk production is to grind almonds in a blender with water, then strain out the almond pulp (flesh) with a strainer or cheesecloth. Almond milk can also be made by adding water to almond butter.\n\nIn July 2015, a class action lawsuit was placed in New York against two American manufacturers, Blue Diamond Growers and White Wave Foods, for false advertising on the product label about the small amount of almonds (only 2%) actually in the final product. In October 2015, a judge denied the consumers' request for an injunction.\n\nSection::::Production.:Consumer demand.\n", "Section::::Nutrition.:Health.\n\nAlmonds are included as a good source of protein among recommended healthy foods by the US Department of Agriculture. A 2016 review of clinical research indicated that regular consumption of almonds may reduce the risk of heart disease by lowering blood levels of LDL cholesterol.\n\nSection::::Nutrition.:Potential allergy.\n\nAlmonds may cause allergy or intolerance. Cross-reactivity is common with peach allergens (lipid transfer proteins) and tree nut allergens. Symptoms range from local signs and symptoms (e.g., oral allergy syndrome, contact urticaria) to systemic signs and symptoms including anaphylaxis (e.g., urticaria, angioedema, gastrointestinal and respiratory symptoms).\n\nSection::::Oils.\n", "Section::::Origin and history.:Etymology and names.\n", "ordinary-day versions. For example, a thin split-pea puree, sometimes enriched with fish stock or almond milk (produced by simmering ground almonds in water), replaced meat broth on fast days; and almond milk was a general (and expensive) substitute for cow's milk.\"\n\nIn Persian cuisine, an almond milk based dessert called \"harireh badam\" (almond gruel) is traditionally served during Ramadan.\n\nSection::::Commerce.\n", "Almonds begin bearing an economic crop in the third year after planting. Trees reach full bearing five to six years after planting. 
The fruit matures in the autumn, 7–8 months after flowering.\n\nSection::::Description.:Drupe.\n", "Sales of almond milk overtook soy milk in the United States in 2013, and by May 2014, it comprised two-thirds of the US plant milk market. In the United Kingdom, almond milk sales increased from in 2011 to in 2013.\n\nSection::::History.\n", "Almonds are susceptible to aflatoxin-producing molds. Aflatoxins are potent carcinogenic chemicals produced by molds such as \"Aspergillus flavus\" and \"Aspergillus parasiticus\". The mold contamination may occur from soil, previously infested almonds, and almond pests such as navel-orange worm. High levels of mold growth typically appear as gray to black filament like growth. It is unsafe to eat mold infected tree nuts.\n", "Section::::Varieties.\n", "In the United States, almond milk remained a niche health food item until the early 2000s, when its popularity began to increase. In 2011 alone, almond milk sales increased by 79%. In 2013, it surpassed soy milk as the most popular plant-based milk in the U.S. As of 2014 it comprised 60 percent of plant-milk sales and 4.1 percent of total milk sales in the US.\n\nPopular brands of almond milk include Blue Diamond's Almond Breeze and WhiteWave Foods' Silk PureAlmond.\n\nWithin the Italian regions of Sicily, Apulia, Calabria, and Campania, almond milk is a protected traditional agricultural product.\n\nSection::::Nutrition.\n", "Almonds can be processed into a milk substitute called almond milk; the nut's soft texture, mild flavor, and light coloring (when skinned) make for an efficient analog to dairy, and a soy-free choice for lactose intolerant people and vegans. Raw, blanched, and lightly toasted almonds work well for different production techniques, some of which are similar to that of soymilk and some of which use no heat, resulting in \"raw milk\" (see raw foodism).\n\nSection::::Culinary uses.:Almond flour and skins.\n", "Historically, almond syrup was an emulsion of sweet and bitter almonds, usually made with barley syrup (orgeat syrup) or in a syrup of orange flower water and sugar, often flavored with a synthetic aroma of almonds. Orgeat syrup is an important ingredient in the Mai Tai and many other Tiki drinks.\n\nDue to the cyanide found in bitter almonds, modern syrups generally are produced only from sweet almonds. Such syrup products do not contain significant levels of hydrocyanic acid, so are generally considered safe for human consumption.\n\nSection::::Nutrition.\n", "With additional process of blanching, dried mango can retain the content of its carotenoids and vitamin C.\n\nSection::::Types.:Shelf life.\n", "To produce marzipan, raw almonds are cleaned \"by sieving, air elutriation, and other electronic or mechanical devices\", then immersed in water with a temperature just below the boiling point for about five minutes, in a process known as blanching. This loosens the almonds' skin, which is removed by passing the almonds through rubber-covered rotating cylinders. This process reduces hydrogen cyanide (HCN) concentration and increases water content. They are then cooled, after which they are coarsely chopped and ground, with up to 35% sugar, into almond flour.\n", "Almond production in California is concentrated mainly in the Central Valley, where the mild climate, rich soil, abundant sunshine and water supply make for ideal growing conditions. Due to the persistent droughts in California in the early 21st century, it became more difficult to raise almonds in a sustainable manner. 
The issue is complex because of the high amount of water needed to produce almonds: a single almond requires roughly of water to grow properly.\n\nSustainability strategies implemented by the Almond Board of California and almond farmers include:\n\nBULLET::::- tree and soil health, and other farming practices\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-18021
How did my phone know my friend's last name?
Possibly Facebook, if you have it linked to your contacts. My iPhone will occasionally say “possibly firstname lastname” when a new contact appears as well; it may be a feature of certain phones' messaging apps.
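The matching step this answer describes (a linked account supplying the missing last name for a bare phone contact) can be sketched in a few lines. Every record and field name here is hypothetical:

```python
# Hypothetical records: a sparse local contact and a linked social account.
local_contacts = [{"first_name": "Anna", "phone": "+15551234567"}]
linked_accounts = {  # keyed by normalized phone number
    "+15551234567": {"first_name": "Anna", "last_name": "Kowalski"},
}

def suggest_full_name(contact):
    """Suggest 'possibly First Last' when a linked account shares the number."""
    match = linked_accounts.get(contact["phone"])
    if match and not contact.get("last_name"):
        return f'possibly {match["first_name"]} {match["last_name"]}'
    return None

for c in local_contacts:
    print(suggest_full_name(c))  # possibly Anna Kowalski
```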
[ "Examples of emerging companies providing technology to support mobile social address books include: PicDial (which dynamically augments the existing address book with pictures and status from Facebook, MySpace and Twitter, integrates with the call screen so during every call you see the latest picture and status of whoever is calling. It is a network address book so everything can be managed from Windows or Mac as well and lastly you can also set your one callerID picture and status for your friends to see when you call them) FusionOne (whose backup and synchronization solutions lets users easily transfer and update mobile content, including contact information, among different devices); Loopt (whose Loopt service provides a social compass alerting users when friends are near); OnePIN (whose CallerXchange person-to-person contact exchange service lets users share contact info with one click on the mobile phone); and VoxMobili (whose Phone Backup and Synchronized Address Book solutions let users safeguard and synchronize their contact information among different devices).\n", "On January 24, 2012, Smartr Contacts for iPhone was released; one blog called it a \"magic address book\". The app identifies name and contact information for all contacts, including communication history and updates from Facebook, LinkedIn and Twitter. Lifehacker called Smartr Contacts for iPhone the \"Best Address Book for iPhone\".\n\nSection::::Funding.\n", "BULLET::::- Contacts – (formally known as Address Book) a computerized address book. It includes various syncing capabilities and integrates into with other macOS applications and features. Renamed \"Contacts\" on release of Mountain Lion, to match the similarly named iOS app which it syncs with through iCloud.\n", "The servers store registered users' phone numbers, public key material and push tokens which are necessary for setting up calls and transmitting messages. In order to determine which contacts are also Signal users, cryptographic hashes of the user's contact numbers are periodically transmitted to the server. The server then checks to see if those match any of the SHA256 hashes of registered users and tells the client if any matches are found. The hashed numbers are thereafter discarded from the server. In 2014, Moxie Marlinspike wrote that it is easy to calculate a map of all possible hash inputs to hash outputs and reverse the mapping because of the limited preimage space (the set of all possible hash inputs) of phone numbers, and that \"practical privacy preserving contact discovery remains an unsolved problem.\" In September 2017, Signal's developers announced that they were working on a way for the Signal client applications to \"efficiently and scalably determine whether the contacts in their address book are Signal users without revealing the contacts in their address book to the Signal service.\"\n", "Section::::History.\n\nThe first Mobile social address book appeared in 2007 by a company called IQzone Inc., which was founded by John Kuolt. It was the first company to integrate social networking sites like Facebook, Myspace, Linked in and integrate them with the address book (PIM) of a mobile device. Mobile social address books sought to bring the connectivity of social networking to the in-the-moment experience of the mobile phone. 
Users can easily exchange contact information regardless of their handset, mobile carrier, or social networking application they use.\n", "Traditionally mobile phone forensics has been associated with recovering SMS and MMS messaging, as well as call logs, contact lists and phone IMEI/ESN information. However, newer generations of smartphones also include wider varieties of information; from web browsing, Wireless network settings, geolocation information (including geotags contained within image metadata), e-mail and other forms of rich internet media, including important data—such as social networking service posts and contacts—now retained on smartphone 'apps'.\n\nSection::::Types of evidence.:Internal memory.\n\nNowadays mostly flash memory consisting of NAND or NOR types are used for mobile devices.\n\nSection::::Types of evidence.:External memory.\n", "In January 2012 AndroidPit discovered that Vlingo sends packets of information containing the users GPS co-ordinates, IMEI (unique device identifier), contact list and the title of every song stored on the device back to Nuance without proper warning in the privacy policy. Users of Vlingo have also found the program sending data to servers at the dhs.gov domain name.\n", "Like many iOS applications that use Location Services, parental controls are available. Find My Friends synchronizes with other applications such as Maps and Contacts. The app is supported on the iPhone, iPod Touch, iPad, Apple Watch, or on iCloud.com on Windows. A friend's location can be viewed in OS X 10.10 as well, by clicking \"Details\" in the top right corner of the Messages App.\n\nSection::::Privacy considerations.\n", "BULLET::::- In the Contacts list, the user can assign social-networking profiles to a contact, including information about Facebook, Skype, Windows Live Messenger and e-mail. The contacts list then displays the Facebook or Windows Live Messenger profile picture, including the latest 'status update' from Facebook if the user is logged into the Facebook client. It also displays a small icon showing whether they are logged into Skype or Windows Live Messenger.\n\nSection::::Carriers.\n", "Further to storing mobile numbers, users can add e-mail addresses, home-phone numbers, an image (shows when contact calls), PTT addresses, postal addresses, web addresses, notes, and user identification.\n\nContacts can be put into groups, for example 'work colleagues' but such groups do not appear to be usable for text messaging. There is a Distribution List feature under Messages where users can send group SMS (Only available with the 6210).\n", "Another way to turn the tracking off is to turn Location Services off. This is done by going to Settings Privacy Location Services, selecting the app in the list and selecting the \"Never\" option. Tracking can be turned back on by selecting the \"While Using the App\" option. Location services and hence location tracking does not operate when a device is in “Airplane mode”.\n\nSection::::History.\n\nFind My Friends was announced on October 4, 2011, and released on October 12, 2011, several hours before the actual release of iOS 5.\n", "Android smartphones have the ability to report the location of Wi-Fi access points, encountered as phone users move around, to build databases containing the physical locations of hundreds of millions of such access points. 
These databases form electronic maps to locate smartphones, allowing them to run apps like Foursquare, Google Latitude, Facebook Places, and to deliver location-based ads. Third party monitoring software such as TaintDroid, an academic research-funded project, can, in some cases, detect when personal information is being sent from applications to remote servers.\n\nSection::::Security and privacy.:Technical security features.\n", "In 2010, Electronic Frontier Foundation launched a website where visitors can test their browser fingerprint. After collecting a sample of 470161 fingerprints, they measured at least 18.1 bits of entropy possible from browser fingerprinting, but that was before the advancements of canvas fingerprinting, which claims to add another 5.7 bits.\n\nFirefox provides a feature to protect against browser fingerprinting since 2015 (version 41), but as of July 2018 it is still experimental and disabled by default.\n", "Section::::Privacy.\n\nSicher uses phone number for user authentication due to phone number being a unique identifier that can be easily confirmed and an efficient anti-spam measure. User's address book is used for discovery of Sicher contacts, however address book data is not stored on Sicher servers. \n\nUser may choose to receive anonymous notifications about new messages, which means that notification on lock screen will not display content of incoming message, including sender's name.\n\nSection::::Controversy.\n", "A different problem occurs when ESN codes are stored in a database (such as for OTASP). In this situation, the risk of at least two phones having the same pseudo-ESN can be calculated using the birthday paradox and works out to about a 50 per cent probability in a database with 4,800 pseudo-ESN entries. 3GPP2 specifications C.S0016 (Revision C or higher) and C.S0066 have been modified to allow the replacement MEID identifier to be transmitted, resolving this problem.\n", "Data acquired from cell phone devices are stored in the .med file format. After a successful logical acquisition, the following fields are populated with data: subscriber information, device specifics, Phonebook, SIM Phonebook, Missed Calls, Last Numbers Dialed, Received Calls, Inbox, Sent Items, Drafts, Files folder. Items present in the Files folder, ranging from Graphics files to Camera Photos and Tones, depend on the phone’s capabilities. Additional features include the myPhoneSafe.com service, which provides access to the IMEI database to register and check for stolen phones.\n", "BULLET::::- \"Join Conference\" (previously \"Nokia Conference\") was originally a Lumia Beta App designed by Nokia, when elevated to the Microsoft Garage it gained Cortana integration, it works by notifying users about their coming meetings in their calendars and shows the meeting IDs and PIN codes. Upon its re-released it was no longer a Lumia exclusive, but the available markets on which application was available was reduced to 17.\n\nBULLET::::- \"SquadWatch\" is a friends tracker that allows users to see where their contacts are if they have registered with the service.\n", "On May 8, 2012, Carrier IQ appointed a Chief Privacy Officer: Magnolia Mobley, formerly Verizon's Lead Privacy Counsel. 
This news spurred a new round of articles and discussions about privacy in mobile communications.\n\nIn February 2015, HTC One users began reporting that the Carrier IQ agent software was overriding GPS device settings in order to obtain location information even when the GPS was turned off.\n\nSection::::Updates.:Analytics and Carrier IQ.\n\nGenerally speaking, analytics companies collect, synthesize, and present aggregated user information to their customers to help them reduce maintenance costs, increase revenue, and improve the\n", "SnapTags can be used in Google's mobile Android operating system and iOS devices (iPhone/iPod/iPad) using The SnapTag Reader App or third party apps that have integrated the SnapTag Reader SDK. SnapTags can also be used by standard camera phones by taking a picture of the SnapTag and texting it to the designated short code or email address.\n", "Section::::Huawei p9.\n\nBULLET::::- IMSI\n\nSection::::External links.\n\nBULLET::::- Imei track and identify device model, manufacturer from IMEI or TAC numbers\n\nBULLET::::- 3GPP Change Request re Type Allocation Code\n\nBULLET::::- Identify phone and manufacturer by TAC\n\nBULLET::::- Public TAC Database - can download the entire database\n\nBULLET::::- Nokia TAC Database - mobile identification\n\nBULLET::::- Identify phone model and manufacturer by IMEI or TAC - mobile identification\n\nBULLET::::- IMEI application for mobile: Freeware for Java supported mobile phones to find phone model and manufacturer for entered IMEI number.\n\nBULLET::::- Check your phone model and manufacturer by TAC\n", "MobileMe maintained a synchronized address book and calendar feature using Push functions. When a user made a change to a contact or event on one device, it was automatically synced to the MobileMe servers and, by extension, all the user's other devices. Supported devices included the iPhone, Address Book and iCal on OS X, or Microsoft Outlook 2003 or later on Microsoft Windows. Subscription calendars in iCal on a Mac computer were not viewable on the online MobileMe service (although \"Birthdays\" was viewable online; as it gathered its information from Address Book, rather than CalDAV or iCalendar (.ics) subscription calendars). Conversely, on the iPhone \"Birthdays\" from Contacts on the iPhone were not viewable on the Calendar app (nor any other method; except looking them up individually in Contacts. Birthdays Calendar was added on iOS 4.3), but subscription calendars were available to view in Calendar by adding them through SettingsMail, Contacts, CalendarAdd Account.\n", "Find My Friends\n\nFind My Friends (called \"Find Friends\" on the SpringBoard) is a mobile phone tracking app and service for iOS devices developed by Apple Inc. The app allows a person approved by the user, who must also have a Apple device, to access the GPS location of the user's Apple mobile device. The app can be used to track children, family, and friends, besides others such as employees, without them being notified that they are being tracked. The app could also track the location of a person as a safety measure.\n\nSection::::Features.\n", "BULLET::::- Contact lists could only be copied from another phone by Verizon store employees. There was no way for the consumer to do this by any known means (over the air, via a memory or SIM card, wirelessly via Bluetooth and vCard, or via direct USB cable connection).\n\nBULLET::::- Kin had no calendar or appointment application, nor any ability to sync with Outlook calendar or Google Calendar. 
Some commentators suggested that a social phone should be able to share a social events calendar.\n", "In May 2019, Samsung started rolling out its OTA software update for the Galaxy J6 to have the latest Android 9.0 (Pie) with One UI that also comes with Galaxy S10 phones. The update added features like night mode and a re-styled user interface.\n\nThe phone is secured either by PIN, pattern, password, fingerprint, or facial recognition. Using the fingerprint and/or facial recognition requires one of the three aforementioned traditional security inputs. The phone comes protected by Samsung's Knox software.\n\nSection::::Networking.\n\nNetworking includes: \n", "Skype Click to Call (formerly called the \"Skype Web Toolbar\") recognizes phone and Skype Numbers, and is available for Internet Explorer, Google Chrome, and Mozilla Firefox on Windows. Such numbers on web pages are replaced with an icon that can be clicked to call the number using Skype, or right-clicked to provide further options, such as adding the number to Skype's list of contacts. The feature detects phone numbers automatically, but a web site developer can override the detection algorithm using a Meta element and mark the valid numbers individually.\n" ]
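The Signal passage above explains why hashing phone numbers is weak protection: the preimage space is small enough to enumerate. A minimal demonstration over a deliberately tiny slice of the number space (a real attack would simply loop over more numbers or use precomputed tables):

```python
import hashlib

def sha256_hex(number: str) -> str:
    return hashlib.sha256(number.encode()).hexdigest()

# A server might hold only this hash, believing the number is hidden.
leaked_hash = sha256_hex("+15551234567")

# Enumerate a (tiny, illustrative) slice of the phone-number space.
reverse_map = {
    sha256_hex(f"+1555123{n:04d}"): f"+1555123{n:04d}"
    for n in range(10000)
}
print(reverse_map.get(leaked_hash))  # +15551234567, the hash is reversed
# A full 10-digit space is only ~10^10 values, trivial for modern hardware,
# which is why the passage calls private contact discovery an unsolved problem.
```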
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-04470
Why different regions have different plug types?
In short, it could be worse. Various groups were formed to agree on standard types. These groups were mostly regional, i.e. North American companies agreeing to use one type, and British (and British-influenced areas like India) agreeing on a different type. Yes, patents were involved. And of course, companies don't like to pay for other people's patents, so that was one driving force for compatible standards. Naturally, regional groups that decide standards often try to improve on other groups' standards. The UK type is believed to be the safest and is much harder to accidentally pull out than the simpler, 2/3 prong US style. (Although in my mind, this comes off as national pride.) You can see the same thing with computer ports. Look back to the 80s and you can see joysticks and drives using all sorts of connector types. Thank God for USB.
[ "During the first fifty years of commercial use of electric power, standards developed rapidly based on growing experience. Technical, safety, and economic factors influenced the development of all wiring devices and numerous varieties were invented. After the two-prong electric plug was introduced in the 1920s, the three-pin outlet was developed. This format was introduced in order to mitigate the effect of a short circuit event, as the supply would be neutralised with earth. Gradually the desire for trade eliminated some standards that had been used in only a few countries. Former colonies may retain the standards of the colonising country. Sometimes offshore industrial plants or overseas military bases use the wiring practices of their controlling country instead of the surrounding region. Some countries have multiple voltages, frequencies and plug designs in use, which can create inconvenience and safety hazards. Hotels and airports may maintain sockets of foreign standards for the convenience of travellers. By 2018, there were 15 plug and socket types around the world.\n", "Voltage, frequency, and plug type vary, but large regions may use common standards. Physical compatibility of receptacles may not ensure compatibility of voltage, frequency, or connection to earth (ground), including plugs and cords. In some areas, older standards may still exist. Foreign enclaves, extraterritorial government installations, or buildings frequented by tourists may support plugs not otherwise used in a country, for the convenience of travellers.\n\nSection::::Main reference source: IEC World Plugs.\n", "In Iceland, Magic plugs were widely used in homes and businesses alongside Europlug and Schuko installations. Their installation in new homes was still quite common even in the late 1980s.\n\nSection::::Single phase electric stove plugs and sockets.\n", "Plugs and sockets for portable appliances became available in the 1880s, to replace connections to light sockets with wall-mounted outlets. A proliferation of types developed for both convenience and protection from electrical injury. Today there are about 20 types in common use around the world, and many obsolete socket types are found in older buildings. Coordination of technical standards has allowed some types of plug to be used across large regions to facilitate trade in electrical appliances, and for the convenience of travellers and consumers of imported electrical goods.\n", "In Brazil, this kind of plug is commonly found in high-power appliances like air conditioners, dishwashers, and household ovens. The reasons why it has been unofficially adopted for this use may be its robustness and high current-bearing capability, the impossibility of inverting the phase (active) and neutral pins, or the fact that Argentina, a border country, uses this plug; Argentina used to be more developed than Brazil, so there may have been a flow of high-powered appliances from Argentina to Brazil at some point in time.\n", "The Europlug is also used in the Middle East, Africa, South America, and Asia.\n\nSection::::Types in present use.:CEE 7 standard.:CEE 7/17 unearthed plug.\n", "While some forms of power plugs and sockets are set by international standards, countries may have their own different standards and regulations. 
For example, the colour-coding of wires may not be the same as for small mains plugs.\n\nSection::::Concepts and terminology.\n", "In addition to Germany, it is used in Albania, Austria, Belarus, Bosnia and Herzegovina, Bulgaria, Chile, Croatia, Denmark, Estonia, Finland, Georgia, Greece, Hungary, Iceland, Indonesia, Iran, Italy (standard CEI 23-50), Kazakhstan, Latvia, Lithuania, Luxembourg, Republic of Macedonia, Republic of Moldova, the Netherlands, Norway, Pakistan, Peru, Portugal, Romania, Russia, Serbia, Slovenia, South Korea, Spain, Sweden, Turkey, Ukraine, and Uruguay.\n", "Section::::Obsolete non-BS types.\n\nSection::::Obsolete non-BS types.:Wylex Plug.\n", "Mains electricity by country\n\nMains electricity by country includes a list of countries and territories, with the plugs, voltages and frequencies they commonly use for providing electrical power to appliances, equipment, and lighting typically found in homes and offices. (For industrial machinery, see Industrial and multiphase power plugs and sockets.) Some countries have more than one voltage available. For example, in North America most sockets are attached to a 120 V supply, but there is a 240 V supply available for large appliances. Often different sockets are mandated for different voltage or current levels.\n", "Standard plugs and sockets based on two round pins with centres spaced at 19 mm are in use in Europe, most of which are listed in IEC/TR 60083 \"Plugs and socket-outlets for domestic and similar general use standardized in member countries of IEC\". EU countries each have their own regulations and national standards; for example, some require child-resistant shutters, while others do not. CE marking is neither applicable nor permitted on plugs and sockets.\n\nSection::::Types in present use.:CEE 7 standard.:CEE 7/1 unearthed socket and CEE 7/2 unearthed plug.\n", "Europlugs are also in common use in Italy; they are standardized under CEI 23-34 S 1 for use with the 10 A socket and can be found fitted to Class II appliances with low current requirement (less than 2.5 A).\n\nThe current Italian standards provide for sockets to have child-resistant shutters (\"Sicury\" patent).\n\nSection::::Types in present use.:Italy (Type L).:Italian multiple standard sockets.\n\nIn modern installations in Italy (and in other countries where Type L plugs are used) it is usual to find sockets that can accept more than one standard.\n", "The IEC \"World Plugs\" lists Type G as being used in the following locations: Bahrain, Bangladesh, Belize, Bhutan, Botswana, Brunei Darussalam, Cambodia, Cyprus, Dominica, Falkland Islands, Gambia, Ghana, Gibraltar, Grenada, Guyana, Hong Kong, Iraq, Ireland, Isle of Man, Jordan, Kenya, Kuwait, Lebanon, Macau, Malawi, Malaysia, Maldives, Malta, Mauritius, Myanmar, Nigeria, Oman, Pakistan, Qatar, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, Saudi Arabia, Seychelles, Sierra Leone, Singapore, Solomon Islands, Sri Lanka, Tanzania, Uganda, United Arab Emirates, United Kingdom, Vanuatu, Vietnam, Yemen, Zambia, Zimbabwe.\n", "The International Electrotechnical Commission publishes a web microsite \"World Plugs\" which provides the main source for this page, except where other sources are indicated. 
\"World Plugs\" includes some history, a description of plug types, and a list of countries giving the type(s) used and the mains voltage and frequency.\n", "Section::::List of plugs.:Legacy.\n\nBULLET::::- 600 series connector, Australia\n\nBULLET::::- Protea, South Africa\n\nBULLET::::- SS 455 15 50, Sweden and Iceland\n\nBULLET::::- Telebrás plug, Brazil\n\nBULLET::::- Tripolar plug, Italy\n\nBULLET::::- BTicino 2021 (with or without line interruption), Italy (rare)\n\nSection::::List by country or territory.\n", "This plug is often used for air conditioners and washing machines. The IEC \"World Plugs\" lists Type M as being used in the following locations: Bhutan, Botswana, India, Israel, Lesotho, Macau, Malaysia, Mozambique, Namibia, Nepal, Pakistan, Singapore, South Africa, Sri Lanka, Swaziland.\n\nSection::::International usage of BS types.:Standards derived from BS 1363.\n\nSection::::International usage of BS types.:Standards derived from BS 1363.:Irish I.S. 401.\n", "AC power plugs and sockets: British and related types\n\nPlugs and sockets for electrical appliances not hardwired to mains electricity originated in Britain in the 1880s and were initially two-pin designs. These were usually sold as a mating pair, but gradually de facto and then official standards arose to enable the interchange of compatible devices. British standards have proliferated throughout large parts of the former British Empire.\n", "Section::::Obsolete non-BS types.:Dorman & Smith (D&S).\n", "Called \"Tripoliki\" (τριπολική, meaning \"three-pole\"), the standard had 3 round pins, similar to the post-1989 Israeli SI 32 and Thai TIS 166-2549 types. The Tripoliki was virtually abandoned by the decade of 1980, but can still be found in houses constructed before 1980, and not renovated. Previous to the large-scale adoption of Schuko plugs, this was the only way to use an earthed appliance in Greece. It can accept Europlugs, and also (but with no earth connection possible) French and German types.\n", "Section::::Unusual types.\n\nSection::::Unusual types.:Lampholder plug.\n\nA lampholder plug fits into a light socket in place of a light bulb to connect appliances to lighting circuits. Where a lower rate was applied to electric power used for lighting circuits, lampholder plugs enabled the consumers to reduce their electricity costs. Lampholder plugs are rarely fused. Edison screw lampholder adaptors (for NEMA 1-15 plugs) are still commonly used in the Americas.\n\nSection::::Unusual types.:Soviet adaptor plugs.\n", "Note: See the NEMA 1-15 ungrounded (Type A) section of this page for the parallel blade patent reference numbers.\n\nSection::::Obsolete types.:U.S. adaptors.\n\nThese adaptors are obsolete because they are not polarized; polarized versions of these types are still available in the U.S.\n\nSection::::Obsolete types.:UK obsolete types.\n\nBefore the 1970s, several types of proprietary plugs and sockets were commonly used in Britain, alongside types which conformed to national standards.\n\nSection::::Obsolete types.:Old Greek sockets.\n", "Because they have no earth connections they have been or are being phased out in most countries. The regulations of countries using the CEE 7/3 and CEE 7/5 socket standards vary in whether CEE 7/1 sockets are still permitted in environments where the need for earthing is less critical. Sweden, for example, prohibited them from new installations in 1994. 
In Germany unearthed sockets are rare, whereas in the Netherlands and Sweden it is still common to find them in \"dry areas\" such as in bedrooms or living rooms. Some countries prohibit use of unearthed and earthed sockets in the same room, in the \"insulated room\" concept, so that people cannot touch an earthed object and one that has become live, at the same time.\n", "In Brazil, similar plugs and sockets are still commonly used for high-power appliances like air conditioners, dishwashers, and household ovens. Although being often called \"Argentinian plug,\" it is actually based on the American NEMA 10-20 standard, and is incompatible with Argentinian IRAM plugs. Since Brazil adopted the NBR 14136 standard which includes a 20 A version, the original motivation to use the NEMA 10-20 plug has ceased to exist.\n\nSection::::Types in present use.:Australian/New Zealand standard AS/NZS 3112 (Type I), used in Australasia.\n", "Section::::British plugs and sockets regulatory system.\n\nA Statutory Instrument, the Plugs and Sockets etc. (Safety) Regulations 1987 was introduced to specifically regulate plugs and sockets in the United Kingdom. This was revised by the Plugs and Sockets etc. (Safety) Regulations 1994. The guidance notes to the 1994 regulations state:\n", "The German Schuko-system plug is believed to date from 1925 and is attributed to Albert Büttner. As the need for safer installations became apparent, earthed three-contact systems were made mandatory in most industrial countries.\n\nSection::::Development.:Proliferation.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-06201
How does a chirp frequency in a radar work, and how does it improve resolution?
Chirping in radar technology works by sending a radar chirp (i.e. a longer pulse with ever-changing frequency) and then working some black engineering magic on the receiver to create a *single sharp pulse* from the chirp (this works if you know what kind of signal you are looking/filtering for). This improves *[range resolution]( URL_1 )* (i.e. objects close together don't appear as one) because the pulses after your magic filtering on the receiving end are much sharper than in a normal radar. Another benefit of transmitting a longer chirp that then gets compressed in the receiver is that it's harder to send a short, very powerful signal than to send a slightly longer, slightly less powerful signal. You get the same energy in your pulse, you just have more time to send that energy. The keyword for the "signal processing black magic" part is *[cross correlation]( URL_0 )*.
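A minimal numpy sketch of the "black magic" (matched filtering by cross correlation) described in this answer: transmit a long linear chirp, correlate the noisy received signal with a reference copy, and recover a far narrower pulse. All parameter values are arbitrary illustrations:

```python
import numpy as np

np.random.seed(0)
fs = 1e6                          # sample rate: 1 MHz (illustrative)
T = 1e-3                          # transmitted pulse length: 1 ms ("long")
t = np.arange(0, T, 1 / fs)
f0, f1 = 0.0, 100e3               # linear sweep from 0 to 100 kHz
k = (f1 - f0) / T                 # chirp rate, Hz per second
chirp = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

# Simulated echo: the chirp delayed inside a longer, noisy receive window.
rx = np.zeros(4 * len(chirp))
delay = 1500                      # delay in samples, i.e. target range
rx[delay:delay + len(chirp)] += chirp
rx += 0.5 * np.random.randn(len(rx))

# Pulse compression: cross-correlate the received signal with the known
# transmitted chirp (a matched filter).
compressed = np.correlate(rx, chirp, mode="valid")
print("true delay:", delay, " detected:", int(np.argmax(compressed)))
# The correlation peak is only ~1/bandwidth wide (about 10 us here), even
# though the transmitted pulse lasted 1 ms: that is the resolution gain.
```

Two overlapping 1 ms echoes would be inseparable before compression, but their roughly 10 µs correlation peaks remain distinct, which is the range-resolution improvement the answer refers to.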
[ "Section::::Signal-to-noise ratio improvements by pulse compression.\n\nThe amplitude of random noise is not changed by the compression process, so the signal to noise ratios of received chirp signals are increased in the process. In the case of a high power search radars, this extends the range performance of the system, while for stealth systems the property will permit lower transmitter powers to be used.\n\nAs an illustration, a possible received noise sequence is shown, which contains a low amplitude chirp signal obscured within it. After processing by the compressor, the compressed pulse is clearly visible above the noise floor.\n", "There are as many ways to increase the bandwidth of a signal as there are forms of modulation – it is simply a matter of increasing the rate of that modulation. However, the two most common methods used in UWB radar, including SAR, are very short pulses and high-bandwidth chirping. A general description of chirping appears elsewhere in this article. The bandwidth of a chirped system can be as narrow or as wide as the designers desire. Pulse-based UWB systems, being the more common method associated with the term \"UWB radar\", are described here.\n", "In a typical radar system, the Doppler frequency is a small fraction of the swept frequency range (i.e. the system bandwidth) of the chirp, so the range errors due to Doppler are found to be minor. For example, for fdTerman F. E., \"Electronic and Radio Engineering, 4th Edition\", McGraw Hill 1955, p.1033/ref\n", "In principle, the transmitted pulses can be generated by applying impulses to the input of the dispersive transmit filter, with the resultant output chirp amplified as necessary for transmission. Alternatively, a voltage controlled oscillator may be used to generate the chirp signal. To achieve maximum transmitted power (and so achieve maximum range) it is normal for a radar system to transmit chirp pulses at constant amplitude from a transmitter run in a near-limiting condition. The chirp signals reflected from targets are amplified in the receiver and then processed by the compression filter to give narrow pulses of high amplitude, as previously described.\n", "A common technique for many radar systems (usually also found in SAR systems) is to \"chirp\" the signal. In a \"chirped\" radar, the pulse is allowed to be much longer. A longer pulse allows more energy to be emitted, and hence received, but usually hinders range resolution. But in a chirped radar, this longer pulse also has a frequency shift during the pulse (hence the chirp or frequency shift). When the \"chirped\" signal is returned, it must be correlated with the sent pulse. Classically, in analog systems, it is passed to a dispersive delay line (often a surface acoustic wave device) that has the property of varying velocity of propagation based on frequency. This technique \"compresses\" the pulse in time – thus having the effect of a much shorter pulse (improved range resolution) while having the benefit of longer pulse length (much more signal returned). Newer systems use digital pulse correlation to find the pulse return in the signal.\n", "Although the sidelobe level has been much reduced, there are some undesirable consequences of the weighting process. Firstly, there is an overall loss of gain, with the peak amplitude of the main lobe reduced by about 5.4 dB and, secondly, the half-power beam width of the main pulse has increased by nearly 50%. 
In, say, a radar system, these effects would cause loss of range and reduced range resolution, respectively.\n", "Section::::Existing spectral estimation approaches.:Chirped (pulse-compressed) radars.\n", "A major improvement in performance of chirp pulse generation and compression systems was achieved with the development of SAW filters. These allowed much more precision in the synthesis of filter characteristics and consequently in the radar performance. The inherent temperature sensitivity of the quartz substrates was overcome by mounting both the transmit and receive filters in a common package, so providing thermal compensation. The increased precision offered by SAW technology, enabled time sidelobe levels approaching −30 dB to become achievable by radar systems. (In actual fact, the performance level now achievable was set more by limitations in the system hardware than in SAW shortcomings).\n", "For systems using digital processing, it is important to carry out the chirp compression in the digital domain, after the A/D converters. If the compression process is carried out in the analogue domain before digitization (by a SAW filter, for example), the resulting high-amplitude pulses will place excessive demands on the dynamic range of the A/D converters.\n\nSection::::Pre-correction of system characteristics.\n\nThe transmitter and receiver subsystems of a radar are not distortion free. In consequence system performance is often less than optimum. In particular, the time sidelobe levels of the compressed pulses are found to be disappointingly high.\n", "BULLET::::- Sine wave, like air raid siren\n\nBULLET::::- Sawtooth wave, like the chirp from a bird\n\nBULLET::::- Triangle wave, like police siren in the United States\n\nBULLET::::- Square wave, like police siren in the United Kingdom\n\nRange demodulation is limited to 1/4 wavelength of the transmit modulation. Instrumented range for 100 Hz FM would be 500 km. That limit depends upon the type of modulation and demodulation. The following generally applies.\n", "The basics of the method for radar applications were developed in the late 1940s and early 1950s, but it was not until 1960, following declassification of the subject matter, that a detailed article on the topic appeared the public domain. Thereafter, the number of published articles grew quickly, as demonstrated by the comprehensive selection of papers to be found in a compilation by Barton.\n", "BULLET::::- Maximum Unambiguous Range\n\nAt its most simplistic, MUR (Maximum Unambiguous Range) for a Pulse Stagger sequence may be calculated using the TSP (Total Sequence Period). TSP is defined as the total time it takes for the Pulsed pattern to repeat. This can be found by the addition of all the elements in the stagger sequence. The formula is derived from the speed of light and the length of the sequence :\n", "Resolution in the range dimension of the image is accomplished by creating pulses which define very short time intervals, either by emitting short pulses consisting of a carrier frequency and the necessary sidebands, all within a certain bandwidth, or by using longer \"chirp pulses\" in which frequency varies (often linearly) with time within that bandwidth. The differing times at which echoes return allow points at different distances to be distinguished.\n", "where c is the speed of light, usually in metres per microsecond, and TSP is the addition of all the positions of the stagger sequence, usually in microseconds. 
However, it should be noted that in a stagger sequence, some intervals may be repeated several times; when this occurs, it is more appropriate to consider TSP as the addition of all the unique intervals in the sequence.\n", "Any Doppler frequency shift on the received signals will degrade the ripple cancelation process and this is discussed in more detail next.\n\nSection::::Properties of linear chirps.:Doppler tolerance of linear chirps.\n\nWhenever the radial distance between a moving target and the radar changes with time, the reflected chirp returns will exhibit a frequency shift (Doppler shift). After compression, the resulting pulses will show some loss in amplitude, a time (range) shift and degradation in sidelobe performance.\n", "BULLET::::- Chirp spread spectrum - A part of the wireless telecommunications standard IEEE 802.15.4a CSS (see Chirp Spread Spectrum (CSS) PHY Presentation for IEEE P802.15.4a).\n\nBULLET::::- Chirped mirror\n\nBULLET::::- Chirped pulse amplification\n\nBULLET::::- Chirplet transform - A signal representation based on a family of localized chirp functions, each member of which can usually be expressed as parameterized transformations of each other.\n\nBULLET::::- Continuous-wave radar\n\nBULLET::::- Dispersion (optics)\n", "Against the marine transmitter, the receiver combined a square-law: Power-level detector with cross-collation of a local copy of the pulse against the received signal. This method improved sensitivity for poorer time resolution, because correlated peaks are twice the width of uncorrelated peaks.\n", "Section::::More recent work on chirp compression techniques – some examples.\n\nThe growth in digital processing and methods had a significant influence in the field of chirp pulse compression. An introduction to these techniques is provided in a chapter of the Radar Handbook (3rd ed.), edited by Skolnik.\n", "Section::::Operational use.\n", "Similar problems arise in digital signal processing where the spectral shaping is provided by a window function, a process sometimes called apodization. In the case of an antenna array, similar profiling by \"weighting functions\" is used to reduce the spatial sidelobes of the radiation pattern.br \n\nAlthough spectral shaping of a chirp could be applied in the frequency domain, better results are obtained if the shaping is carried out in the time domain.br\n", "Fortunately it is possible to compensate for several system properties, provided they are stable and can be characterized adequately when a system is first assembled. This is not difficult to implement in radars using digital look-up tables, since these tables can be easily amended to include compensation data. Phase pre-corrections can be included in the expander tables and phase and amplitude corrections can be included in the compressor tables, as required.\n", "Section::::The radar signal in the time domain.:Pulse width.\n\nThe pulse width (formula_1) (or pulse duration) of the transmitted signal is the time, typically in microseconds, each pulse lasts. If the pulse is not a perfect square wave, the time is typically measured between the 50% power levels of the rising and falling edges of the pulse.\n", "Basic Fourier analysis shows that any repetitive complex signal consists of a number of harmonically related sine waves. The radar pulse train is a form of square wave, the pure form of which consists of the fundamental plus all of the odd harmonics. 
The exact composition of the pulse train will depend on the pulse width and PRF, but mathematical analysis can be used to calculate all of the frequencies in the spectrum. When the pulse train is used to modulate a radar carrier, the typical spectrum shown on the left will be obtained.\n", "To combine these two features requires that the radar carry out \"n\"th time around tracking, that is, it had to be able to track an echo resulting from a transmitted pulse other than that sent as the start of the same PRF period in which the echo was received.\n", "The CHIRP algorithm was developed to process data collected by the very-long-baseline Event Horizon Telescope, the international collaboration that in 2019 captured the black hole image of M87* for the first time. CHIRP was not used to produce the image, but was an algebraic solution for the extraction of information from radio signals producing data by an array of radio telescopes scattered around the globe. Stable sources (that don't change over short periods of time) can also gain signal by integrating the change at each location with the rotation of the earth. Because the radio telescopes used in the project produce vast amounts of data, which contain gaps, the CHIRP algorithm is one of the ways to fill the gaps in the collected data. \n" ]
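The MUR passages above lend themselves to a short worked example. The sketch below assumes the standard relation MUR = (TSP × c) / 2, reconstructed above because the original equation was lost in extraction, and the stagger intervals are made-up values.

```python
# Maximum Unambiguous Range for a pulse stagger sequence (assumed values).
C_KM_PER_US = 0.3  # speed of light is roughly 0.3 km per microsecond

def mur_km(stagger_intervals_us):
    """Return MUR in km for a list of pulse intervals in microseconds."""
    tsp = sum(stagger_intervals_us)   # total sequence period (TSP)
    return tsp * C_KM_PER_US / 2      # divide by 2 for the round trip

print(mur_km([1000.0]))           # a single 1 ms PRI -> 150.0 km
print(mur_km([1000.0, 1250.0]))   # a two-interval stagger -> 337.5 km
```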
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-05885
why do our hearts not get tired
The heart pumps blood almost directly into itself, and drains blood from its venous circulation directly into the right atrium. This is the main difference. Your other muscle tissues are far away, there are a lot of them relative to the size of the heart, and they receive blood more slowly. This means more delays in getting oxygen to these tissues, and more delay removing metabolic waste products, and just a *lot* of muscle tissue to supply. When we exercise, the expanse of our skeletal muscles is simply too much for our body to run at peak efficiency indefinitely. Our muscles eventually run out of oxygen and glucose, and build up waste products. The heart, conversely, is first in line for the good stuff from our arterial blood supply. Similarly, it's sized such that more than enough blood can get to it, and feed/remove waste from it, so that it can run indefinitely at peak efficiency. That said, there are things that can happen which cause the heart to become less effective and have to grow in size to compensate. It isn't getting tired per se, but it's certainly strained in these situations. This is cardiomegaly, and is related to heart failure.
[ "Cardiomyocytes contain T-tubules, pouches of membrane that run from the surface to the cell's interior which help to which improve the efficiency of contraction. The majority of these cells contain only one nucleus (although they may have as many as four), unlike skeletal muscle cells which typically contain many nuclei. Cardiac muscle cells contain many mitochondria which provide the energy needed for the cell in the form of adenosine triphosphate (ATP), making them highly resistant to fatigue.\n\nSection::::Structure.:Histology.:Cardiac muscle cells.:T-tubules.\n", "Subjects' symptoms from non-compaction cardiomyopathy range widely. It is possible to be diagnosed with the condition, yet not to have any of the symptoms associated with heart disease. Likewise it possible to have severe heart failure, which even though the condition is present from birth, may only manifest itself later in life. Differences in symptoms between adults and children are also prevalent with adults more likely to have heart failure and children from depression of systolic function.\n\nCommon symptoms associated with a reduced pumping performance of the heart include:\n\nBULLET::::- Breathlessness\n\nBULLET::::- Fatigue\n\nBULLET::::- Swelling of the ankles\n", "Therefore, circadian regulation is more than sufficient to explain periods of activity and quiescence that are adaptive to an organism, but the more peculiar specializations of sleep probably serve different and unknown functions. Moreover, the preservation theory needs to explain why carnivores like lions, which are on top of the food chain and thus have little to fear, sleep the most. It has been suggested that they need to minimize energy expenditure when not hunting.\n\nSection::::Sleep function.:Waste clearance from the brain.\n", "BULLET::::- The heart has a claim to being the muscle that performs the largest quantity of physical work in the course of a lifetime. Estimates of the power output of the human heart range from 1 to 5 watts. This is much less than the maximum power output of other muscles; for example, the quadriceps can produce over 100 watts, but only for a few minutes. The heart does its work continuously over an entire lifetime without pause, and thus does \"outwork\" other muscles. An output of one watt continuously for eighty years yields a total work output of two and a half gigajoules.\n", "Section::::Structure.:Coronary circulation.\n\nHeart tissue, like all cells in the body, needs to be supplied with oxygen, nutrients and a way of removing metabolic wastes. This is achieved by the coronary circulation, which includes arteries, veins, and lymphatic vessels. Blood flow through the coronary vessels occurs in peaks and troughs relating to the heart muscle's relaxation or contraction.\n", "The cardiovascular system is regulated by the autonomic nervous system, which includes the sympathetic and parasympathetic nervous systems. A distinct balance between these systems is crucial for the pathophysiology of cardiovascular disease. An imbalance can be caused by hormone levels, lifestyle, environmental stressors, and injuries.\n", "Section::::Chronic fatigue syndrome.\n\nChronic fatigue syndrome is a name for a group of diseases that are dominated by persistent fatigue. The fatigue is not due to exercise and is not relieved by rest. 
br\n", "Because the rest of the body, and most especially the brain, needs a steady supply of oxygenated blood that is free of all but the slightest interruptions, the heart works constantly and sometimes works quite hard. Therefore its circulation is of major importance not only to its own tissues but to the entire body and even the level of consciousness of the brain from moment to moment. \n", "BULLET::::- Heart: The heart has a warm and dry temperament. The constant movement of the heart throughout one's life is a proof of its warmness. Consuming too much of food stuff with warm \"Mizaj\" such as pepper and spices would give the person palpitations. Having a fixed shape and being stiff and firm are the signs of the heart dryness.\n", "Section::::Structure.:Histology.:Cardiac muscle cells.\n\nCardiac muscle cells or cardiomyocytes are the contracting cells which allow the heart to pump. Each cardiomyocyte needs to contract in coordination with its neighbouring cells - known as a functional syncytium - working to efficiently pump blood from the heart, and if this coordination breaks down then – despite individual cells contracting – the heart may not pump at all, such as may occur during abnormal heart rhythms such as ventricular fibrillation.\n", "Contracting heart muscle uses a lot of energy, and therefore requires a constant flow of blood to provide oxygen and nutrients. Blood is brought to the myocardium by the coronary arteries. These originate from the aortic root and lie on the outer or epicardial surface of the heart. Blood is then drained away by the coronary veins into the right atrium.\n\nSection::::Structure.:Histology.\n", "Cardiomyocytes show striations similar to those on skeletal muscle cells. Unlike multinucleated skeletal cells, the majority of cardiomyocytes contain only one nucleus, although they may have as many as four. Cardiomyocytes have a high mitochondrial density, which allows them to produce adenosine triphosphate (ATP) quickly, making them highly resistant to fatigue.\n\nSection::::Structure.\n\nThere are two types of cells within the heart: the cardiomyocytes and the cardiac pacemaker cells.\n", "Because cardiac output is related to the quantity of blood delivered to various parts of the body, it is an important indicator of how efficiently the heart can meet the body's demands for perfusion. For instance, physical exercise requires a higher than resting-level of oxygen to support increased muscle activity, where, in the case of heart failure, actual CO may be insufficient to support even simple activities of daily living; nor can it increase sufficiently to meet the higher metabolic demands stemming from even moderate exercise.\n", "Cardiac output is primarily controlled by the oxygen requirement of tissues in the body. In contrast to other pump systems, the heart is a demand pump that does not regulate its own output. When the body has a high metabolic oxygen demand, the metabolically controlled flow through the tissues is increased, leading to a greater flow of blood back to the heart, leading to higher cardiac output.\n", "There is a distinctly different electrical pattern involving the contractile cells. In this case, there is a rapid depolarization, followed by a plateau phase and then repolarization. This phenomenon accounts for the long refractory periods required for the cardiac muscle cells to pump blood effectively before they are capable of firing for a second time. 
These cardiac myocytes normally do not initiate their own electrical potential, although they are capable of doing so, but rather wait for an impulse to reach them.\n", "There are three types of muscles—cardiac, skeletal, and smooth. Smooth muscles are used to control the flow of substances within the lumens of hollow organs, and are not consciously controlled. Skeletal and cardiac muscles have striations that are visible under a microscope due to the components within their cells. Only skeletal and smooth muscles are part of the musculoskeletal system and only the skeletal muscles can move the body. Cardiac muscles are found in the heart and are used only to circulate blood; like the smooth muscles, these muscles are not under conscious control. Skeletal muscles are attached to bones and arranged in opposing groups around joints. Muscles are innervated, to communicate nervous energy to, by nerves, which conduct electrical currents from the central nervous system and cause the muscles to contract.\n", "Cardiomyocytes, on the other hand, derive the majority of their energy from long-chain fatty acids and their acyl-CoA equivalents. Cardiac ischemia, as it slows the oxidation of fatty acids, causes an accumulation of acyl-CoA and induces K channel opening while free fatty acids stabilize its closed conformation. This variation was demonstrated by examining transgenic mice, bred to have ATP-insensitive potassium channels. In the pancreas, these channels were always open, but remained closed in the cardiac cells.\n\nSection::::Sensor of cell metabolism.:Mitochondrial K and the regulation of aerobic metabolism.\n", "There are three distinct types of muscles: skeletal muscles, cardiac or heart muscles, and smooth (non-striated) muscles. Muscles provide strength, balance, posture, movement and heat for the body to keep warm.\n\nSection::::Muscles.:Skeletal muscle.\n", "Section::::Respiratory system adaptations.\n\nAlthough all of the described adaptations in the body to maintain homeostatic balance during exercise are very important, the most essential factor is the involvement of the respiratory system. The respiratory system allows for the proper exchange and transport of gases to and from the lungs while being able to control the ventilation rate through neural and chemical impulses. In addition, the body is able to efficiently use the three energy systems which include the phosphagen system, the glycolytic system, and the oxidative system.\n\nSection::::Temperature regulation.\n", "Cardiomyocytes make up the atria (the chambers in which blood enters the heart) and the ventricles (the chambers where blood is collected and pumped out of the heart). These cells must be able to shorten and lengthen their fibers and the fibers must be flexible enough to stretch. These functions are critical to the proper form during the beating of the heart.\n", "The existence of a central governor was proposed by Tim Noakes in 1997, but a similar idea was suggested in 1924 by Archibald Hill.\n\nIn contrast to this idea is the one that fatigue is due to peripheral ‘limitation’ or ‘catastrophe’. 
In this view, regulation by fatigue occurs as a consequence of a failure of homeostasis directly in muscles.\n\nSection::::History.\n\nSection::::History.:Archibald Hill.\n\nThe 1922 Nobel Prize in Physiology or Medicine winner Archibald Hill proposed in 1924 that the heart was protected from anoxia in strenuous exercise by the existence of a governor.\n", "As the center focus of cardiology, the heart has numerous anatomical features (e.g., atria, ventricles, heart valves) and numerous physiological features (e.g., systole, heart sounds, afterload) that have been encyclopedically documented for many centuries.\n\nDisorders of the heart lead to heart disease and cardiovascular disease and can lead to a significant number of deaths: cardiovascular disease is the leading cause of death in the United States and caused 24.95% of total deaths in 2008.\n\nThe primary responsibility of the heart is to pump blood throughout the body.\n", "The cardiac cycle is the performance of the human heart from the ending of one heartbeat to the beginning of the next. It consists of two periods: one during which the heart muscle relaxes and refills with blood, called diastole (), followed by a period of robust contraction and pumping of blood, dubbed systole (). After emptying, the heart immediately relaxes and expands to receive another influx of blood \"returning from\" the lungs and other systems of the body, before again contracting to \"pump blood to\" the lungs and those systems. A normally performing heart must be fully expanded before it can efficiently pump again. Assuming a healthy heart and a typical rate of 70 to 75 beats per minute, each cardiac cycle, or heartbeat, takes about 0.8 seconds to complete the cycle.\n", "Since they had been bred as slaves, the Newcomers are 30% stronger and 20% smarter than the average human, as well as having a life expectancy of about 140 years and keener senses. This is partially due to the way they have been bred for adaptability but also due to their anatomical differences, which include having two hearts. With two hearts, the onset time of poisons is halved. If one heart is damaged, the other will work harder for a limited time. One heart is located in approximately the same spot as a human heart, the second is located in the center of the chest at the bottom of the rib cage.\n", "Section::::Pathomechanisms.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-08578
How are humanities academic papers published?
> Since non-humanities (scientific) papers are peer-reviewed, the results are replicable and have tangible outcomes that are quantifiable... Humanities might find it harder to support or reject ideas in objective, tangible ways, but the peer review process doesn't require that. It doesn't attempt to replicate results, that would be a whole project and publication of its own. Peer review is simply respected people in the field going over the paper and giving it a subjective "looks good to me". What you are describing would be follow up research.
[ "Publishing in the humanities is in principle similar to publishing elsewhere in the academy; a range of journals, from general to extremely specialized, are available, and university presses issue many new humanities books every year. The arrival of online publishing opportunities has radically transformed the economics of the field and the shape of the future is controversial. Unlike science, where timeliness is critically important, humanities publications often take years to write and years more to publish. Unlike the sciences, research is most often an individual process and is seldom supported by large grants. Journals rarely make profits and are typically run by university departments.\n", "Although originally intended to run on Open Journal Systems, in 2017 OLH started development of a new platform, Janeway. Currently the main press site and the journal \"Orbit\" are hosted on the new platform. The University of Lincoln, in partnership with the Public Knowledge Project, offered a funded place for an MSc by Research in Computer Science in order to develop an open-source XML typesetting tool as proposed by the Open Library of Humanities technical roadmap. In November 2013 it was announced that the Public Knowledge Project will be funding the development of the typesetter, known as meTypeset.\n", "In 2008, the Press began making out-of-print scholarly books freely available on-line at the University of Pittsburgh Press Digital Editions collection through the University Library System’s D-Scribe Digital Publishing program. By 2010, the University of Pittsburgh Press Digital Editions included more than 750 titles, and more than 350 previously out-of-print titles have been reissued in paperback format as Prologue Editions.\n", "Section::::Major digitized collections.:University of Pittsburgh Press Digital Editions.\n\nThe University of Pittsburgh Press Digital Editions is a collaboration between the University of Pittsburgh Press and the University Library System that has digitized over 745 monographs in order make them freely available to the public via the internet. Mostly out-of-print titles, the collection includes fully searchable titles from the Pitt Latin American Series; Pitt Series in Russian and East European Studies; and Composition, Literacy, and Culture.\n\nSection::::Major digitized collections.:Stephen Foster Sketchbook.\n", "The first specialized journal in the digital humanities was \"Computers and the Humanities\", which debuted in 1966. The Association for Literary and Linguistic Computing (ALLC) and the Association for Computers and the Humanities (ACH) were then founded in 1977 and 1978, respectively.\n", "Researchers in the humanities have developed numerous large- and small-scale digital corporation, such as digitized collections of historical texts, along with the digital tools and methods to analyze them. Their aim is both to uncover new knowledge about corpora and to visualize research data in new and revealing ways. 
Much of this activity occurs in a field called the digital humanities.\n\nSection::::Today.:In the United States.:STEM.\n", "Specific areas of work that the Arts and Humanities Data Service covers include:\n\nBULLET::::- digital preservation — including a series of preservation handbooks detailing specific preservation issues with various digital file formats and information on its digital repository\n\nBULLET::::- Advice on digitization — including a series of case studies of existing digitization projects, information papers on specific issues in digitization, and longer Guides to Good Practice dealing with digitization topics in particular arts and humanities disciplines\n\nBULLET::::- Online collections created by universities and museums in the UK. These include:\n", "BULLET::::- Aphasiology Archive\n\nBULLET::::- Archive of European Integration\n\nBULLET::::- D-Scholarship@Pitt, the Institutional Repository of the University of Pittsburgh, including Electronic Theses and Dissertations (ETDs)\n\nBULLET::::- Industry Studies Working Papers\n\nBULLET::::- Minority Health and Health Equity Archive\n\nBULLET::::- PhilSci-Archive, a preprint repository for the field of Philosophy of Science\n\nSection::::Electronic journal publishing program.\n", "The Open Humanities Press (OHP) is a scholar-led publishing initiative founded by Paul Ashton (Australia), Gary Hall (UK), Sigi Jöttkandt (Australia) and David Ottina (US). Its aim is to raise awareness of open access publishing in the humanities and to provide promotional and technical support to open access journals that have been invited by OHP's editorial oversight group to join the collective.\n", "It is possible to search and find the paper by defined topics through an Internet search. Secondly, all submitted papers are stored permanently and receive a stable web address, as for example the paper in this example:\n\nSection::::History.\n", "Section::::Journal indexing and archiving.\n\nAs articles become suitable, indexing on DOAJ, Pubmed, and MEDLINE is sought for all journals, as is archiving in PubMed Central. Articles also appear on indexes and repositories, including OAIster and Pubget. The publisher offers an Open Archives Initiative Protocol for Metadata Harvesting.\n\nSection::::Green and Gold OA.\n\nSHERPA/RoMEO has identified LA as a Green OA publisher. This means that authors are permitted to archive their work prior to and after publication. LA is also a gold OA publisher because all articles are freely available online immediately upon publication.\n\nSection::::Copyright.\n", "Section::::OPUS Archives.:Marija Gimbutas Collection at OPUS.\n\nOPUS holds over 15,000 slides utilized by Marija Gimbutas in her lectures and books on Neolithic civilizations and the goddess, thousands of research catalogue cards in numerous languages handwritten by Gimbutas, and extensive texts on the subjects of history, archaeology, and the humanities.\n\nSection::::OPUS Archives.:James Hillman Collection at OPUS.\n\nJames Hillman's collection includes first draft manuscripts of his books, including \"Re-Visioning Psychology\", which earned him a nomination for the Pulitzer Prize. Hillman's prolific career is documented through correspondence, personal notes, and unfinished projects that are available for pursuit by scholars of the next generation.\n", "Section::::Citations.\n\nAcademic authors cite sources they have used, in order to support their assertions and arguments and to help readers find more information on the subject. 
It also gives credit to authors whose work they use and helps avoid plagiarism.\n\nEach scholarly journal uses a specific format for citations (also known as references). Among the most common formats used in research papers are the APA, CMS, and MLA styles.\n", "There are thousands of digital humanities projects, ranging from small-scale ones with limited or no funding to large-scale ones with multi-year financial support. Some are continually updated while others may not be due to loss of support or interest, though they may still remain online in either a beta version or a finished form. The following are a few examples of the variety of projects in the field:\n\nSection::::Projects.:Digital archives.\n", "OHP launched in May 2008 with seven open access journals and was named a \"beacon of hope\" by the Public Library of Science. In August, 2009 OHP announced it will begin publishing open access book series edited by senior members of OHP's board. \n\nSection::::Works.\n\nSection::::Works.:Books.\n\nThe monograph series are:\n\nBULLET::::- \"New Metaphysics\" edited by Graham Harman and Bruno Latour\n\nBULLET::::- \"Critical Climate Change\" edited by Claire Colebrook and Tom Cohen\n\nBULLET::::- \"Fibreculture Books\" edited by Andrew Murphie\n\nBULLET::::- \"Liquid Books\" edited by Gary Hall and Clare Birchall\n\nBULLET::::- \"Immediations\" edited by the SenseLab\n", "Since the advent of the Open Archives Initiative, preprints and postprints have been deposited in institutional repositories, which are interoperable because they are compliant with the Open Archives Initiative Protocol for Metadata Harvesting.\n\nEprints are at the heart of the open access initiative to make research freely accessible online. Eprints were first deposited or self-archived in arbitrary websites and then harvested by virtual archives such as CiteSeer (and, more recently, Google Scholar), or they were deposited in central disciplinary archives such as arXiv or PubMed Central.\n", "The research projects, essays, and documentation are the products of a collaboration between humanities and computer science research faculty, computer professionals, student assistants and project managers, and library faculty and staff. In many cases, this work is supported by private or federal funding agencies. In all cases, it is supported by the Fellows’ home departments; the College or School to which those departments belong; the University of Virginia Library; the Vice President for Research and Public Service; the Vice President and Chief Information Officer; the Provost; and the President of the University of Virginia.\n\nSection::::History.\n", "Open Humanities Press was founded in 2006 by Sigi Jottkandt, David Ottina, and Paul Ashton alongside Gary Hall. OHP is the first open-access publishing ‘house’ explicitly dedicated to critical and cultural theory with the aim to develop a new sustainable business model for the open publication and dissemination of academic research and scholarship in the arts and humanities. In 2009 OHP launched the 'monograph project'. 
Designed to publish monographs in an open access manner, this project is run in collaboration with the University of Michigan Library’s Scholarly Publishing Office, University of California, Irvine, University of California, Los Angeles Library, and the Public Knowledge Project headed by John Willinsky at Stanford University.\n", "Section::::Book Editorial Program.\n\nDuring the 2007 fiscal year, the Press considered approximately 1,300 manuscripts and proposals, of which 60 were accepted for publication by the Editorial Board. As of 30 June 2007, 122 books were in press. Each book undergoes rigorous review, including preliminary evaluation by an in-house editor. Manuscripts that show promise are then evaluated by two external readers who are specialists in the subject matter. Those that receive two positive peer reviews are presented to the Press's academic editorial board, which makes the final determination about whether to publish.\n", "All Montana State University graduate students completing a thesis or dissertation are required to submit an electronic version of the work. The Graduate School works in conjunction with Montana State University Library to archive these documents. Each electronic version is entered into ScholarWorks, an open access repository of intellectual work at Montana State University. In 2015, MSU Library digitized over 5,000 theses and dissertations making the research of virtually every Montana State University graduate student since 1902 available online to the public. The collection now includes over 7,500 items.\n\nSection::::Distinguished Faculty.\n", "The process of peer review is organized by the journal editor and is complete when the content of the article, together with any associated images or figures, is accepted for publication. The peer review process is increasingly managed online, through the use of proprietary systems, commercial software packages, or open source and free software. A manuscript undergoes one or more rounds of review; after each round, the author(s) of the article modify their submission in line with the reviewers' comments; this process is repeated until the editor is satisfied and the work is accepted.\n", "Once all the paperwork is in order, copies of the thesis may be made available in one or more university libraries. Specialist abstracting services exist to publicize the content of these beyond the institutions in which they are produced. Many institutions now insist on submission of digitized as well as printed copies of theses; the digitized versions of successful theses are often made available online.\n\nSection::::See also.\n\nBULLET::::- Compilation thesis\n\nBULLET::::- Comprehensive examination\n\nBULLET::::- Dissertation Abstracts\n\nBULLET::::- Grey literature\n\nBULLET::::- Postgraduate education\n\nSection::::External links.\n\nBULLET::::- on Wikibooks\n\nBULLET::::- Networked Digital Library of Theses and Dissertations (NDLTD)\n", "Section::::\"Humanities\" magazine.\n\nStarting in 1969, the NEH published a periodical called \"Humanities\"; that original incarnation was discontinued in 1978. In 1980, \"Humanities\" magazine was relaunched. It is published six times per year, with one cover article each year dedicated to profiling that year's Jefferson Lecturer. Most of its articles have some connection to NEH activities. The magazine's editor since 2007 has been journalist and author David Skinner. 
From 1990 until her death in 2007, \"Humanities\" was edited by Mary Lou Beatty (who had previously been a high-ranking editor at the \"Washington Post\").\n\nSection::::See also.\n", "BULLET::::- Rosika Schwimmer Papers, Manuscripts and Archives Division, The New York Public Library, New York, NY\n\nBULLET::::- Schwimmer Family Papers, Manuscripts and Archives Division, The New York Public Library, New York, NY\n\nBULLET::::- Digital images of Rosika Schwimmer from the Schwimmer-Lloyd Collection, Manuscripts and Archives Division, The New York Public Library, New York, NY\n\nBULLET::::- Rosika Schwimmer Papers, Hoover Institution Archives, Stanford, CA\n\nBULLET::::- Schwimmer-Lloyd Collection, Sophia Smith Collection, Smith College, Northampton, MA\n\nBULLET::::- Rosika Schwimmer Papers, Swarthmore College Peace Collection, Swarthmore, PA\n", "BULLET::::- \"Project Bamboo\", a partnership of ten research universities building shared infrastructure for humanities research. Within Bamboo, MITH is leading Corpora Space. We are designing research environments where scholars may discover, analyze and curate digital texts across the 450 years of print culture in English from 1473 until 1923, along with the texts from the Classical world upon which that print culture is based.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-19116
Why do people say "human race" when it is clearly a species?
Because the term "race" has existed for hundreds of years and has had various meanings, since long before the modern science of taxonomy came about and properly defined species and races.
[ "Human taxonomy\n\nHuman taxonomy is the classification of the human species (systematic name \"Homo sapiens\", Latin: \"knowing man\") within zoological taxonomy. The systematic genus, \"Homo\", is designed to include both anatomically modern humans and extinct varieties of archaic humans. Current humans have been designated as subspecies \"Homo sapiens sapiens\", differentiated from the direct ancestor, \"Homo sapiens idaltu\".\n", "Human taxonomy on one hand involves the placement of humans within the Taxonomy of the hominids (great apes), and on the other the division of archaic and modern humans into species and, if applicable, subspecies. Modern zoological taxonomy was developed by Carl Linnaeus during the 1730s to 1750s. He named the human species as \"Homo sapiens\" in 1758, as the only member species of the genus \"Homo\", divided into several subspecies corresponding to the great races. The Latin noun \"homō\" (genitive \"hominis\") means \"human being\". The systematic name \"Hominidae\" for the family of the great apes was introduced by John Edward Gray (1825). Gray also supplied \"Hominini\" as the name of the tribe including both chimpanzees (genus \"Pan\") and humans (genus \"Homo\").\n", "Prior to the current scientific classification of humans, philosophers and scientists have made various attempts to classify humans. They offered definitions of the human being and schemes for classifying types of humans. Biologists once classified races as subspecies, but today anthropologists reject the concept of race and view humanity as an interrelated genetic continuum. Taxonomy of the hominins continues to evolve.\n\nSection::::History.\n", "Human Race\n\nThe Human Race may refer to:\n\nBULLET::::- Human species\n\nBULLET::::- Race (human classification), a classification system used to categorize humans into large and distinct populations\n\nBULLET::::- Human Race Theatre Company of Dayton Ohio\n\nBULLET::::- \"The Human Race\" (film)\n\nSection::::Music.\n\nBULLET::::- \"Human Race\" (song), a song by Three Days Grace from their 2015 album \"Human\"\n\nBULLET::::- \"Human Race\" (song by Margaret Urlich), 1992.\n\nBULLET::::- \"Human Race\", a 1970 song by the Everly Brothers\n\nBULLET::::- \"Human Race\", a 1979 song by Neil Innes\n\nBULLET::::- \"Human Race\", a song by Red Rider from their 1983 album \"Neruda\"\n", "The subspecies name \"H. sapiens sapiens\" is sometimes used informally instead of \"modern humans\" or \"anatomically modern humans\". It has no formal authority associated with it. By the early 2000s, it had become common to use \"H. s. sapiens\" for the ancestral population of all contemporary humans, and as such it is equivalent to the binomial \"H. sapiens\" in the more restrictive sense (considering \"H. neanderthalensis\" a separate species).\n\nSection::::Age and speciation process.\n\nSection::::Age and speciation process.:Derivation from \"H. erectus\".\n", "The species binomial \"\"Homo sapiens\"\" was coined by Carl Linnaeus in his 18th-century work \"Systema Naturae\". The generic name \"\"Homo\"\" is a learned 18th-century derivation from Latin ' \"man,\" ultimately \"earthly being\" (Old Latin ' a cognate to Old English ' \"man\", from PIE ', meaning \"earth\" or \"ground\"). The species-name \"\"sapiens\"\" means \"wise\" or \"sapient\". 
Note that the Latin word \"homo\" refers to humans of either gender, and that \"\"sapiens\"\" is the singular form (while there is no such word as \"\"sapien\"\").\n\nSection::::History.\n\nSection::::History.:Evolution and range.\n", "A complete binomial name is always treated grammatically as if it were a phrase in the Latin language (hence the common use of the term \"Latin name\" for a binomial name). However, the two parts of a binomial name can each be derived from a number of sources, of which Latin is only one. These include:\n\nBULLET::::- Latin, either classical or medieval. Thus, both parts of the binomial name \"Homo sapiens\" are Latin words, meaning \"wise\" (\"sapiens\") \"human/man\" (\"Homo\").\n", "Section::::Subspecies.\n\nSection::::Subspecies.:\"Homo sapiens\" subspecies.\n\nThe recognition or non-recognition of subspecies of \"Homo sapiens\" has a complicated history. The rank of subspecies in zoology is introduced for convenience, and not by objective criteria, based on pragmatic consideration of factors such as geographic isolation and sexual selection. The informal taxonomic rank of race is variously considered equivalent or subordinate to the rank of subspecies, and the division of anatomically modern humans (\"H. sapiens\") into subspecies is closely tied to the recognition of major racial groupings based on human genetic variation.\n", "The 1951 revised statement stated that \"Homo sapiens\" is one species. \"The concept of race is unanimously regarded by anthropologists as a classificatory device providing a zoological frame within which the various groups of mankind may be arranged and by means of which studies of evolutionary processes can be facilitated. In its anthropological sense, the word ‘race’ should be reserved for groups of mankind possessing well-developed and primarily heritable physical differences from other groups.\" These differences have been caused in part by partial isolation preventing intermingling, geography an important explanation for the major races, often cultural for the minor races. National, religious, geographical, linguistic and cultural groups do not necessarily coincide with racial groups.\n", "The discovery of the first extinct archaic human species from the fossil record dates to the mid 19th century, \"Homo neanderthalensis\", classified in 1864. Since then, a number of other archaic species have been named, but there is no universal consensus as to their exact number. After the discovery of \"H. neanderthalensis\", which even if \"archaic\" is recognizable as clearly human, late 19th to early 20th century anthropology for a time was occupied with finding the supposedly \"missing link\" between \"Homo\" and \"Pan\". The \"Piltdown Man\" hoax of 1912 was the fraudulent presentation of such a transitional species. Since the mid-20th century, knowledge of the development of \"Hominini\" has become much more detailed, and taxonomical terminology has been altered a number of times to reflect this.\n", "The first version stated that \"National, religious, geographic, linguistic and cultural groups do not necessarily coincide with racial groups: and the cultural traits of such groups have no demonstrated genetic connection with racial traits. 
Because serious errors of this kind are habitually committed when the term ‘race’ is used in popular parlance, it would be better when speaking of human races to drop the term ‘race’ altogether and speak of ethnic groups.\" The revised version instead stated that the experts \"agreed to reserve race as the word to be used for anthropological classification of groups showing definite combinations of physical (including physiological) traits in characteristic proportions.\"\n", "Since the introduction of systematic names in the 18th century, knowledge of human evolution has increased drastically, and a number of intermediate taxa have been proposed in the 20th to early 21st century. The most widely accepted taxonomy groups takes the genus \"Homo\" as originating between two and three million years ago, divided into at least two species, archaic \"Homo erectus\" and modern \"Homo sapiens\", with about a dozen further suggestions for species without universal recognition.\n", "Section::::Etymology and definition.\n\nIn common usage, the word \"human\" generally refers to the only extant species of the genus \"Homo\"—anatomically and behaviorally modern \"Homo sapiens\".\n", "The term \"race\" has also historically been used in relation to domesticated animals, as another term for \"breed\"; this usage survives in combining form, in the term landrace, also applied to domesticated plants. The cognate words for \"race\" in many languages (; ; ) may convey meanings the English word does not, and are frequently used in the sense of 'domestic breed'.\n\nSection::::Distinguishing from other taxonomic ranks.\n", "A late example of an academic authority proposing that the human racial groups should be considered taxonomical subspecies is John Baker (1974). The trinomial nomenclature \"Homo sapiens sapiens\" became popular for \"modern humans\" in the context of Neanderthals being considered a subspecies of \"H. sapiens\" in the second half of the 20th century. Derived from the convention, widespread in the 1980s, of considering two subspecies, \"H. s. neanderthalensis\" and \"H. s. sapiens\", the explicit claim that \"\"H. s. sapiens\" is the only extant human subspecies\" appears in the early 1990s. This is only true if the nomenclature derived from Linnaeus is rejected. Based on Linnaeus (1758), there are at least six subspecies, with \"H. s. sapiens\" catching those specimens not included in any other.\n", "and so, the expression fell into disuse, and the Shaltanacs found they had little choice but to become exceptionally happy and content with their lot, which surprised everyone else in the galaxy, who had not realised that the best way not to be unhappy is not to have a word for it.\n\nSection::::Races.:Silastic Armourfiends of Striterax.\n", "In the early 20th century, many anthropologists taught that race was an entirely biologically phenomenon and that this was core to a person's behavior and identity, a position commonly called racial essentialism. This, coupled with a belief that linguistic, cultural, and social groups fundamentally existed along racial lines, formed the basis of what is now called scientific racism. After the Nazi eugenics program, along with the rise of anti-colonial movements, racial essentialism lost widespread popularity. New studies of culture and the fledgling field of population genetics undermined the scientific standing of racial essentialism, leading race anthropologists to revise their conclusions about the sources of phenotypic variation. 
A significant number of modern anthropologists and biologists in the West came to view race as an invalid genetic or biological designation.\n", "There is no consensus on the taxonomic delineation between human species, human subspecies and the human races. On the one hand, there is the proposal that \"H. sapiens idaltu\" (2003) is not distinctive enough to warrant classification as a subspecies. On the other, there is the position that genetic variation in the extant human population is large enough to justify its division into several subspecies.\n", "I'm not sure how that will play out. The geneticists, if you read their papers, have long been using code words. They sort of dropped the term \"race\" about 1980 or earlier, and instead you see code words like \"population\" or \"population structure.\" Now that they're able to define race in genetic terms they tend to use other words, like \"continental groups\" or \"continent of origin,\" which does, indeed, correspond to the everyday conception of race. When I'm writing I prefer to use the word race because that's the word that everyone understands. It's a word with baggage, but it's not necessarily a malign word.\n", "Since the second half of the 20th century, the association of race with the ideologies and theories of scientific racism has led to the use of the word \"race\" itself becoming problematic. Although still used in general contexts, \"race\" has often been replaced by less ambiguous and loaded terms: \"populations\", \"people(s)\", \"ethnic groups\", or \"communities\", depending on context.\n\nSection::::Defining race.\n", "The first to challenge the concept of race on empirical grounds were the anthropologists Franz Boas, who provided evidence of phenotypic plasticity due to environmental factors, and Ashley Montagu, who relied on evidence from genetics. E. O. Wilson then challenged the concept from the perspective of general animal systematics, and further rejected the claim that \"races\" were equivalent to \"subspecies\".\n\nHuman genetic variation is predominantly within races, continuous, and complex in structure, which is inconsistent with the concept of genetic human races. According to Jonathan Marks,\n\nSection::::Modern scholarship.:Biological classification.:Subspecies.\n", "The term \"race\" in biology is used with caution because it can be ambiguous. Generally, when it is used it is effectively a synonym of \"subspecies\". (For animals, the only taxonomic unit below the species level is usually the subspecies; there are narrower infraspecific ranks in botany, and \"race\" does not correspond directly with any of them.) Traditionally, subspecies are seen as geographically isolated and genetically differentiated populations. Studies of human genetic variation show that human populations are not geographically isolated, and their genetic differences are far smaller than those among comparable subspecies.\n", "Recent work using DNA cluster analysis to determine race background has been used by some criminal investigators to narrow their search for the identity of both suspects and victims. Proponents of DNA profiling in criminal investigations cite cases where leads based on DNA analysis proved useful, but the practice remains controversial among medical ethicists, defense lawyers and some in law enforcement.\n\nThe Constitution of Australia contains a line about 'people of any race for whom it is deemed necessary to make special laws', despite there being no agreed definition of race described in the document. 
\n", "However, a line of research conducted by Cartmill (1998) seemed to limit the scope of Lieberman's finding that there was \"a significant degree of change in the status of the race concept\". Goran Štrkalj has argued that this may be because Lieberman and collaborators had looked at all the members of the American Anthropological Association irrespective of their field of research interest, while Cartmill had looked specifically at biological anthropologists interested in human variation.\n", "The Thranx are an insectoid species featured in Alan Dean Foster's Humanx Commonwealth book series. While at war with the reptilian AAnn, the Thranx discovered a human space vessel, and captured the occupants. Through the efforts of rogue thranx, the humans were liberated and returned to Terra, and the alliance between the two species was initiated, eventually resulting in the formation of the Humanx Commonwealth (\"Humanx\" takes the first three letters of \"hum\"an and the last three letters of thr\"anx\" to represent the union between the two races). Unity between the humans and the thranx provide equal benefits, with the two races forming a near symbiotic existence with one another. \n" ]
[]
[]
[ "normal" ]
[ "People should refer to humans as a species." ]
[ "normal", "false presupposition" ]
[ "Humans have been referred to as a race for hundreds of years." ]
2018-15853
How does a random number generator randomly generate numbers?
So, I'm not getting into logic gates for an eli5. But the basics are to apply some math-based formula to a seed. For example, a super simple one would be to ask the person for a number, then calculate the nth digit of pi, with n being the number given, squared. Of course, that's not really random. Every time I gave 1 I would get 3. To make it more random, you need a fairly complex algorithm. There are several that can be given a seed and produce an impossible-to-guess answer. But then how random your output is depends on how random your seed is, and asking people both gets tiring and isn't very random. But it turns out there are some really random ways to go. Measuring the exact number of bytes currently in use in the RAM is a decent start. Or taking the frequency that the microphone is picking up works as well. Almost anything can be a seed, as long as it produces a large enough set of numbers to make the RNG seem random (checking if the camera is on or off might be random, but it's only ever gonna produce 2 inputs, so your RNG will only ever produce 2 answers). The only truly random seed we use that I am aware of ATM is radioactive decay, but that's expensive, and most people don't want radioactive materials in their computer.
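To make the "formula applied to a seed" idea concrete, here is a minimal sketch of a linear congruential generator, one of the simplest PRNGs. The multiplier and increment are the widely published Numerical Recipes constants; the time-based seed at the end is just one example of a seed source and is not suitable for anything security-sensitive.

```python
import time

class TinyLCG:
    """A toy 32-bit linear congruential generator (NOT cryptographic)."""
    def __init__(self, seed):
        self.state = seed & 0xFFFFFFFF

    def next(self):
        # x_{n+1} = (a * x_n + c) mod 2^32, Numerical Recipes constants
        self.state = (1664525 * self.state + 1013904223) & 0xFFFFFFFF
        return self.state

# Same seed -> same "random" sequence, which is why seed choice matters.
a, b = TinyLCG(42), TinyLCG(42)
print([a.next() % 100 for _ in range(5)])  # identical output...
print([b.next() % 100 for _ in range(5)])  # ...to the line above

# A slightly less predictable seed: the current time in nanoseconds.
c = TinyLCG(time.time_ns())
print([c.next() % 100 for _ in range(5)])
```

Given the same seed, the generator is completely deterministic, so the apparent randomness of the output is only ever as good as the unpredictability of the seed.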
[ "The second method uses computational algorithms that can produce long sequences of apparently random results, which are in fact completely determined by a shorter initial value, known as a seed value or key. As a result, the entire seemingly random sequence can be reproduced if the seed value is known. This type of random number generator is often called a pseudorandom number generator. This type of generator typically does not rely on sources of naturally occurring entropy, though it may be periodically seeded by natural sources. This generator type is non-blocking, so they are not rate-limited by an external event, making large bulk reads a possibility.\n", "Many \"random number generators\" in use today are defined algorithms, and so are actually \"pseudo-random\" number generators. The sequences they produce are called pseudo-random sequences. These generators do not always generate sequences which are sufficiently random, but instead can produce sequences which contain patterns. For example, the infamous RANDU routine fails many randomness tests dramatically, including the spectral test.\n", "Section::::Generation methods.:By humans.\n\nRandom number generation may also be performed by humans, in the form of collecting various inputs from end users and using them as a randomization source. However, most studies find that human subjects have some degree of non-randomness when attempting to produce a random sequence of e.g. digits or letters. They may alternate too much between choices when compared to a good random generator; thus, this approach is not widely used.\n\nSection::::Post-processing and statistical checks.\n", "Most computer generated random numbers use pseudorandom number generators (PRNGs) which are algorithms that can automatically create long runs of numbers with good random properties but eventually the sequence repeats (or the memory usage grows without bound). These random numbers are fine in many situations but are not as random as numbers generated from electromagnetic atmospheric noise used as a source of entropy. The series of values generated by such algorithms is generally determined by a fixed number called a seed. One of the most common PRNG is the linear congruential generator, which uses the recurrence\n", "Some computations making use of a random number generator can be summarized as the computation of a total or average value, such as the computation of integrals by the Monte Carlo method. For such problems, it may be possible to find a more accurate solution by the use of so-called low-discrepancy sequences, also called quasirandom numbers. Such sequences have a definite pattern that fills in gaps evenly, qualitatively speaking; a truly random sequence may, and usually does, leave larger gaps.\n\nSection::::Activities and demonstrations.\n\nThe following sites make available Random Number samples:\n", "Most computer programming languages include functions or library routines that provide random number generators. They are often designed to provide a random byte or word, or a floating point number uniformly distributed between 0 and 1.\n", "Section::::Specific tests for randomness.\n\nThere have been a fairly small number of different types of (pseudo-)random number generators used in practice. 
They can be found in the list of random number generators, and have included:\n\nBULLET::::- Linear congruential generator and Linear-feedback shift register\n\nBULLET::::- Generalized Fibonacci generator\n\nBULLET::::- Cryptographic generators\n\nBULLET::::- Quadratic congruential generator\n\nBULLET::::- Cellular automaton generators\n\nBULLET::::- Pseudorandom binary sequence\n", "Some systems take a hybrid approach, providing randomness harvested from natural sources when available, and falling back to periodically re-seeded software-based cryptographically secure pseudorandom number generators (CSPRNGs). The fallback occurs when the desired read rate of randomness exceeds the ability of the natural harvesting approach to keep up with the demand. This approach avoids the rate-limited blocking behavior of random number generators based on slower and purely environmental methods.\n", "Since OpenBSD 5.1 (May 1, 2012) and use an algorithm based on RC4 but renamed, because of intellectual property reasons, ARC4. While random number generation here uses system entropy gathered in several ways, the ARC4 algorithm provides a fail-safe, ensuring that a rapid and high quality pseudo-random number stream is provided even when the pool is in a low entropy state. The system automatically uses hardware random number generators (such as those provided on some Intel PCI hubs) if they are available, through the OpenBSD Cryptographic Framework.\n", "The generation of pseudo-random numbers is an important and common task in computer programming. While cryptography and certain numerical algorithms require a very high degree of \"apparent\" randomness, many other operations only need a modest amount of unpredictability. Some simple examples might be presenting a user with a \"Random Quote of the Day\", or determining which way a computer-controlled adversary might move in a computer game. Weaker forms of \"randomness\" are used in hash algorithms and in creating amortized searching and sorting algorithms.\n", "While this example is limited to simple types (for which a simple random generator can be used), tools targeting object-oriented languages typically explore the program to test and find generators (constructors or methods returning objects of that type) and call them using random inputs (either themselves generated the same way or generated using a pseudo-random generator if possible). Such approaches then maintain a pool of randomly generated objects and use a probability for either reusing a generated object or creating a new one.\n\nSection::::On randomness.\n\nAccording to the seminal paper on random testing by D. Hamlet\n", "Although historically \"manual\" randomization techniques (such as shuffling cards, drawing pieces of paper from a bag, spinning a roulette wheel) were common, nowadays automated techniques are mostly used. As both selecting random samples and random permutations can be reduced to simply selecting random numbers, random number generation methods are now most commonly used, both hardware random number generators and pseudo-random number generators.\n\nSection::::Techniques.:Optimization.\n", "Section::::Science.:Simulation.\n\nIn many scientific and engineering fields, computer simulations of real phenomena are commonly used. 
When the real phenomena are affected by unpredictable processes, such as radio noise or day-to-day weather, these processes can be simulated using random or pseudo-random numbers.\n\nAutomatic random number generators were first constructed to carry out computer simulation of physical phenomena, notably simulation of neutron transport in nuclear fission.\n", "Another common entropy source is the behavior of human users of the system. While people are not considered good randomness generators upon request, they generate random behavior quite well in the context of playing mixed strategy games. Some security-related computer software requires the user to make a lengthy series of mouse movements or keyboard inputs to create sufficient entropy needed to generate random keys or to initialize pseudorandom number generators.\n\nSection::::Generation methods.:Computational methods.\n", "X_{n+1} = (\"a\"X_{n} + \"b\") mod \"m\" to generate numbers, where \"a\", \"b\" and \"m\" are large integers, and X_{n+1} is the next in \"X\" as a series of pseudo-random numbers. The maximum number of numbers the formula can produce is one less than the modulus, \"m\" − 1. The recurrence relation can be extended to matrices to have much longer periods and better statistical properties\n\nTo avoid certain non-random properties of a single linear congruential generator, several such random number generators with slightly different values of the multiplier coefficient, \"a\", can be used in parallel, with a \"master\" random number generator that selects from among the several different generators (a minimal implementation sketch of the basic recurrence follows this passage list).\n", "Various imaginative ways of collecting this entropic information have been devised. One technique is to run a hash function against a frame of a video stream from an unpredictable source. Lavarand used this technique with images of a number of lava lamps. HotBits measures radioactive decay with Geiger–Muller tubes, while Random.org uses variations in the amplitude of atmospheric noise recorded with a normal radio.\n", "Subverted random numbers can be created using a cryptographically secure pseudorandom number generator with a seed value known to the attacker but concealed in the software. A relatively short, say 24 to 40 bit, portion of the seed can be truly random to prevent tell-tale repetitions, but not long enough to prevent the attacker from recovering, say, a \"randomly\" produced key.\n", "A hardware circuit to produce subverted bits can be built on an integrated circuit a few millimeters square. The most sophisticated hardware random number generator can be subverted by placing such a chip anywhere upstream of where the source of randomness is digitized, say in an output driver chip or even in the cable connecting the RNG to the computer. The subversion chip can include a clock to limit the start of operation to some time after the unit is first turned on and run through acceptance tests, or it can contain a radio receiver for on/off control. It could be installed by the manufacturer at the behest of their national signals intelligence service, or added later by anyone with physical access. CPU chips with built-in hardware random number generators can be replaced by compatible chips with a subverted RNG in the chips' firmware.\n", "In practice, the randomization is typically performed by a computer program. 
However, the randomization can also be generated from random number tables or by some physical mechanism (e.g., drawing the slips of paper).\n\nSection::::Three key numbers.\n\nAll completely randomized designs with one primary factor are defined by 3 numbers:\n\nBULLET::::- \"k\" = number of factors (= 1 for these designs)\n\nBULLET::::- \"L\" = number of levels\n\nBULLET::::- \"n\" = number of replications\n", "For cryptographic purposes, one normally assumes some upper limit on the work an adversary can do (usually this limit is astronomically sized). If one has a pseudo-random number generator whose output is \"sufficiently difficult\" to predict, one can generate true random numbers to use as the initial value (i.e., the seed), and then use the pseudo-random number generator to produce numbers for use in cryptographic applications. Such random number generators are called cryptographically secure pseudo-random number generators, and several have been implemented (for example, the /dev/urandom device available on most Unixes, the Yarrow and Fortuna designs, server, and AT&T Bell Laboratories \"truerand\"). As with all cryptographic software, there are subtle issues beyond those discussed here, so care is certainly indicated in actual practice. In any case, it is sometimes impossible to avoid the need for true (i.e., hardware-based) random number generators.\n", "In Ts'o's implementation, the generator keeps an estimate of the number of bits of noise in the entropy pool. From this entropy pool random numbers are created. When read, the /dev/random device will only return random bytes within the estimated number of bits of noise in the entropy pool. When the entropy pool is empty, reads from /dev/random will block until additional environmental noise is gathered. The intent is to serve as a cryptographically secure pseudorandom number generator, delivering output with entropy as large as possible. This is suggested by the authors for use in generating cryptographic keys for high-value or long-term protection.\n", "Random number generators can also be built from \"random\" macroscopic processes, using devices such as coin flipping, dice, roulette wheels and lottery machines. The presence of unpredictability in these phenomena can be justified by the theory of unstable dynamical systems and chaos theory. Even though macroscopic processes are deterministic under Newtonian mechanics, the output of a well-designed device like a roulette wheel cannot be predicted in practice, because it depends on the sensitive, micro-details of the initial conditions of each use.\n", "BULLET::::- Based on the initial motivating example: given an exponentially long string of 2^\"k\" characters, half a's and half b's, a random access machine requires at least 2^(\"k\"−1) lookups in the worst-case to find the index of an \"a\"; if it is permitted to make random choices, it can solve this problem in an expected polynomial number of lookups.\n", "Various applications of randomness have led to the development of several different methods for generating random data, of which some have existed since ancient times, among whose ranks are well-known \"classic\" examples, including the rolling of dice, coin flipping, the shuffling of playing cards, the use of yarrow stalks (for divination) in the I Ching, as well as countless other techniques. Because of the mechanical nature of these techniques, generating large numbers of sufficiently random numbers (important in statistics) required a lot of work and/or time. 
Thus, results would sometimes be collected and distributed as random number tables.\n", "BULLET::::4. The Quantum Random Bit Generator Service at the Ruđer Bošković Institute harvests randomness from the quantum process of photonic emission in semiconductors. They supply a variety of ways of fetching the data, including libraries for several programming languages.\n\nBULLET::::5. The Group at the Taiyuan University of technology generates random numbers sourced from chaotic laser. You can obtain a sample of random number by visiting their Physical Random Number Generator Service.\n\nSection::::Backdoors.\n" ]
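As flagged in the linear congruential passage above, here is a minimal sketch of that recurrence. The constants a, b and m below are the ones glibc's rand() is commonly described as using; treat them as illustrative assumptions rather than the only valid choice.

```python
def lcg(seed, a=1103515245, b=12345, m=2**31):
    """Linear congruential generator: X_{n+1} = (a * X_n + b) mod m.

    Deterministic given the seed; the sequence eventually repeats
    (its period can never exceed the modulus m).
    """
    x = seed
    while True:
        x = (a * x + b) % m
        yield x

gen = lcg(seed=1)
print([next(gen) for _ in range(5)])  # same seed -> same five numbers, always
```

Real systems avoid a single LCG's weaknesses by combining several generators or by using cryptographic designs, as the surrounding passages note.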
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-06699
Does removing the appendix make your immune system weaker?
No. Your appendix contains lymphoid tissue, but so does pretty much the entire length of the bowel, so it’s a loss of a very small segment of tissue overall. And if your immune system is weakened by the loss of some lymphoid tissue, you’ll just make more. There are several organs that are big producers of immune cells (spleen, thymus, bone marrow, intestines, lymph nodes), and if one or more of these are gone, the rest will just pick up the slack. Source: I’m a surgeon who also happens to be living without an appendix.
[ "The standard treatment for acute appendicitis is surgical removal of the appendix. This may be done by an open incision in the abdomen (laparotomy) or through a few smaller incisions with the help of cameras (laparoscopy). Surgery decreases the risk of side effects or death associated with rupture of the appendix. Antibiotics may be equally effective in certain cases of non-ruptured appendicitis. It is one of the most common and significant causes of severe abdominal pain that comes on quickly. In 2015 about 11.6 million cases of appendicitis occurred which resulted in about 50,100 deaths. In the United States, appendicitis is the most common cause of sudden abdominal pain requiring surgery. Each year in the United States, more than 300,000 people with appendicitis have their appendix surgically removed. Reginald Fitz is credited with being the first person to describe the condition in 1886.\n", "Intestinal bacterial populations entrenched in the appendix may support quick re-establishment of the flora of the large intestine after an illness, poisoning, or after an antibiotic treatment depletes or otherwise causes harmful changes to the bacterial population of the colon.\n", "Section::::Pregnancy.\n\nIf appendicitis develops in a pregnant woman, an appendectomy is usually performed and should not harm the fetus. The risk of premature delivery is about 10% The risk of fetal death in the perioperative period after an appendectomy for early acute appendicitis is 3 to 5%. The risk of fetal death is 20% in perforated appendicitis.\n\nSection::::Recovery.\n\nA study from 2010 found that the average hospital stay for people with appendicitis in the United States was 1.8 days. For people with a perforated (ruptured) appendix, the average length of stay was 5.2 days.\n", "Recycling of various nutrients takes place in colon. Examples include fermentation of carbohydrates, short chain fatty acids, and urea cycling.\n\nThe appendix contains a small amount of mucosa-associated lymphoid tissue which gives the appendix an undetermined role in immunity. However, the appendix is known to be important in fetal life as it contains endocrine cells that release biogenic amines and peptide hormones important for homeostasis during early growth and development. The appendix can be removed with no apparent damage or consequence to the patient.\n", "BULLET::::- Research has studied whether the vermiform appendix has any importance in, \"C. difficile\". The appendix is thought to have a function of housing good gut flora. In a study conducted in 2011, it was shown that when \"C. difficile\" bacteria were introduced into the gut, the appendix housed cells that increased the antibody response of the body. The B cells of the appendix migrate, mature, and increase the production of toxin A-specific IgA and IgG antibodies, leading to an increased probability of good gut flora surviving against the \"C. difficile\" bacteria.\n", "The surgical removal of the appendix is called an appendectomy. This removal is normally performed as an emergency procedure when the patient is suffering from acute appendicitis. In the absence of surgical facilities, intravenous antibiotics are used to delay or avoid the onset of sepsis. In some cases, the appendicitis resolves completely; more often, an inflammatory mass forms around the appendix. 
This is a relative contraindication to surgery.\n\nThe appendix is also used for the construction of an efferent urinary conduit, in an operation known as the Mitrofanoff procedure, in people with a neurogenic bladder.\n", "Appendicitis usually requires the removal of the inflamed appendix, in an appendectomy either by laparotomy or laparoscopy. Untreated, the appendix may rupture, leading to peritonitis, followed by shock, and, if still untreated, death.\n\nSection::::Clinical significance.:Surgery.\n", "William Parker, Randy Bollinger, and colleagues at Duke University proposed in 2007 that the appendix serves as a haven for useful bacteria when illness flushes the bacteria from the rest of the intestines. This proposition is based on an understanding that emerged by the early 2000s of how the immune system supports the growth of beneficial intestinal bacteria, in combination with many well-known features of the appendix, including its architecture, its location just below the normal one-way flow of food and germs in the large intestine, and its association with copious amounts of immune tissue. Research performed at Winthrop–University Hospital showed that individuals without an appendix were four times as likely to have a recurrence of \"Clostridium difficile colitis\". The appendix, therefore, may act as a \"safe house\" for beneficial bacteria. This reservoir of bacteria could then serve to repopulate the gut flora in the digestive system following a bout of dysentery or cholera or to boost it following a milder gastrointestinal illness.\n", "BULLET::::- Circumstances of the operation (elective vs emergency); In many cases emergency resection of colon with anastomosis needs to be done and this carries a higher complication rate since proper bowel preparation is not possible in emergency situations\n\nBULLET::::- Disease being treated; (i.e., no colectomy surgery can cure Crohn's disease, because the disease usually recurs at the site where the healthy sections of the large intestine were joined together. For example, if a patient with Crohn's disease has a transverse colectomy, their Crohn's will usually reappear at the resection site of the ascending and descending colons.)\n", "Appendicitis is caused by a blockage of the hollow portion of the appendix. This is most commonly due to a calcified \"stone\" made of feces. Inflamed lymphoid tissue from a viral infection, parasites, gallstone, or tumors may also cause the blockage. This blockage leads to increased pressures in the appendix, decreased blood flow to the tissues of the appendix, and bacterial growth inside the appendix causing inflammation. The combination of inflammation, reduced blood flow to the appendix and distention of the appendix causes tissue injury and tissue death. If this process is left untreated, the appendix may burst, releasing bacteria into the abdominal cavity, leading to increased complications.\n", "Most of the cases are diagnosed intraoperatively and a preoperative diagnosis is rarely made in such cases. Management should be individualized according to appendix's inflammation stage, presence of abdominal sepsis, and comorbidity factors. The decision should be based on factors such as the patient's age, the size and anatomy of the appendix, and in case of appendicitis, standard appendectomy and herniorrhaphy without a mesh should be the standard of care.\n", "The appendix is also used as a means to access the colon in children with paralysed bowels or major rectal sphincter problems. 
The appendix is brought out to the skin surface and the child/parent can then attach a catheter and easily wash out the colon (via normal defaecation) using an appropriate solution.\n\nSection::::History.\n\nDr. Heather F. Smith of Midwestern University and colleagues explained:\n", "Hospital lengths of stay typically range from a few hours to a few days but can be a few weeks if complications occur. The recovery process may vary depending on the severity of the condition: if the appendix had ruptured or not before surgery. Appendix surgery recovery is generally a lot faster if the appendix did not rupture. It is important that people undergoing surgery respect their doctor's advice and limit their physical activity so the tissues can heal faster. Recovery after an appendectomy may not require diet changes or a lifestyle change.\n", "Once the decision to perform an appendectomy has been made, the preparation procedure takes approximately one to two hours. Meanwhile, the surgeon will explain the surgery procedure and will present the risks that must be considered when performing an appendectomy. (With all surgeries there are risks that must be evaluated before performing the procedures.) The risks are different depending on the state of the appendix. If the appendix has not ruptured, the complication rate is only about 3% but if the appendix has ruptured, the complication rate rises to almost 59%. The most usual complications that can occur are pneumonia, hernia of the incision, thrombophlebitis, bleeding or adhesions. Recent evidence indicates that a delay in obtaining surgery after admission results in no measurable difference in outcomes to the person with appendicitis.\n", "Although it has been long accepted that the immune tissue surrounding the appendix and elsewhere in the gut—called gut-associated lymphoid tissue—carries out a number of important functions, explanations were lacking for the distinctive shape of the appendix and its apparent lack of specific importance and function as judged by an absence of side effects following its removal. Therefore, the notion that the appendix is only vestigial became widely held.\n", "Pseudomyxoma peritonei treatment includes cytoreductive surgery which includes the removal of visible tumor and affected essential organs within the abdomen and pelvis. The peritoneal cavity is infused with heated chemotherapy known as HIPEC in an attempt to eradicate residual disease. The surgery may or may not be preceded or followed with intravenous chemotherapy or HIPEC.\n\nSection::::Epidemiology.\n", "Fecal microbiota transplant is a relatively new treatment option for IBD which has attracted attention since 2010. Some preliminary studies have suggested benefits similar to those in Clostridium difficile infection but a review of use in IBD shows that FMT is safe, but of variable efficacy. A 2014 reviewed stated that more randomized controlled trials were needed.\n\nSection::::Treatment.:Alternative medicine.\n", "Section::::Diagnosis.:Imaging.:Computed tomography.\n\nWhere it is readily available, computed tomography (CT) has become frequently used, especially in people whose diagnosis is not obvious on history and physical examination. 
Concerns about radiation tend to limit use of CT in pregnant women and children, especially with the increasingly widespread usage of MRI.\n\nThe accurate diagnosis of appendicitis is multi-tiered, with the size of the appendix having the strongest positive predictive value, while indirect features can either increase or decrease sensitivity and specificity. A size of over 6 mm is both 95% sensitive and specific for appendicitis.\n", "Historically, there was a trend not only to dismiss the vermiform appendix as being uselessly vestigial, but an anatomical hazard, a liability to dangerous inflammation. As late as the mid-20th century, many reputable authorities conceded it no beneficial function. This was a view supported, or perhaps inspired, by Darwin himself in the 1874 edition of his book \"The Descent of Man, and Selection in Relation to Sex\". The organ's patent liability to appendicitis and its poorly understood role left the appendix open to blame for a number of possibly unrelated conditions. For example, in 1916, a surgeon claimed that removal of the appendix had cured several cases of trifacial neuralgia and other nerve pain about the head and face, even though he stated that the evidence for appendicitis in those patients was inconclusive. The discovery of hormones and hormonal principles, notably by Bayliss and Starling, argued against these views, but in the early twentieth century, there remained a great deal of fundamental research to be done on the functions of large parts of the digestive tract. In 1916, an author found it necessary to argue against the idea that the colon had no important function and that \"the ultimate disappearance of the appendix is a coordinate action and not necessarily associated with such frequent inflammations as we are witnessing in the human\".\n", "Section::::Functions.:Immune and lymphatic system.\n", "The surgical procedure for the removal of the appendix is called an appendectomy. Appendectomy can be performed through open or laparoscopic surgery. Laparoscopic appendectomy has several advantages over open appendectomy as an intervention for acute appendicitis.\n\nSection::::Management.:Surgery.:Open appendectomy.\n\nFor over a century, laparotomy (open appendectomy) was the standard treatment for acute appendicitis. This procedure consists of the removal of the infected appendix through a single large incision in the lower right area of the abdomen. The incision in a laparotomy is usually long.\n", "Such a function may be useful in a culture lacking modern sanitation and healthcare practice, where diarrhea may be prevalent. Current epidemiological data on the cause of death in developed countries collected by the World Health Organization in 2001 show that acute diarrhea is now the fourth leading cause of disease-related death in developing countries (data summarized by The Bill and Melinda Gates Foundation). Two of the other leading causes of death are expected to have exerted limited or no selection pressure.\n\nSection::::See also.\n\nBULLET::::- Meckel's diverticulum\n\nSection::::Further reading.\n\nBULLET::::- Appendix May Actually Have a Purpose—2007 WebMD article\n", "Surgery is usually indicated if intestinal perforation occurs. One study found a 30-day mortality rate of 9% (8/88), and surgical site infections at 67% (59/88), with the disease burden borne predominantly by low-resource countries.\n\nFor surgical treatment, most surgeons prefer simple closure of the perforation with drainage of the peritoneum. 
Small-bowel resection is indicated for patients with multiple perforations. If antibiotic treatment fails to eradicate the hepatobiliary carriage, the gallbladder should be resected. Cholecystectomy is not always successful in eradicating the carrier state because of persisting hepatic infection.\n\nSection::::Treatment.:Resistance.\n", "Acute appendicitis seems to be the end result of a primary obstruction of the appendix. Once this obstruction occurs, the appendix becomes filled with mucus and swells. This continued production of mucus leads to increased pressures within the lumen and the walls of the appendix. The increased pressure results in thrombosis and occlusion of the small vessels, and stasis of lymphatic flow. At this point spontaneous recovery rarely occurs. As the occlusion of blood vessels progresses, the appendix becomes ischemic and then necrotic. As bacteria begin to leak out through the dying walls, pus forms within and around the appendix (suppuration). The end result is appendiceal rupture (a 'burst appendix') causing peritonitis, which may lead to sepsis and eventually death. These events are responsible for the slowly evolving abdominal pain and other commonly associated symptoms.\n", "Recently ... improved understanding of gut immunity has merged with current thinking in biological and medical science, pointing to an apparent function of the mammalian cecal appendix as a safe-house for symbiotic gut microbes, preserving the flora during times of gastrointestinal infection in societies without modern medicine. This function is potentially a selective force for the evolution and maintenance of the appendix.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-11254
How is paper/cardboard recycled and why are most recycled paper products usually brown and more coarse than virgin paper?
Turn old paper/cardboard into mush. Press the mush into shape. Dry it. Shape it again and it's recycled paper. Some recycled paper is brown, and some is like virgin paper: add bleach and color and it can be made to look like virgin paper.
[ "There are three categories of paper that can be used as feedstocks for making recycled paper: mill broke, pre-consumer waste, and post-consumer waste. Mill broke is paper trimmings and other paper scrap from the manufacture of paper, and is recycled internally in a paper mill. Pre-consumer waste is material that was discarded before it was ready for consumer use. Post-consumer waste is material discarded after consumer use such as old magazines, old telephone directories, and residential mixed paper.\n", "Section::::Paper and newsprint.\n\nPaper and newsprint can be recycled by reducing it to pulp and combining it with pulp from newly harvested wood. As the recycling process causes the paper fibres to break down, each time paper is recycled its quality decreases. This means that either a higher percentage of new fibres must be added, or the paper down-cycled into lower quality products. Any writing or colouration of the paper must first be removed by deinking, which also removes fillers, clays, and fibre fragments.\n", "Section::::Papermaking.:De-inked pulp.\n\nPaper recycling processes can use either chemically or mechanically produced pulp; by mixing it with water and applying mechanical action the hydrogen bonds in the paper can be broken and fibres separated again. Most recycled paper contains a proportion of virgin fibre for the sake of quality; generally speaking, de-inked pulp is of the same quality or lower than the collected paper it was made from.\n\nThere are three main classifications of recycled fibre:.\n", "Recycling as an alternative to the use of landfills and recycled paper is one of the less complicated procedures in the recycling industry. Although there is not a landfill crisis at this point in time, it is commonly believed that measures should to be taken in order to lower the negative impacts of landfills, for many hazardous elements are produced and spread because of this enclosure of trash. Most recycled paper is priced higher than freshly made paper, and this tends to plays a deciding factor for the consumer. Because most of the recycled pulp is purchased in an open market, virgin paper is produced cheaper with the pulp that was made by the specific paper mill. Virgin paper contains no recycled content and is made directly from the pulp of trees or cotton. Materials recovered after the initial paper manufacturing process are considered recycled paper. Because that original standard was so vague, some “recycled papers” contained only mill scraps that would have been included in virgin paper anyway. Standards have recently been set to prevent companies from making it seem like they were selling recycled paper. The collection and recycling industries have fixated on the scraps of paper that is thrown away by customers daily in order to increase the amount of recycled paper. Different paper mills are structured for different types of paper, and most “recovered office paper can be sent to a deinking mill”. A deinking mill serves as a step in the recycling paper process. This type of mill detaches the ink from the paper fibers, along with any other excess materials which are also removed from the remaining paper. In the deinking mill, after all of the unwanted coatings of paper are stripped, the refurbished paper is sent to the paper machine. The old scraps are now constructed into new paper at the paper machine. Many papers mills have recycled business papers by transforming the old business papers into beneficial letters and envelopes. 
The production process for recycled paper is more costly than the well-developed paper mills that create paper with the use of trees. This process in making recycled paper is also much more time-consuming. However, recycled paper has a multitude of benefits from an environmental perspective. “For all the state-of-the-art technology now incorporated into modern paper mills, the industry's underlying structure is still based upon a worldview that was transformative in the 19th-century but is out-of-date as the 21st century approaches”.\n", "BULLET::::- In 2006-2007, Australia 5.5 million tons of paper and cardboard was used with 2.5 million tons of this recycled.\n\nBULLET::::- Newspaper manufactured in Australia has 40% recycled content.\n\nSection::::By region.\n\nSection::::By region.:European Union.\n", "BULLET::::- 115 billion sheets of paper are used annually for personal computers. The average web user prints 16 pages daily.\n\nBULLET::::- Most corrugated fiberboard boxes have over 25% recycled fibers. Some are 100% recycled fiber.\n\nBULLET::::- In 1997, 299,044 metric tons of paper was produced (including cardboard).\n\nBULLET::::- In the United States, the average consumption of paper per person in 1999 was approximately 354 kilograms. This would be the same consumption for 6 people in Asia or 30 people in Africa.\n", "There are three categories of paper that can be used as feedstocks for making \"recycled paper\": mill broke, pre-consumer waste, and post-consumer waste. \"Mill broke\" is paper trimmings and other paper scrap from the manufacture of paper, and is recycled in a paper mill. \"Pre-consumer waste\" is a material which left the paper mill but was discarded before it was ready for consumer use. \"Post-consumer\" waste is material discarded after consumer use, such as old corrugated containers (OCC), old magazines, and newspapers. Paper suitable for recycling is called \"scrap paper\", often used to produce moulded pulp packaging. The industrial process of removing printing ink from paper fibres of recycled paper to make deinked pulp is called deinking, an invention of the German jurist Justus Claproth.\n", "Paper recycling\n\nThe recycling of paper is the process by which waste paper is turned into new paper products. It has a number of important benefits besides saving trees from being cut down. It is less energy and water intensive than paper made from wood pulp. It saves waste paper from occupying landfill and producing methane as it breaks down. Around two thirds of all paper products in the US are now recovered and recycled, although it does not all become new paper. After repeated processing the fibers become too short for the production of new paper.\n", "All of the processes imparted an adequately high pH in studies conducted by the European Commission on Preservation and Access, the Library of Congress, and a team of scientists from the Centre de Recherches sur la Conservation des Documents Graphiques in the early and mid-nineties. BookKeeper produced a pH of 9-10. CSC Book Saver gives a pH of 8.78-10.5. Wei T'o gives 7.5 to 10.4, and Papersave gives a pH of 7.5-9. \n", "Bales of recycled paper (normally old corrugated containers) for unbleached (brown) packaging grades may be simply pulped, screened and cleaned. Recycling to make white papers is usually done in a deinking plant, which employs screening, cleaning, washing, bleaching and flotation. Deinked pulp is used in printing and writing papers and in tissue, napkins and paper towels. 
It is often blended with virgin pulp.\n", "Section::::Methods.\n\nThe process of waste paper recycling most often involves mixing used/old paper with water and chemicals to break it down. It is then chopped up and heated, which breaks it down further into strands of cellulose, a type of organic plant material; this resulting mixture is called pulp, or slurry. It is strained through screens, which remove any glue or plastic (especially from \n\nplastic-coated paper) that may still be in the mixture then cleaned, de-inked, bleached, and mixed with water. Then it can be made into new recycled paper.\n", "BULLET::::- Postconsumer waste – This is fibre from paper that has been used for its intended end use and includes office waste, magazine papers and newsprint. As the vast majority of this material has been printed – either digitally or by more conventional means such as lithography or rotogravure – it will either be recycled as printed paper or go through a de-inking process first.\n\nRecycled papers can be made from 100% recycled materials or blended with virgin pulp, although they are (generally) not as strong nor as bright as papers made from the latter.\n\nSection::::Papermaking.:Additives.\n", "Section::::Methods.:Further pulp processing.\n\nThe pulp is then processed through an apparatus which renders the pulp as a mesh of fibers. This fiber network is then pressed to remove all water contents, and the paper is subsequently dried to remove all traces of moisture.\n\nSection::::Methods.:Finishing.\n\nAfter the above processes have been completed, the resulting paper is coated with a minuscule amount of china clay or calcium carbonate to modify the surface, and the paper is then re-sized depending on its intended purpose.\n\nSection::::Methods.:Product recycling.\n", "Sometimes recyclers ask for the removal of the glossy paper inserts from newspapers because they are a different type of paper. Glossy inserts have a heavy clay coating that some paper mills cannot accept. Most of the clay is removed from the recycled pulp as sludge, which must be disposed of. If the coated paper is 20% by weight clay, then each ton of glossy paper produces more than 200 kg of sludge and less than 800 kg of fibre.\n\nThe price of recycled paper has varied greatly over the last 30 or so years.\n", "Almost all paper can be recycled today, but some types are harder to recycle than others. Papers coated with plastic or aluminium foil, and papers that are waxed, pasted, or gummed are usually not recycled because the process is too expensive. Gift-wrap paper also cannot be recycled due to its already poor quality.\n", "BULLET::::- Preconsumer waste – This is offcut and processing waste, such as guillotine trims and envelope blank waste; it is generated outside the paper mill and could potentially go to landfill, and is a genuine recycled fibre source; it includes de-inked preconsumer (recycled material that has been printed but did not reach its intended end use, such as waste from printers and unsold publications).\n", "Pulp used in the manufacture of paperboard can be bleached to decrease colour and increase purity. Virgin fibre pulp is naturally brown in colour, because of the presence of lignin. Recycled paperboard may contain traces of inks, bonding agents and other residue which colors it grey.\n\nAlthough bleaching is not necessary for all end-uses, it is vital for many graphical and packaging purposes. 
There are various methods of bleaching, which are used according to a number of factors for example, the degree of colour change required, chemicals chosen and method of treatment. There are three categories of bleaching methods:\n", "Wood pulp produced primarily by grinding wood is known as \"mechanical pulp\" and is used mainly for newsprint. These mechanical processes use fewer chemicals than either kraft or sulfite mills. The primary source of pollution from these mills is organic material such as resin acids released from the wood when it is processed. Mechanical wood pulp is \"brightened,\" as opposed to bleached, using less toxic chemicals than are needed for chemical pulps.\n\nSection::::Inks.\n", "BULLET::::- Mill broke or internal mill waste – This incorporates any substandard or grade-change paper made within the paper mill itself, which then goes back into the manufacturing system to be re-pulped back into paper. Such out-of-specification paper is not sold and is therefore often not classified as genuine reclaimed recycled fibre, however most paper mills have been reusing their own waste fibre for many years, long before recycling became popular.\n", "Newsprint is generally made by a mechanical milling process, without the chemical processes that are often used to remove lignin from the pulp. The lignin causes the paper to become brittle and yellow when exposed to air or sunlight. Traditionally, newsprint was made from fibers extracted from various softwood species of trees (most commonly, spruce, fir, balsam fir or pine). However, an increasing percentage of the world's newsprint is made with recycled fibers.\n\nSection::::Sustainability.\n", "The share of ink in a wastepaper stock is up to about 2% of the total weight.\n\nSection::::Rationale for recycling.\n\nIndustrialized paper making has an effect on the environment both upstream (where raw materials are acquired and processed) and downstream (waste-disposal impacts).\n", "There are upper limits on the percentage of the world's newsprint that can be manufactured from recycled fiber. For instance, some of the fiber that enters a recycled pulp mill is lost in pulping, due to inefficiencies inherent in the process. According to the web site of the U.K. chapter of Friends of the Earth, wood fiber can normally only be recycled up to five times due to damage to the fiber. Thus, unless the quantity of newsprint used each year worldwide declines in line with the lost fiber, a certain amount of new (virgin) fiber is required each year globally, even though individual newsprint mills may use 100% recycled fiber.\n", "BULLET::::- In 1690: The first paper mill to use recycled linen was established by the Rittenhouse family.\n\nBULLET::::- In 1896: The first major recycling center was started by the Benedetto family in New York City, where they collected rags, newspaper, and trash with a pushcart.\n\nBULLET::::- In 1993: The first year when more paper was recycled than was buried in landfills.\n", "BULLET::::- Recycled: Used paper is collected and sorted and usually mixed with virgin fibres in order to make new material. This is necessary as the recycled fibre often loses strength when reused; the added virgin fibres enhance strength. Mixed waste paper is not usually deinked (skipping the deinking stage) for paperboard manufacture and hence the pulp may contain traces of inks, adhesives, and other residues which together give it a grey colour. 
Products made of recycled board usually have a less predictable composition and poorer functional properties than virgin fibre-based boards. Health risks have been associated with using recycled material in direct food contact. Swiss studies have shown that recycled material can contain significant portions of mineral oil, which may migrate into packed foods. Mineral oil levels of up to 19.4 mg/kg were found in rice packed in recycled board.\n", "Today, over half of all paper used in the United States is collected and recycled. Paper products are still the largest component of municipal solid waste, making up more than 40% of the composition of landfills. In 2006, a record 53.4% of the paper used in the US (53.5 million tons) was recovered for recycling, up from a 1990 recovery rate of 33.5%.\n" ]
[ "Most recycled paper products are brown.", "recycled paper is brown instead of white like virgin paper." ]
[ "Some recycled paper products are brown and some are like virgin paper.", "recycled paper can be made white by bleaching it." ]
[ "false presupposition" ]
[ "Most recycled paper products are brown.", "recycled paper is brown instead of white like virgin paper." ]
[ "false presupposition", "false presupposition" ]
[ "Some recycled paper products are brown and some are like virgin paper.", "recycled paper can be made white by bleaching it." ]
2018-00123
Why do cars slightly move up when your foot isn’t touching the gas pedal?
Only cars with automatic transmissions do. A torque converter sits between the engine and the drive wheels. When the engine is turning but the wheels are not, the torque converter still tries to rotate the wheels, which is why the car creeps forward.
[ "The older mechanically designed accelerator pedals not only provided a spring return, but the mechanism inherently provided some friction. This friction introduced mechanical hysteresis into the pedal force versus pedal position transfer function. Put more simply, once the pedal was set at a specific position, the friction would help keep the pedal at this setting. This made it easier for the driver to maintain a pedal position. For example, if the driver's foot is slightly jostled\n\nby a bump in road, the accelerator pedal would tend to stay at its setting. While these old purely mechanical designs did have some\n", "The issue involves a friction device in the pedal designed to provide the proper “feel” by adding resistance and making the pedal steady and stable. This friction device includes a “shoe” that rubs against an adjoining surface during normal pedal operation. Due to the materials used, wear and environmental conditions, these surfaces may, over time, begin to stick and release instead of operating smoothly. In some cases, friction could increase to a point that the pedal is slow to return to the idle position or, in rare cases, the pedal sticks, leaving the throttle partially open.\n", "friction, the return spring force was always designed to overcome this friction with a considerable safety margin. The return spring force ensured that throttle returned to zero if the pedal force applied by the driver was reduced or removed.\n", "An unresponsive accelerator pedal may result from incursion: i.e., blockage by a foreign object, or any other mechanical interference with the pedal's operation — and may involve the accelerator or brake pedal. Throttle butterfly valves may become sluggish in operation or may stick in the closed position. When the driver pushes harder on the right foot, the valve may \"pop\" open to a point greater than that wanted by the driver, thus creating too much power and a lurch forward. Special solvent sprays are offered by all manufacturers and aftermarket jobbers to solve this very common problem.\n", "Accelerometers are often calibrated to measure g-force along one or more axes. If a stationary, single-axis accelerometer is oriented so that its measuring axis is horizontal, its output will be 0 g, and it will continue to be 0 g if mounted in an automobile traveling at a constant velocity on a level road. When the driver presses on the brake or gas pedal, the accelerometer will register positive or negative acceleration.\n", "where \"k\" is the slope of the trend and formula_36 is noise (white noise in the simplest case; more generally, noise following its own stationary autoregressive process). Here any transient noise will not alter the long-run tendency for formula_33 to be on the trend line, as also shown in the graph. This process is said to be trend stationary because deviations from the trend line are stationary.\n", "For example, when a car starts from a standstill (zero velocity, in an inertial frame of reference) and travels in a straight line at increasing speeds, it is \"accelerating\" in the direction of travel. If the car turns, an acceleration occurs toward the new direction. The forward acceleration of the car is called a \"linear\" (or \"tangential\") acceleration, the reaction to which passengers in the car experience as a force pushing them back into their seats. When changing direction, this is called \"radial\" (as orthogonal to tangential) acceleration, the reaction to which passengers experience as a sideways force. 
If the speed of the car decreases, this is an acceleration in the opposite direction of the velocity of the vehicle, sometimes called deceleration or Retrograde burning in spacecraft. Passengers experience the reaction to deceleration as a force pushing them forwards. Both acceleration and deceleration are treated the same, they are both changes in velocity. Each of these accelerations (tangential, radial, deceleration) is felt by passengers until their velocity (speed and direction) matches that of the uniformly moving car.\n", "V is a drive system, R is the elasticity in the system, and M is the load that is lying on the floor and is being pushed horizontally. When the drive system is started, the Spring R is loaded and its pushing force against load M increases until the static friction coefficient between load M and the floor is not able to hold the load anymore. The load starts sliding and the friction coefficient decreases from its static value to its dynamic value. At this moment the spring can give more power and accelerates M. During M's movement, the force of the spring decreases, until it is insufficient to overcome the dynamic friction. From this point, M decelerates to a stop. The drive system however continues, and the spring is loaded again etc.\n", "According to Toyota, the tactile response friction device in the affected Toyota electronic accelerator pedals sometimes creates too much friction. This excess friction either slows the pedal return or completely stops it. In the worst case, once a pedal is pushed to a specific setting, it stays at the setting even if the driver removes their foot from the pedal. Early reports, in\n\nMarch 2007, involved the Tundra pickup truck, which used nylon 4/6 in the friction lever.\n", "Section::::Differences from an automatic car.\n\nA Multimode manual car has a clutch instead of a torque converter. As such, gear changes are noticeable, and the car rolls backwards when on an up-sloping incline.\n\nBULLET::::- Creeping: A Multimode Manual Car creeps forward when the brake pedal is released and accelerator is not depressed, like an automatic car. This is achieved via partially engaging and slipping the clutch.\n", "When running with a force sensor, the motor is automatically a certain percentage of the service provided to the driver. In many models, this proportion may be set in several stages. There are also models where the support level can be set only at the dealer to the customer.\n\nSection::::Technical.:Components.:Motor control.:Rotary motion detection.\n\nIn the version with speed sensor (s) of the motor is automatically using a function to a set percentage of the self-applied force. Since the force required at the speed rises sharply, it can be calculated in some models without force sensor.\n\nSection::::Technical.:Components.:Motor control.:Sliding or traction.\n", "In proportional control, the power output is always proportional to the (actual versus target speed) error. If the car is at target speed and the speed increases slightly due to a falling gradient, the power is reduced slightly, or in proportion to the change in error, so that the car reduces speed gradually and reaches the new target point with very little, if any, \"overshoot\", which is much smoother control than on–off control. 
In practice, PID controllers are used for this and the large number of control processes that require more response control than proportional alone.\n\nSection::::References.\n", "If derivative action is over-applied, it can lead to oscillations too. An example would be a PV that increased rapidly towards SP, then halted early and seemed to \"shy away\" from the setpoint before rising towards it again.\n\nSection::::Linear control.:PID control.:Integral action.\n", "The principle is applied to spinning robots, where the driving wheels are normally on for the whole revolution, resulting in an increased rotational energy, which is stored for destructive effect, but, given perfect symmetry, no net translational acceleration. The drive works by modulating the power to the wheel or wheels that spin the robot. The net application of force in one direction results in acceleration in the plane - it can't really be characterised as \"forward\", \"backward and so forth, as the whole robot is spinning. However, in a standard configuration an accelerometer is used to determine the speed of rotation, and a light emitting diode is turned on once per revolution, to give a nominal forward direction indicator to the operator. The internal controls implement the commands received from the remote control to modulate the drive to the wheels, typically by turning it off for part of a revolution.\n", "With the series of recall campaigns, Audi made several modifications; the first adjusted the distance between the brake and accelerator pedal on automatic-transmission models. Later repairs, of 250,000 cars dating back to 1978, added a device requiring the driver to press the brake pedal before shifting out of park. As a byproduct of sudden unintended acceleration, vehicles now include gear stick patterns and brake interlock mechanisms to prevent inadvertent gear selection.\n", "High unsprung mass also exacerbates wheel control issues under hard acceleration or braking. If the vehicle does not have adequate wheel location in the vertical plane (such as a rear-wheel drive car with Hotchkiss drive, a live axle supported by simple leaf springs), vertical forces exerted by acceleration or hard braking combined with high unsprung mass can lead to severe wheel hop, compromising traction and steering control.\n", "BULLET::::- 2016: Tesla Model X although the vehicle logs showed that only the accelerator pedal had been pressed by the driver.\n\nSection::::Reported incidents.:Audi 100.\n\nDuring model years 1982–1987, Audi issued a series of recalls of Audi 100 (in some markets sold as \"Audi 5000\") models associated with reported incidents of \"sudden unintended acceleration\" linked to six deaths and 700 accidents. At the time, National Highway Traffic Safety Administration (NHTSA) was investigating 50 car models from 20 manufacturers for sudden surges of power.\n", "When a pycnometer is filled to a specific, but not necessarily accurately known volume, \"V\" and is placed upon a balance, it will exert a force\n", "When braking, the rider in motion is seeking to change the speed of the combined mass \"m\" of rider plus bike. This is a negative acceleration \"a\" in the line of travel. \"F\"=\"ma\", the acceleration \"a\" causes an inertial forward force \"F\" on mass \"m\".\n", "Bicycle pedals are left-threaded on the left-hand crank so that precession tightens the pedal rather than loosening it. 
This may seem counter-intuitive, but the torque exerted due to the precession is several orders of magnitude greater than that caused by a jammed pedal bearing.\n\nShimano SPD axle units, which can be unscrewed from the pedal body for servicing, have a left-hand thread where the axle unit screws into the right-hand pedal; the opposite case to the pedal-crank interface. Otherwise precession of the pedal body around the axle would tend to unscrew one from the other.\n\nSection::::Examples.:Bicycle bottom brackets.\n", "When braking, the inertial force \"ma\" in the line of travel, not being co-linear with \"f\", tends to rotate \"m\" about \"f\". This tendency to rotate, an overturning moment, is resisted by a moment from \"mg\". \n\nTaking moments about the front wheel contact point at an instance in time:\n\nBULLET::::- When there is no braking, mass \"m\" is typically above the bottom bracket, about 2/3 of the way back between the front and rear wheels, with \"Nr\" thus greater than \"Nf\".\n", "When the driver stops the vehicle on an incline where the nose of the car is sufficiently higher than the rear of the car, the system is engaged when the driver's foot is depressing the brake pedal, and then the clutch pedal is fully depressed. Once set, the driver must keep the clutch pedal fully depressed but may remove the foot from the brake pedal. To disengage the system and move the car forward, the driver selects first gear, gently depresses the fuel pedal, and slowly releases the clutch pedal which at a point in its travel releases the braking system, allowing the car to proceed.\n", "For example, consider riding in a car traveling at some arbitrary constant speed. In this situation, our sense of sight and sound provide the only cues (excluding engine vibration) that the car is moving; no other forces act on the passengers of the car except for gravity. Next, consider the same example of a car moving at constant speed except this time, all passengers of the car are blindfolded. If the driver were to step on the gas, the car would accelerate forward thus pressing each passenger back into their seat. In this situation, each passenger would perceive the increase in speed by sensing the additional pressure from the seat cushion.\n", "A change in speed out of this stable operating point is only possible with a new control intervention. This can be changing the load of the machine or the power of the drive which both changes the torque because it is a change in the characteristic curves. The drive-machine system then runs to a new operating point with a different speed and a different balance of torques.\n", "After this accident, Toyota conducted seven recalls related to unintended acceleration from September 2009 to March 2010. These recalls totaled approximately 10 million vehicles and mostly switched out all-weather mats and carpet covers that had the potential to cause pedal entrapment. At this point there was little evidence that there was ever any defect in the Electronic Throttle Control System (ETCS) that was installed in Toyota cars after 2002, despite requests to the NHTSA to investigate it, and Toyota announced that the root cause of sudden acceleration had been addressed. \n" ]
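The proportional-control passage quoted above (power output proportional to the target-versus-actual speed error) can be made concrete with a short sketch. Everything in the toy model below (the gain, the drag term, the time step) is an invented assumption for illustration, not a real controller design.

```python
def simulate_cruise(target=30.0, kp=0.5, drag=0.05, steps=200, dt=0.1):
    """Toy cruise control: proportional-only power against linear drag."""
    speed = 0.0
    for _ in range(steps):
        error = target - speed   # actual-versus-target speed error
        power = kp * error       # proportional control law
        speed += (power - drag * speed) * dt
    return speed

# Settles near 27.3, below the 30.0 target: pure proportional control
# leaves a steady-state offset, which is why PID adds an integral term.
print(f"{simulate_cruise():.1f}")
```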
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-04047
How long would it take to use 1 GB of data on a cell plan?
That depends on what you view on Reddit. If you load threads with a lot of images/videos that you view/watch, it's going to consume more data than just reading text. How often you navigate to other pages/threads affects data usage too (sitting on one page for 10 minutes is going to use less data than frequently going back and forth between pages).
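To put rough numbers on that, here's a minimal back-of-the-envelope sketch; the megabytes-per-hour figures are illustrative assumptions, not measurements:

```python
# Rough illustration of how long 1 GB lasts at different browsing styles.
# The MB-per-hour rates below are assumed round numbers, not measurements.
plan_mb = 1024  # 1 GB expressed in MB

assumed_rates_mb_per_hour = {
    "reading text-only threads": 10,
    "browsing image-heavy threads": 100,
    "watching video": 700,
}

for activity, rate in assumed_rates_mb_per_hour.items():
    print(f"{activity}: about {plan_mb / rate:.0f} hours per GB")
```

Under those assumptions, 1 GB could last anywhere from about an hour and a half of video to over a hundred hours of plain text, which is why "it depends on what you view" is the honest answer.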
[ "BULLET::::- Telecommunications (capacity): The world's effective capacity to exchange information through two-way telecommunication networks was 281 petabytes of information in 1986, 471 petabytes in 1993, 2,200 petabytes in 2000, and 65,000 petabytes in 2007 (this is the informational equivalent to every person exchanging 6 newspapers per day).\n\nBULLET::::- Telecommunications (usage): In 2008, AT&T transferred about 30 petabytes of data through its networks each day. That number grew to 197 petabytes daily by March 2018.\n", "BULLET::::- Internet: Google processed about 24 petabytes of data per day in 2009. The BBC's iPlayer is reported to have transferred up to 7 petabytes each month in 2010. In 2012, Imgur transferred about 4 petabytes of data per month.\n\nBULLET::::- Supercomputers: In January 2012, Cray began construction of the Blue Waters, which has \"up to 500 petabytes of [digital] tape storage\".\n\nBULLET::::- Data storage system: In August 2011, IBM was reported to have built the largest storage array ever, with a capacity of 120 petabytes.\n\nBULLET::::- Databases: Teradata Database 12 has a capacity of 50 petabytes of compressed data.\n", "It is predicted by 2018, mobile traffic will reach by 10 billion connections with 94% traffic comes from Smartphones, laptops and tablets. Also 69% of mobile traffic from Videos since we have high definition screens available in smart phones and 176.9 wearable devices to be at use. Apparently, 4G will be dominating the traffic by 51% of total mobile data by 2018.\n\nSection::::Usage.:By government agencies.\n\nSection::::Usage.:By government agencies.:Law enforcement.\n", "Gfive had launched 5 high-end smartphones under G Cloud range of internet technology. It will offer users up to 5.5 GB free cloud space memory through which user can easily synchronise data to laptops etc. Even if user lost his/her mobile he/she can just transfer all the data by one touch on his new phone's G Cloud service.\n\nSection::::Products.\n", "Research has also been carried out on the use of data reduction in wearable (wireless) devices for health monitoring and diagnosis applications. For example, in the context of epilepsy diagnosis, data reduction has been used to increase the battery lifetime of a wearable EEG device by selecting, and only transmitting, EEG data that is relevant for diagnosis and discarding background activity.\n\nSection::::Best practices.\n\nThese are common techniques used in data reduction.\n\nBULLET::::- Order by some aspect of size.\n\nBULLET::::- Table diagonalization, whereby rows and columns of tables are re-arranged to make patterns easier to see (refer to the diagram).\n", "BULLET::::- The costs to provide a comprehensive appropriate network such as LTE or 5G are enormous .\n\nSection::::Outlook.\n\nThe solution to handle the flow of data is expected to come from artificial intelligence. Doubts in artificial intelligence (AI) and decision making by AI exist.\n\nSection::::Tests.\n\nIn April 2019 test and verification of communication elements took place on the EuroSpeedway Lausitz. Participants were Ford, Samsung, Vodafone, Huawei, LG Electronics and others. Topics were communication matters, especially interoperability, said to have been successful at 96 %.\n", "Mobile data traffic doubled between the end of 2011 (~620 Petabytes in Q4 2011) and the end of 2012 (~1280 Petabytes in Q4 2012). 
This traffic growth is and will continue to be driven by large increases in the number of mobile subscriptions and by increases in the average data traffic per subscription due to increases in the number of smartphones being sold, the use of more demanding applications and in particular video, and the availability and deployment of newer 3G and 4G technologies capable of higher data rates. Total mobile broadband traffic was expected to increase by a factor of 12 to roughly 13,000 PetaBytes by 2018 .\n", "Section::::Performance.\n\nIn tests performed in July 2013 by Alcatel-Lucent and Telekom Austria using prototype equipment, aggregate (sum of uplink and downlink) data rates of 1.1 Gbit/s were achieved at a distance of 70 m and 800 Mbit/s (0.8 Gbit/s) at a distance of 100 m, in laboratory conditions with a single line. On older, unshielded cable, aggregate data rates of 500 Mbit/s were achieved at 100 m.\n\nSection::::Deployment scenarios.\n", "To anticipate and contribute to this change, according to MITMOT proposal summary, it has introduced the following technical features to enable a better support of handsets: \n\nBULLET::::- Built-in support for asymmetric antenna configurations to accommodate various terminal sizes (Phone/PDA/PC) offering a scalable and evolutionary solution using space time block codes\n\nBULLET::::- Exploit the spatial diversity provided by MIMO not only to increase the peak data rate but also to grant range extension for indoor/limited outdoor operation (i.e. SOHO, corporate enterprise networks, and public \"hotspots\")\n\nBULLET::::- Support heterogeneous traffic: increase overall peak data rate without jeopardizing lower data rates modes\n", "When installed applications, for example, were not used by the user for a long period, the smartphone automatically detected them and archived them into the cloud to reduce internal storage usage. It also adapted to the usage patterns of the user and performed the backup process whenever applicable. The smartphone also stored the user's photos in the cloud in the default resolution appropriate for upload, until the user specified the resolution.\n\nSection::::Reception.\n\nSection::::Reception.:Sales.\n\nPre-orders after the Kickstarter campaign began in October 2015, with shipping set to start in February 2016.\n\nSection::::Reception.:Known issues.\n", "BULLET::::- £12, 800 mins, unlimited texts, 2 GB data\n\nSection::::Charges.:Bundles.\n\nBULLET::::- £10, 300 mins, unlimited texts, 500 MB data\n\nBULLET::::- £12.50, 500 mins, unlimited texts, 1 GB data\n\nBULLET::::- £15, 800 mins, unlimited texts, 2 GB data\n\nSection::::Charges.:Rolling Bundles.\n\nBULLET::::- £8, 300 mins, unlimited texts, 500 MB data\n\nBULLET::::- £10, 500 mins, unlimited texts, 1 GB data\n\nBULLET::::- £12, 800 mins, unlimited texts, 2 GB data\n", "The theoretical maximum throughput for end user is clearly lower than the peak data rate due to higher layer overheads. Even this is never possible to achieve unless the test is done under perfect laboratory conditions.\n", "SIAD will aggregate its IP data traffic and GSM ATM traffic at the cellsite passing it along to the multi-service routers sitting in front of mobile switching center (MSC). 
\n\nIt will aggregate the cell site traffic and forward to the MSN.\n", "The higher number of cells in HetNet results in user equipment changing the serving cell more frequently when in motion.\n\nThe ongoing work on LTE-Advanced in Release 12, amongst other areas, concentrates on addressing issues that come about when users move through HetNet, such as frequent hand-overs between cells. It also included use of 256-QAM.\n\nSection::::First technology demonstrations and field trials.\n", "It is estimated that by 2017, more than 11 exabytes of data traffic will have to be transferred through mobile networks\n\nevery month. A possible solution is the replacement of some RF-technologies, like Wi-Fi, by others that do not use RF, like Li-Fi, as proposed by the Li-Fi Consortium.\n\nSection::::Limitations of Bandwidth.:Data crunch.\n", "Earlier studies from the University of California, Berkeley, estimated that by the end of 1999, the sum of human-produced information (including all audio, video recordings, and text/books) was about 12 exabytes of data. The 2003 Berkeley report stated that in 2002 alone, \"telephone calls worldwide on both landlines and mobile phones contained 17.3 exabytes of new information if stored in digital form\" and that \"it would take 9.25 exabytes of storage to hold all U.S. [telephone] calls each year\". International Data Corporation estimates that approximately 160 exabytes of digital information were created, captured, and replicated worldwide in 2006. Research from the University of Southern California estimates that the amount of data stored in the world by 2007 was 295 exabytes and the amount of information shared on two-way communications technology, such as cell phones, in 2007 as 65 exabytes.\n", "Section::::Wi-Fi.:Initiation of offloading procedure.\n", "Under IS-95B P_REV=5, it was possible for a user to use up to seven supplemental \"code\" (traffic) channels simultaneously to increase the throughput of a data call. Very few mobiles or networks ever provided this feature, which could in theory offer 115200 bit/s to a user.\n\nSection::::Protocol details.:Physical layer.:Block Interleaver.\n\nAfter convolution coding and repetition, symbols are sent to a 20 ms block interleaver, which is a 24 by 16 array.\n\nSection::::Protocol details.:Physical layer.:Capacity.\n", "With the increasing availability of inter-device networks (e.g. Bluetooth or WifiDirect) there is also the possibility of offloading delay tolerant data to the \"ad hoc\" network layer. In this case, the delay tolerant data is sent to only a subset of data receivers via the 3G network, with the rest forwarded between devices in the \"ad hoc\" layer in a multi-hop fashion.\n\nAs a result, the traffic on the cellular network is reduced, or gets shifted to inter-device networks.\n\nSection::::See also.\n\nBULLET::::- LTE in unlicensed spectrum\n\nBULLET::::- Generic Access Network\n\nSection::::External links.\n\nBULLET::::- Global Wi-Fi Offload Summit\n", "Since August 2013 the network of Online is \"AS12876 ONLINE S.A.S\". It is independent of the one of Free, which was not the case before. 
This allowed the company to move away from the closed interconnection policy of the internet service provider.\n\nIn early 2015, the company announced to have exceeded Gb/s of immediate Internet traffic.\n\nIn May 2016, Online shows a total of Gb/s of capacity on his links at their weathermap and is present at the following Internet Exchange Points: France-IX, Equinix-IX, AMS-IX et Neutral Internet Exchange\n", "BULLET::::- According to the CSIRO, in the next decade, astronomers expect to be processing 10 petabytes of data every hour from the Square Kilometre Array (SKA) telescope. The array is thus expected to generate approximately one exabyte every four days of operation. According to IBM, the new SKA telescope initiative will generate over an exabyte of data every day. IBM is designing hardware to process this information.\n\nBULLET::::- According to the Digital Britain Report, 494 exabytes of data was transferred across the globe on June 15, 2009.\n", "Section::::Evolved EDGE.\n", "Section::::Deployment.:Africa.:Libya.\n\nLTT Company has begun providing this service in September 2007, it works fine, but the speed has not been increased yet, it is still 256 kbit/s download, and about 128 kbit/s upload for home users. It also provides xDSL services for business users with speeds from 256 kbit/s to 8 Mbit/s.\n\nIn 2007 LTT company started to provide ADSL2+ with 512 kbit/s downstream for home users, also it launched in the beginning of 2009 the Wimax called Libya Max which provides internet wireless.\n\nSection::::Deployment.:Africa.:South Africa.\n", "If an Airbus A380 were filled with microSD cards each holding 512 gigabytes of storage capacity, the theoretical total storage space onboard would be approximately 91 exabytes. A 4h47m flight from New York to Los Angeles would work out to a data transport rate of well over 5 petabytes per second, although this does not account for the time required to write to and read from the cards, which would almost certainly be much longer than the duration of the flight.\n", "BULLET::::- Video streaming: , Netflix had 3.14 petabytes of video \"master copies\", which it compresses and converts into 100 different formats for streaming.\n\nBULLET::::- Photos: , Facebook users had uploaded over 240 billion photos, with 350 million new photos every day. For each uploaded photo, Facebook generates and stores four images of different sizes, which translated to a total of 960 billion images and an estimated 357 petabytes of storage.\n\nBULLET::::- Music: One petabyte of average MP3-encoded songs (for mobile, roughly one megabyte per minute), would require 2000 years to play.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-04812
How was it discovered that two pieces of the same metal will bond together in a vacuum if they come into contact, and how do astronauts repair things in space while avoiding this?
I'm not sure about the history of it, but I'd wager it came up as a theoretical prediction long before anyone tried it. As for the astronaut part: vacuum welding is actually tricky to do. The surfaces must fit together very well, like two finely polished flat surfaces, and they must also be free of any contamination. Most of the parts used in space are manufactured in an atmosphere, so the oxygen in the atmosphere binds to the surface of the metal, creating a thin layer of oxidized metal. That oxide layer won't vacuum weld unless you sand and polish it off while already in a vacuum.
[ "In 2009 the European Space Agency published a peer-reviewed paper detailing why cold welding is a significant issue that spacecraft designers need to carefully consider. The conclusions of this appropriately titled study can be found on page 25 of \"Assessment of Cold Welding between Separable Contact Surfaces due to Impact and Fretting under Vacuum\". The paper also cites a documented example from 1991 with the Galileo spacecraft high-gain antenna (see page 2; the technical source document from NASA regarding the Galileo spacecraft is also provided in a link here).\n", "During the Soyuz 6 mission of 1969, Russian astronauts performed the first welding experiments in space. Three different welding processes were tested using a hardware unit called Vulkan. The tests included welding aluminum, titanium, and stainless steel.\n", "Many plastics are considerably sensitive to atomic oxygen and ionizing radiation. Coatings resistant to atomic oxygen are a common protection method, especially for plastics. Silicone-based paints and coatings are frequently employed, due to their excellent resistance to radiation and atomic oxygen. However, the silicone durability is somewhat limited, as the surface exposed to atomic oxygen is converted to silica which is brittle and tends to crack.\n\nSection::::Solving corrosion.\n", "Section::::Materials.\n\nCorrosion in space has the highest impact on spacecraft with moving parts. Early satellites tended to develop problems with seizing bearings. Now the bearings are coated with a thin layer of gold.\n", "NASA's International Space Station, or ISS, was home to aluminum squares laser marked with CerMark® marking material for almost four years. These squares were part of the Material International Space Station Experiment, or MISSE.\n", "Section::::Materials for use in space.\n\nIn addition to the concerns above, materials for use in spacecraft applications have to cope with radiation damage and high-intensity ultraviolet radiation, thermal loads from solar radiation, radiation cooling of the vehicle in other directions, and heat produced within the spacecraft's systems. Another concern, for orbits closer to Earth, is the presence of atomic oxygen, leading to corrosion of exposed surfaces; aluminium is an especially sensitive material. Silver, often used for surface-deposited interconnects, forms layer of silver oxide that flakes off and may erode up to a total failure.\n", "The process of space corrosion is being actively investigated. One of the efforts aims to design a sensor based on zinc oxide, able to measure the amount of atomic oxygen in the vicinity of the spacecraft; the sensor relies on drop of electrical conductivity of zinc oxide as it absorbs further oxygen. \n\nSection::::Other problems.\n", "The availability and favorable physical properties of metals will make them a major component of space manufacturing. Most of the metal handling techniques used on Earth can also be adopted for space manufacturing. A few of these techniques will need significant modifications due to the microgravity environment.\n", "Section::::Testing of Plastic Welds.:Destructive Testing.:Tensile Testing.\n", "Section::::Conclusion.\n", "Another scientist is testing a new glue that fixes fractured bone and stabilizes the seal between metal hardware and bones. The researcher found that when the bone was glued back together on Earth, the materials eventually converted to new bone over time. 
The researcher is now testing the glue in space to see if it accelerates the formation of new bone.\n\nThis scientist believes that surgeries on fractured bone could become a lot less complicated if bone glue were used instead of metal plates, screws and rods.\n\nSection::::Research.:Life Sciences.:Research examples.:Analyzing Bacterial Growth.\n", "Chuck suspects that the killings may have something to do with the crashed Y-13 and requests that Wilson send sample specks to Dr. von Essen at Aviation Medicine. The next day, test results show that they are particles of meteor dust and show no signs of structural damage from passage through the atmosphere. Later, Dr. von Essen explains the results to Chuck: Wherever the encrustation occurs on the Y-13 fuselage, the metal is intact. In places not encrusted, the metal has been transformed into a brittle, carbon-like substance, easily reduced to powder. Chuck theorizes that the covering may be some sort of \"cosmic protection\".\n", "Below is a list of materials well known for their IR weldability:\n\nBULLET::::- Polycarbonate (PC)\n\nBULLET::::- Poly(methyl methacrylate) (PMMA)\n\nBULLET::::- Ethylene vinyl alcohol (EVOH)\n\nBULLET::::- Acrylic\n\nBULLET::::- Polystyrene (PS)\n\nBULLET::::- Acrylonitrile butadiene stryrene (ABS)\n\nBULLET::::- Polyvinyl chloride (PVC)\n\nBULLET::::- Polyethylene (PE)\n\nBULLET::::- Polypropylene (PP)\n\nBULLET::::- Polyketone (PK)\n\nBULLET::::- Elastomers\n\nBULLET::::- Polyamide (PA)\n\nBULLET::::- Polyoxymethylene (POM or Acetal)\n\nBULLET::::- Polytetrafluoroethylene (PTFE)\n\nBULLET::::- High density polyethylene (HDPE)\n\nBULLET::::- Glass fiber reinforced polyethylene (PPE)\n\nBULLET::::- Polypropylene (PP)\n\nBULLET::::- Polybutylene terephthalate (PBT)\n\nBULLET::::- Glass fiber reinforced polyether sulfone (PES)\n\nSection::::Advantages / Disadvantages.\n\nSection::::Advantages / Disadvantages.:Advantages.\n", "Another way of preventing this problem is through materials selection. This will build an inherent resistance to this process and reduce the need of post processing or constant monitoring for failure. Certain metals or alloys are highly susceptible to this issue so choosing a material that is minimally affected while retaining the desired properties would also provide an optimal solution. Much research has been done to catalog the compatibility of certain metals with hydrogen.\n", "Most solids used are engineering materials consisting of crystalline solids in which the atoms or ions are arranged in a repetitive geometric pattern which is known as a lattice structure. The only exception is material that is made from glass which is a combination of a supercooled liquid and polymers which are aggregates of large organic molecules.\n", "BULLET::::- Planar defects (e.g., lack of fusion, cracks)\n\nBULLET::::- Volumetric defects (e.g., porosity, nonmetallic inclusions)\n\nSection::::Inspection technologies.:Magnetic Flux Leakage (MFL).\n\nMain article – Magnetic flux leakage\n", "Only few manufacturer of vacuum interrupters worldwide produce the contact material itself. The basic raw materials copper and chrome are combined to a powerful contact material by means of the arc-melting procedure. The resulting raw parts are processed to RMF or AMF contact discs, whereby the slotted AMF discs are deburred at the end. Contact materials require the following:\n\nBULLET::::1. 
High breaking ability: Excellent electrical conductivity, small thermal conductivity, bigger heat capacity and low hot electron emission capability\n\nBULLET::::2. High breakdown voltage and resistance to electrical erosion\n\nBULLET::::3. Resistance to welding\n\nBULLET::::4. Low cutoff current value\n", "BULLET::::12. Corrosion products that accumulated at the faying surfaces of the dissimilar metals raised the stresses in some instances enough, with the combined corrosive action, to cause cracks to form in the strips. Such cracks were found on 24ST and Alclad 24ST strips coupled with nickel alloys or stainless steel, on Dowmetal H strips coupled with aluminum alloys or stainless steel, and on stainless-steel strips coupled with Dowmetal M.\n", "Section::::Welding techniques.:Laser welding.:Effect of Constituent Properties on Weldability.:Matrix.:Chemical bonding.\n", "The two crews used the morning to move the P6 truss from its overnight position on the station's robotic arm, over to the shuttle's robotic arm. The crew then moved the station's arm along the mobile transporter to an outboard work site that allows attachment of the P6 truss to its new location on the P5 truss on Tuesday. Managers on the ground had Whitson perform an experiment on the shavings Tani collected from the SARJ on Sunday's EVA, putting a magnet under a slip of paper, and testing to see if the shavings collected on the paper, to ascertain if they were metal. The test confirmed the particles collected by Tani were ferrous. This information allowed the managers on the ground to rule out some possibilities of the origin of the particles, such as the thermal covers, which are made of aluminized mylar.\n", "An increase in strain rate increases the upper limit temperature as well as the crack propagation rate. In most metal couples LME does not occur below a threshold stress level.\n\nTesting typically involves tensile specimens but more sophisticated testing using fracture mechanics specimens is also performed.\n\nSection::::Mechanisms.\n\nMany theories have been proposed for LME. The major ones are listed below;\n\nBULLET::::- The dissolution-diffusion model of Robertson and Glickman says that absorption of the liquid metal on the solid metal induces dissolution and inward diffusion. Under stress these processes lead to crack nucleation and propagation.\n", "BULLET::::- Many plastics, namely many plastic tapes (special attention should be paid to adhesives). Fiberglass composites, e.g. Micarta (G-10) and G-30, should be avoided. Even Kapton and Teflon are sometimes advised against.\n\nBULLET::::- Various residues, e.g. flux from soldering and brazing, and lubricants from machining making thorough cleaning imperative. Getting the outgassable residues from tight crevices can be challenging; a good mechanical design that avoids such features can help.\n\nSection::::Materials for vacuum use.\n\nSection::::Materials for vacuum use.:Metals.\n", "Weld coupons, pieces of metal used to test a welders' skill, are typically prepared at the beginning of a welding shift, any time any variable is adjusted or changed and at the end of the shift (and more frequently as required by an inspector). Each coupon must be examined internally and externally to verify full penetration, proper bead width and other criteria. 
With smaller diameters\n", "In microelectromechanical systems (MEMS) and nanoelectromechanical systems (NEMS), the package protects the sensitive internal structures from environmental influences such as temperature, moisture, high pressure and oxidizing species. The long-term stability and reliability of the functional elements depend on the encapsulation process, as does the overall device cost. The package has to fulfill the following requirements:\n\nBULLET::::- protection against environmental influences\n\nBULLET::::- heat dissipation\n\nBULLET::::- integration of elements with different technologies\n\nBULLET::::- compatibility with the surrounding periphery\n\nBULLET::::- maintenance of energy and information flow\n\nSection::::Techniques.\n\nThe commonly used and developed bonding methods are as follows:\n\nBULLET::::- Direct bonding\n", "Special alloys, either with low carbon content or with added carbon \"getters\" such as titanium and niobium (in types 321 and 347, respectively), can prevent this effect, but the latter require special heat treatment after welding to prevent the similar phenomenon of \"knifeline attack\". As its name implies, corrosion is limited to a very narrow zone adjacent to the weld, often only a few micrometers across, making it even less noticeable.\n\nSection::::Corrosion in passivated materials.:Crevice corrosion.\n" ]
[ "Astronauts need to be careful of vacuum welding of parts in space." ]
[ "Vacuum welding is a difficult process that requires a perfectly polished surface among other things." ]
[ "false presupposition" ]
[ "Astronauts need to be careful of vacuum welding of parts in space." ]
[ "false presupposition" ]
[ "Vacuum welding is a difficult process that requires a perfectly polished surface among other things." ]
2018-20531
Why does a pan of sautéing food release a lot of steam as soon as the heat is turned off?
There are two things that get called steam. One is water in the gas phase, i.e. water vapor; it is invisible and is what is released when you boil water. The other is wet steam, the white mist you can see and what we usually think of as steam. It is made up of water vapor (a gas) plus tiny droplets of liquid water suspended in the air, and it is the liquid droplets that we can actually see. Mist and fog are the same thing; clouds are usually liquid droplets too, but can also be solid ice crystals. So when you take the pan off the heat source, it gets colder. The result is that some of the released vapor condenses into liquid droplets you can see instead of staying as invisible gas. So the explanation is that the pan is colder, and you get visible liquid water droplets in the air instead of just invisible vapor.
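That "cooler air holds less vapor" point can be made quantitative with the Magnus approximation for the saturation vapor pressure of water; treat this as a sketch, since the constants are empirical and the formula is only calibrated for roughly -45 to 60 °C:

```python
import math

# Magnus approximation for saturation vapor pressure over liquid water, in hPa.
# Empirical constants; valid roughly between -45 and 60 degrees C.
def saturation_vapor_pressure_hpa(temp_c):
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

for t in (60, 40, 30, 20):
    print(f"{t} C: about {saturation_vapor_pressure_hpa(t):6.1f} hPa")
```

The pressure drops steeply as the air cools, so air that picked up vapor right over the hot pan becomes supersaturated a moment later as things cool, and the excess vapor condenses into the visible droplets.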
[ "A cooking technique called flash boiling uses a small amount of water to quicken the process of boiling. For example, this technique can be used to melt a slice of cheese onto a hamburger patty. The cheese slice is placed on top of the meat on a hot surface such as a frying pan, and a small quantity of cold water is thrown onto the surface near the patty. A vessel (such as a pot or frying-pan cover) is then used to quickly seal the steam-flash reaction, dispersing much of the steamed water on the cheese and patty. This results in a large release of heat, transferred via vaporized water condensing back into a liquid (a principle also used in refrigerator and freezer production).\n", "BULLET::::- Sauté pans, used for sautéing, have a large surface area and relatively low sides to permit rapid evaporation and allow the cook to toss the food. The word \"sauté\" comes from the French verb \"sauter\", meaning \"to jump\". Saute pans often have straight vertical sides, but may also have flared or rounded sides.\n", "Section::::History.\n", "Section::::Practical uses.:Flash boiling in cooking.\n", "In a conventional boiler, fuel is burned and the hot gases produced pass through a heat exchanger where much of their heat is transferred to water, thus raising the water's temperature.\n", "Steam blanching systems inject hot air (~100 °C) onto food as it passes through the blanching system on a conveyor belt. This method greatly reduces the leaching of water-soluble compounds from the product and is the preferred technique for smaller foods and those with cut surfaces. Steam blanching is more energy efficient, and the ability for quick heating allows for shorter processing times. This reduced heat exposure preserves color, flavor, and overall quality of the food; however, evaporation may occur leading to lower masses and product yields.\n", "The classic steamer has a chimney in the center, which distributes the steam among the tiers.\n\nSection::::Method.\n\nSteaming works by boiling water continuously, causing it to vaporize into steam; the steam then carries heat to the nearby food, thus cooking the food. The food is kept separate from the boiling water but has direct contact with the steam, resulting in a moist texture to the food. This differs from double boiling, in which food is not directly exposed to steam, or pressure cooking, which uses a sealed vessel.\n", "Drying: The press cake is dried by tumbling inside a heated drum. Under-drying may result in the growth of molds or bacteria; over-drying can cause scorching and reduction in the meal's nutritional value.\n\nTwo alternative methods of drying are used:\n\nBULLET::::- Direct: Very hot air at a temperature of 500 °C (932 °F) is passed over the material as it is tumbled rapidly in a cylindrical drum. While quicker, heat damage is much more likely if the process is not carefully controlled.\n\nBULLET::::- Indirect: The meal is tumbled inside a cylinder containing steam-heated discs.\n", "This method of food preparation involves using the Maillard reaction to \"brown\" the meat or vegetables and then deglazing with stock or water and simmering the mixture over low heat for an extended period of time. 
It is often done in a cast iron pot or dutch oven, so the heat can be evenly applied and distributed.\n\nSection::::Description.:Meat.\n", "Cooking often involves water, frequently present in other liquids, which is both added in order to immerse the substances being cooked (typically water, stock or wine), and released from the foods themselves. A favorite method of adding flavor to dishes is to save the liquid for use in other recipes. Liquids are so important to cooking that the name of the cooking method used is often based on how the liquid is combined with the food, as in steaming, simmering, boiling, braising and blanching. Heating liquid in an open container results in rapidly increased evaporation, which concentrates the remaining flavor and ingredients – this is a critical component of both stewing and sauce making.\n", "Steam infusion\n\nSteam Infusion is a direct-contact heating process in which steam condenses on the surface of a pumpable food product. Its primary use is for the gentle and rapid heating of a variety of food ingredients and products including milk, cream, soymilk, ketchup, soups and sauces.\n\nUnlike steam injection and traditional vesselled steam heating; the steam infusion process surrounds the liquid food product with steam as opposed to passing steam through the liquid.\n\nSteam Infusion allows food product to be cooked, mixed and pumped within a single unit, often removing the need for multiple stages of processing.\n\nSection::::History.\n", "For example, if the vapor's partial pressure is 2% of atmospheric pressure and the air is cooled from 25 °C, starting at about 22 °C water will start to condense, defining the dew point, and creating fog or dew. The reverse process accounts for the fog burning off in the morning. If the humidity is increased at room temperature, for example, by running a hot shower or a bath, and the temperature stays about the same, the vapor soon reaches the pressure for phase change, and then condenses out as minute water droplets, commonly referred to as steam.\n", "BULLET::::- a small amount of still hot water and steam remain in the bottom vessel and are kept heated, with this pressure keeping the water in the upper vessel\n\nBULLET::::- when the heat is removed from the bottom vessel, the vapor pressure decreases, and can no longer support the column of water – gravity (acting on the water) and atmospheric pressure then push the water back into the bottom vessel.\n", "BULLET::::- Some ovens use a temperature probe to cook until a target internal food temperature is reached.\n\nBULLET::::- A common toilet refills the water tank until a float closes the valve. The float is acting as a water level sensor.\n\nSection::::Applications.:Automotive.\n", "This was a vessel with a safety valve, which can be tightly closed by a screw and a lid. Food can be cooked along with water in the vessel when the vessel is heated, and the vessel's internal temperature can be raised by as much as the pressure inside the vessel will permit safely. The maximum pressure is limited by a weight placed on the safety valve lever. If the pressure exceeds this limit, the safety valve will be forced open and steam will escape until the pressure drops sufficient for the weight to close the valve again. 
\n", "Reduction (cooking)\n\nIn cooking, reduction is the process of thickening and intensifying the flavor of a liquid mixture such as a soup, sauce, wine, or juice by simmering or boiling.\n\nReduction is performed by simmering or boiling a liquid such as stock, fruit or vegetable juices, wine, vinegar, or a sauce until the desired concentration is reached by evaporation. This is done without a lid, enabling the vapor to escape from the mixture. Different components of the liquid will evaporate at slightly different temperatures, and the goal of reduction is to drive away those with lowest points of evaporation.\n", "Steam infusion is used in cooking applications on fluid based products. The heat addition on particulate level in a low pressure environment makes steam infusion cooking especially applicable to soups and sauces where particle integrity is important. Steam Infusion has shown the potential to improve the nutritional content of food and is being researched by the University of Lincoln\n", "Sautéing\n\nSautéing or sauteing (, ; in reference to tossing while cooking) is a method of cooking that uses a relatively small amount of oil or fat in a shallow pan over relatively high heat. Various sauté methods exist, and sauté pans are a specific type of pan designed for sautéing.\n\nSection::::Description.\n", "Section::::Initiation.:Flux.\n", "The steaming function works very well for foods like vegetables, potatoes and fish, while convection or combination (convection plus steam) heat is suitable for roasting, braising and baking. One disadvantage of convection heat is that foods tend to dry out more quickly; depending on the product, it may also require very long cooking times. Meanwhile, steaming alone does not get foods crispy enough. \n", "Ingredients for sautéing are usually cut into pieces or thinly sliced to facilitate fast cooking. The primary mode of heat transfer during sautéing is conduction between the pan and the food being cooked. Food that is sautéed is browned while preserving its texture, moisture, and flavor. If meat, chicken, or fish is sautéed, the sauté is often finished by deglazing the pan's residue to make a sauce.\n", "Steam and air are used as leavening agents when they expand upon heating. To take advantage of this style of leavening, the baking must be done at high enough temperatures to flash the water to steam, with a batter that is capable of holding the steam in until set. This effect is typically used in popovers, Yorkshire puddings, and to a lesser extent in tempura.\n", "Sauces from basic brown sauce to Béchamel sauce and even tomato sauce are simmered for long periods (from 1 to 10 hours) but not boiled. Simmering not only develops the maximum possible flavor, but also allows impurities to collect at the top and be skimmed off periodically as the sauce cooks. Boiling diffuses the impurities into the liquid and results in a bitter taste and unclear stock. Broths are also simmered rather than boiled, and for the same reasons.\n\nSection::::Examples.\n\nCommon preparations involving reductions include:\n\nBULLET::::- Consommés, reduced and clarified stocks\n\nBULLET::::- Gravies\n", "Section::::Initiation.:Stagnation.\n", "Section::::Initiation.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-18636
What do bits and bytes look like and how are they moved around from computer to computer?
A bit is *physically* stored in a computer in a number of ways. On a hard disk, you have a platter - a metal or glass disk with a magnetic coating, typically spinning at 5,400 or 7,200 RPM - and an electromagnet in the read/write head can flip the magnetic orientation of a tiny surface region, so a bit is going to be a north or south pole. On an optical disc, like a CD, DVD, or Blu-ray, the data is a pattern of microscopic pits in a reflective foil, laid out in a spiral; the turns of the spiral are called tracks. A pit can be long or short, a bit like Morse code, but it's not the pit itself that signifies the bit - it's the transitions and the reflective surface in between. The laser shines on the foil and reflects onto a sensor; a pit scatters the laser into dead space, and the long and short runs between transitions encode the 0s and 1s. On solid-state media, like a thumb drive or SSD, you've got transistor structures (floating gates) that are able to trap electrons and hold onto them, even with the power off, for *years*. One per bit. This isn't like a capacitor or battery; the charge sits on a gate that is insulated on all sides. Your system memory, RAM, is banks of capacitors, one per bit; their charge or discharge is a 1 or 0. These capacitors leak and discharge themselves quickly, so there is a refresh cycle where they are checked for their state and given an additional jolt if they're supposed to be charged. When you read a bit, the capacitor actually has to discharge, and then, to keep the value, a feedback circuit recharges it; if you want to set a bit to 0, you have to drain that capacitor into a sink. Your CPU cache is built from flip-flops, a type of circuit made of transistors. These are loops that either carry current or don't, and that gets you your bit. They're very fast, which is why they're used on-chip, but they're also physically big, which is why they're not used for main memory. A bus is just a bunch of wires, whether in a cable or traces etched on a circuit board. Typically, the RAM-to-CPU bus will have one trace per bit, so when you read a "word", you're reading multiple bits in parallel. A word is the minimum read and write size of your CPU, so if you're only interested in 1 byte, the machine may have to read and write 4, 8, 16, 32, 64 bytes at once. Or more. A bus may also be multiplexed. So if you're sending 8 bits over 1 wire, the bits go out as a sequence of electrical pulses in a well-defined order. Other buses and channels rely on more elaborate signaling. USB uses a differential signal - two wires carrying mirror-image copies of the same waveform - and the bits are extracted from the *difference* between the two signals. This is to account for noise - you see, any conductor is an antenna. That's why crystal radios pick up AM broadcasts when you use the kitchen sink as your antenna. So a long Ethernet or telephone cable or whatever is going to pick up electromagnetic noise from everywhere, which can corrupt the message. It used to be that a signal was encoded as some voltage relative to ground - but ground at the source could be at a different potential than ground at the destination, so you send 5 V and they receive 4.3 V. With two wires and a differential signal, the value is always relative: if you pick up noise, both wires are affected the same way, and the difference remains the same.
So to address the end of your question: whether it's a signal over Ethernet, coax, a bus, or Wi-Fi, it's ultimately an electrical or electromagnetic wave, because it's all electromagnetism. And this is the subject of modulation, of which there are many types - you'd be most familiar with AM and FM, but there's more than that.
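If it helps to make the "bits and bytes" part concrete, here's a small illustrative Python sketch. Nothing here is how any particular bus actually works; it just demonstrates the two ideas above: bytes are groups of bits sent in a well-defined order, and a differential receiver cancels noise that hits both wires:

```python
# Part 1: a byte is 8 bits; serial links send those bits in a defined order.
message = "Hi".encode("utf-8")            # text -> bytes
for byte in message:
    print(f"byte {byte:3d} = bits {byte:08b}")
bitstream = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
print("on the wire:", bitstream)

# Part 2: toy model of differential signalling. The same noise is added to
# both wires, but the receiver only looks at the difference, which is unchanged.
wire_a = [1.0 if b else -1.0 for b in bitstream]   # the signal
wire_b = [-v for v in wire_a]                      # mirror-image copy
noise = 0.4                                        # common-mode noise on both wires
received = [(a + noise) - (b + noise) for a, b in zip(wire_a, wire_b)]
decoded = [1 if v > 0 else 0 for v in received]
assert decoded == bitstream                        # the bits survive the noise
```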
[ "Section::::Binary number representation.\n\nComputers represent data in sets of binary digits. The representation is composed of bits, which in turn are grouped into larger sets such as bytes.\n\nA \"bit\" is a binary digit that represents one of two states. The concept of a bit can be understood as a value of either \"1\" or \"0\", \"on\" or \"off\", \"yes\" or \"no\", \"true\" or \"false\", or encoded by a switch or toggle of some kind.\n", "Digital information exists as one of two digits, either 0 or 1. These are known as bits (a contraction of \"binary digits\") and the sequences of 0s and 1s that constitute information are called bytes.\n", "In 1983 TVOntario included the show's episodes as part of a correspondence course. The original broadcasts on TVOntario also had a companion series, \"The Academy\", that was scheduled immediately afterward in which \"Bits and Bytes\" technology consultant, Jim Butterfield, appeared as co-host to further elaborate on the concepts introduced in the main series.\n\nSection::::\"Bits and Bytes 2\".\n", "In the second \"Bits and Bytes\" series, produced almost a decade later, Billy Van assumed the role of instructor and taught a new female student. The new series focused primarily on IBM PC compatibles (i.e. Intel-based 286 or 386 computers) running DOS and early versions of Windows, as well as the newer and updated technologies of that era. For that series, a selection of the original's animated spots are reaired to illustrate fundamental computer technology priniciples along with a number of new spots to cover newly emerged concepts of computer technology such as advances in computer graphics and data management. \n", "Computers usually manipulate bits in groups of a fixed size, conventionally named \"words\". Like the byte, the number of bits in a word also varies with the hardware design, and is typically between 8 and 80 bits, or even more in some specialized computers. In the 21st century, retail personal or server computers have a word size of 32 or 64 bits.\n", "BULLET::::- Byte – is a unit of digital information that most commonly consists of eight bits, representing a binary number. Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures.\n", "Section::::Physical representation.:Transmission and processing.\n\nBits are transmitted one at a time in serial transmission, and by a multiple number of bits in parallel transmission. A bitwise operation optionally processes bits one at a time. Data transfer rates are usually measured in decimal SI multiples of the unit bit per second (bit/s), such as kbit/s.\n\nSection::::Physical representation.:Storage.\n", "The size of the byte has historically been hardware dependent and no definitive standards existed that mandated the size – byte-sizes from 1 to 48 bits are known to have been used in the past. Early character encoding systems often used six bits, and machines using six-bit and nine-bit bytes were common into the 1960s. These machines most commonly had memory words of 12, 24, 36, 48 or 60 bits, corresponding to two, four, six, eight or 10 six-bit bytes. In this era, bytes in the instruction stream were often referred to as \"syllables\", before the term byte became common.\n", "Bits can be implemented in several forms. 
In most modern computing devices, a bit is usually represented by an electrical voltage or current pulse, or by the electrical state of a flip-flop circuit.\n", "Section::::Units derived from bit.:Nibble.\n\nA group of four bits, or half a byte, is sometimes called a nibble or nybble. This unit is most often used in the context of hexadecimal number representations, since a nibble has the same amount of information as one hexadecimal digit.\n\nSection::::Units derived from bit.:Crumb.\n\nA pair of two bits or a quarter byte was called a crumb, often used in early 8-bit computing (see Atari 2600, ZX Spectrum). It is now largely defunct.\n\nSection::::Units derived from bit.:Word, block, and page.\n", "Section::::History.:Development.\n", "BULLET::::- \"Each one of the 10,000 positions of memory is numbered from 0000 to 9999 and each stored character must occupy one of these positions.\" (page 8)\n\nBULLET::::- The word byte, meaning eight bits, is coined by Dr. Werner Buchholz in June 1956, during the early design phase for the IBM Stretch computer.\n\nBULLET::::- IBM 650 RAMAC (a decimal addressed machine) announcement\n", "The intro sequence featured a montage of common computer terms such as \"ERROR\", \"LOGO\" and \"ROM\", as well as various snippets of simple computer graphics and video effects, accompanied by a theme song that very heavily borrows from the 1978 song Neon Lights by Kraftwerk. \n\nSection::::Series Format.\n", "Although the possibility of a \"Bits and Bytes 3\" was suggested at the end of the second series, TVOntario eventually elected instead to rebroadcast the Knowledge Network computer series, \"Dotto's Data Cafe\", as a more economical and extensive production on the same subject.\n\nSection::::Episodes (1983-84).\n\nBULLET::::- Program 1: Getting Started\n\nBULLET::::- Program 2: Ready-Made Programs\n\nBULLET::::- Program 3: How Programs Work?\n\nBULLET::::- Program 4: File & Data Management\n\nBULLET::::- Program 5: Communication Between Computers\n\nBULLET::::- Program 6: Computer Languages\n\nBULLET::::- Program 7: Computer-Assisted Instruction\n\nBULLET::::- Program 8: Games & Simulations\n\nBULLET::::- Program 9: Computer Graphics\n\nBULLET::::- Program 10: Computer Music\n", "In the 1980s, when bitmapped computer displays became popular, some computers provided specialized bit block transfer (\"bitblt\" or \"blit\") instructions to set or copy the bits that corresponded to a given rectangular area on the screen.\n\nIn most computers and programming languages, when a bit within a group of bits, such as a byte or word, is referred to, it is usually specified by a number from 0 upwards corresponding to its position within the byte or word. However, 0 can refer to either the most or least significant bit depending on the context.\n\nSection::::Other information units.\n", "Byte\n\nThe byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures.\n", "The C and C++ programming languages define \"byte\" as an \"\"addressable unit of data storage large enough to hold any member of the basic character set of the execution environment\"\" (clause 3.6 of the C standard). The C standard requires that the integral data type \"unsigned char\" must hold at least 256 different values, and is represented by at least eight bits (clause 5.2.4.2.1). 
Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte. In addition, the C and C++ standards require that there are no \"gaps\" between two bytes. This means every bit in memory is part of a byte.\n", "Bits and Bytes\n\nBits and Bytes was the name of two Canadian educational television series produced by TVOntario that taught the basics of how to use a personal computer. \n\nThe first series, made in 1983, starred Luba Goy as the Instructor and Billy Van as the Student. \"Bits and Bytes 2\" was produced in 1991 and starred Billy Van as the Instructor and Victoria Stokle as the Student. The Writer-Producers of both \"Bits and Bytes\" and \"Bits and Bytes 2\" were Denise Boiteau & David Stansfield. \n\nSection::::Title Sequence.\n", "It is a deliberate respelling of \"bite\" to avoid accidental mutation to \"bit\".\n\nAnother origin of \"byte\" for bit groups smaller than a machine's word size (and in particular groups of four bits) is on record by Louis G. Dooley, who claimed he coined the term while working with Jules Schwartz and Dick Beeler on an air defense system called SAGE at MIT Lincoln Laboratory in ca. 1956/1957, which was jointly developed by Rand, MIT, and IBM. Later on, Schwartz's language JOVIAL actually used the term, but he recalled vaguely that it was derived from AN/FSQ-31.\n", "Computers usually manipulate bits in groups of a fixed size, conventionally called \"words\". The number of bits in a word is usually defined by the size of the registers in the computer's CPU, or by the number of data bits that are fetched from its main memory in a single operation. In the IA-32 architecture more commonly known as x86-32, a word is 16 bits, but other past and current architectures use words with 8, 9, 12, 18, 24, 26, 32, 36, 39, 40, 48, 56, 60, 64, 80 bits or others.\n", "BULLET::::- \"nearly 15 million bytes\" with no other abbreviations\n\nBULLET::::- 1977 Disk/Trend Report – Rigid Disk Drives, published June 1977\n", "A digit 1 is created by an inferior or superior state that lasts 100 microseconds, and a digit 0 is created by an inferior or superior state that lasts 200 microseconds. Consequently, the transmission rate is variable, depending upon how many of the characters are \"one\" and how many are \"zero\"; the average rate is about 7,500 bits per second. A 400 microsecond burst is an \"end of frame\" indicator and also saves time. For example, if the 32-bit destination address field has some of its most significant bits zero, they need not be sent; the \"end of frame\" delimits the field and all receiving devices assume the untransmitted bits are zero.\n", "On most modern operating systems, files are organized into one-dimensional arrays of bytes. The format of a file is defined by its content since a file is solely a container for data, although, on some platforms the format is usually indicated by its filename extension, specifying the rules for how the bytes must be organized and interpreted meaningfully. For example, the bytes of a plain text file ( in Windows) are associated with either ASCII or UTF-8 characters, while the bytes of image, video, and audio files are interpreted otherwise. Most file types also allocate a few bytes for metadata, which allows a file to carry some basic information about itself.\n", "Byte (disambiguation)\n\nA byte is a unit of digital information in computing and telecommunications that most commonly consists of eight bits. 
\n\nByte may also refer to:\n\nBULLET::::- \"Byte\" (magazine), a computer industry magazine\n\nBULLET::::- Byte (song), a song by Martin Garrix and Brooks\n\nBULLET::::- \"Bytes\" (album), an album by Black Dog Productions\n\nBULLET::::- Byte (retailer), a computer retailer in the United Kingdom\n\nBULLET::::- Byte (dinghy), a sailing dinghy\n\nBULLET::::- \"Byte\", a naming series for electric cars from Byton\n\nSection::::See also.\n\nBULLET::::- Nybble\n", "The figures below are simplex data rates, which may conflict with the duplex rates vendors sometimes use in promotional materials. Where two values are listed, the first value is the downstream rate and the second value is the upstream rate.\n\nAll quoted figures are in metric decimal units. Note that these aren't the traditional binary prefixes for memory size. These decimal prefixes have long been established in data communications. This occurred before 1998 when IEC and other organizations introduced new binary prefixes and attempted to standardize their use across all computing applications.\n\nSection::::Bandwidths.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-19271
Why do people go crazy over gas going up by a couple of cents?
Because a lot of people, in the United States at least, are really poor and depend on their cars for work, getting food, etc.
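For a sense of scale, here's a back-of-the-envelope sketch; every input is an assumed round number, not a statistic:

```python
# Back-of-the-envelope: yearly cost of a small per-gallon price increase.
# All inputs are assumed round numbers for illustration only.
miles_per_year = 12_000
miles_per_gallon = 25
price_increase_per_gallon = 0.05  # "a couple cents" and change

gallons_per_year = miles_per_year / miles_per_gallon
extra_cost = gallons_per_year * price_increase_per_gallon
print(f"{gallons_per_year:.0f} gal/year -> about ${extra_cost:.0f} more per year")
```

Under those assumptions a nickel per gallon works out to only about $24 a year for one car, so the strong reaction is less about any single increase and more about tight budgets and the worry that small increases keep stacking up.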
[ "The FTC monitors competition in energy markets and released its latest staff report on gasoline prices in September 2011. Leibowitz said the American people need to understand why they often pay so much for gasoline. \"Our report spells out the factors that determine what consumers pay at the pump, and why gas prices seem to 'rocket up' but feather down.\"\n", "BULLET::::- Even if the gasoline price grows to $100 I’ll still drive my car. Even if the price drops to a penny, I am not buying it still.\n\nBULLET::::- I will set my house on fire so the fire spread over my neighbor’s barn.\n\nBULLET::::- They pretend to pay me decent salary; I pretend that I am working (also very common in the former USSR).\n", "Section::::Aftermath.:Ongoing ratecapping.\n", "One analyst explained about changing attitudes of young people:\n\nAnother elaborated about the \"saturation of demand\" hypothesis:\n", "Section::::Background.:Fuel prices.\n\nThe price of petrol (SP95-E10) decreased during 2018, from €1.47 per litre in January to €1.43 per litre in the last week of November.\n", "Section::::Proponents.\n\nPresidential hopefuls John McCain and Hillary Rodham Clinton both championed this proposal. The two political opponents purported this to be a short-term fix for gas prices that were set to hit $4 a gallon in the summer of 2008. With economic woes topping the American peoples' list of concerns, this became a hotly debated issue in the 2008 U.S. Presidential Election.\n", "In European Union member states, petrol prices are much higher than in North America due to higher fuel excise or taxation, although the base price is also higher than in the U.S. Occasionally, price rises trigger national protests. In the UK, a large-scale protest in August and September 2000, known as 'The Fuel Crisis', caused wide-scale havoc not only across the UK, but also in some other EU countries. The UK Government eventually backed down by indefinitely postponing a planned increase in fuel duty. This was partially reversed during December 2006 when then-Chancellor of the Exchequer Gordon Brown raised fuel duty by 1.25 pence per litre.\n", "Rochester Ford Toyota in Rochester, MN, known for tough negotiating, shifted to a fixed price and an emphasis on making the customer's day. New car sales doubled and it recorded a 30% rise in customer satisfaction.\n\nIn April 2000, the Ford Motor Company decided to incorporate the Fish Philosophy in their training programs. This decision came about as a result of the lack of motivation in a certain division of the company.\n", "Barack Obama was perhaps the most visibly vocal critic of the measure. He and other critics of the proposal have exclaimed that the holiday would be nothing more than a \"short-term, quick-fix\" that would not solve the nation's current and long term problems of high oil prices and foreign oil dependency. Critics have nearly unanimously denounced the scheme as nothing more than pandering for votes in the Indiana and North Carolina primaries. 
\n", "Clinton demurred when asked to name a single economist who supported the holiday, but libertarian George Mason professor Bryan Caplan wrote a \"New York Times\" op-ed arguing that \"we could do a lot worse than the gas tax holiday.\" He said that while he agreed that it would not seriously reduce prices due to increased demand, it was \"a relatively cheap symbolic gesture that makes truly bad policies less likely\" and that it would give oil companies an incentive to increase production.\n\nSection::::Critics.\n", "Gas prices rose from $2.34/gallon in May 2017 to $2.84 in May 2018, cutting into the tax cut savings. Economist Mark Zandi stated: \"If the 50-cent per gallon increase in gas prices remains, it would cost the average American $450 a year, offsetting about half the tax [cut] benefit.\" President Trump's withdrawal from the Iran nuclear deal was one factor in the increase in gas and oil prices, along with quotas established by OPEC in the face of an economy growing worldwide.\n\nSection::::Household financial position.:SNAP participation.\n", "CAFE advocates assert that most of the gains in fuel economy over the past 30 years can be attributed to the standard itself. Opponents assert that economic forces are responsible for fuel economy gains, and that higher fuel prices drove customers to seek more fuel-efficient vehicles. CAFE standards have come under attack by some conservative thinktanks, along with safety experts, car and truck manufacturers, some consumer and environment groups, and organized labor. Rather than mandating fuel economy increases, Charles Krauthammer advocated using a significant increase in gasoline taxes that would be revenue-neutral for the government.\n\nSection::::Active debate.:Effect on traffic safety.\n", "The proposal met criticism from a wide array of news sources, politicians, the vast majority of economists, and the Bush administration.\n\nEconomic theory is very clear that the incidence of a consumption tax (who is expected to pay the tax) is inconsequential. Even if it were to lower the cost paid by the consumer, it would just result in a spike in demand during the period it was in effect with rising prices in response.\n", "In responding to the protests, the government argued that lower than needed supplies by OPEC and the Katrina hurricane had a more significant impact on the price of fuel than the level of duty.\n\nSection::::2007.\n", "Section::::Origins of \"Twentysix Gasoline Stations\".:The Book.\n", "In 2008, a report by Cambridge Energy Research Associates stated that 2007 had been the year of peak gasoline usage in the United States, and that record energy prices would cause an \"enduring shift\" in energy consumption practices. According to the report, in April gas consumption had been lower than a year before for the sixth straight month, suggesting 2008 would be the first year U.S. gasoline usage declined in 17 years. The total miles driven in the U.S. 
began declining in 2006.\n", "And although not directly related, the near-disaster at Three Mile Island on March 28, 1979, also increased anxiety about energy policy and availability.\n\nDue to memories of oil shortage in 1973, motorists soon began panic buying, and long lines appeared at gas stations, as they had six years earlier during the 1973 oil crisis.\n\nAs the average vehicle of the time consumed between two and three liters (about 0.5–0.8 gallons) of gasoline (petrol) an hour while idling, it was estimated that Americans wasted up to of oil per day idling their engines in the lines at gas stations.\n", "BULLET::::- Reduce the annual growth rate in our energy demand to less than two percent.\n\nBULLET::::- Reduce gasoline consumption by ten percent below its current level.\n\nBULLET::::- Cut in half the portion of United States oil which is imported, from a potential level of to .\n\nBULLET::::- Establish a strategic petroleum reserve of one billion barrels, more than six months' supply.\n\nBULLET::::- Increase our coal production by about two thirds to more than 1 billion tons a year.\n\nBULLET::::- Insulate 90 percent of American homes and all new buildings.\n", "Leibowitz was the one commissioner to dissent on a 2007 FTC Report on Spring/Summer 2006 Nationwide Gasoline Price Increases, which found that the increase could be explained by market forces. Leibowitz suggested that the plausible explanation for the increase in gasoline prices, that the Commission found, was not necessarily the \"only\" explanation. \"The question you ask determines the answer you get,\" he wrote, \"whatever theoretical justifications exist don't exclude the real world threat that there was profiteering at the expense of consumers.\" Similarly, in an earlier report investigating accusations of price gouging by oil companies after Hurricane Katrina, Leibowitz wrote separately to note that a handful of refiners studied displayed \"troubling\" conduct.\n", "BULLET::::- On September 14, 2012 more than fifty Lukoil gas station owners in New Jersey and Pennsylvania temporarily raised their prices to over $8 a gallon to protest Lukoil's wholesale gas pricing. The owners are typically charged a wholesale price that is 5 to 10 cents a gallon more than their competitors and some are assessed an additional 25 to 30 cents per gallon based on their location. According to the station owners this makes it difficult to be competitive with stations that sell more established brands for lower prices.\n", "A number of car accidents caused by \"VIP\" vehicles have brought the issue into headlines and inspired motorists to take part in street protests.\n", "Retail store Halfords reported \"high\" sales of fuel cans. Sales of all cans have soared by 225% compared with this time last year, with motorists buying in \"the thousands\", while sales of jerry cans are up by more than 500%.\n\nThe Energy Secretary, Ed Davey urged motorists to \"Keep tanks two-thirds full, but don't panic.\".\n\nThe London Fire Brigade urged businesses to place legal warnings on websites and petrol cans. Chief Inspector Nick Maton of Dorset Police warned motorists not to panic.\n\nThe British army was put on standby to deliver petrol if a strike was called.\n\nSection::::The event.:30 March.\n", "The expectation of future prices and their long-term maintenance at non-economic levels for a certain quantity of consumption also affects vehicle decisions. 
If the price of fuel is so high that marginal consumers cannot afford the same mileage without switching to a more efficient car, then they are forced to sell the less efficient one. An increase of the quantity of such vehicles causes the used market value to fall, which then increases the depreciation expected of a new vehicle, which increases the total cost of ownership of such vehicles, making them less popular.\n", "Section::::Controversies.\n\nSan Francisco's KPIX-TV news reported in June 2013 that there had been tens of thousands of complaints across the U.S. from consumers saying they were misled into signing up with Just Energy, and instead of saving money on their monthly bills as advertised, the cost went up. The complaints led to a consumer fraud lawsuit filed by the state of Illinois, and investigations in New York, Ohio, and Canada. Just Energy settled each of these cases, paying large fines and promising to change its sales practices.\n", "In the May 6, 2007 edition of \"Autoline Detroit\", Bob Lutz, an automobile designer/executive of BMW and Big Three fame, asserted that the CAFE standard was a failure and said it was like trying to fight obesity by requiring tailors to make only small-sized clothes.\n\nProponents state that automobile-purchasing decisions that may have global effects should not be left entirely up to individuals operating in a free market.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-04663
How do the bacteria in probiotic yogurt survive stomach acid and populate the intestines?
Most of them don't, but a few will get through, and once they are within the gut they will start multiplying if they can find a niche there. The intestine already has a huge bacterial population, and its residents have a habit of killing their neighbours, which is why probiotic* yoghurt etc. needs to be taken repeatedly to establish a viable population. * Strictly speaking, prebiotic refers to nutrients that specifically help bacterial growth and probiotic refers to the bacteria themselves, but the two terms are often conflated.
[ "Yogurt is produced using a culture of \"Lactobacillus delbrueckii\" subsp. \"bulgaricus\" and \"Streptococcus thermophilus\" bacteria. In addition, other lactobacilli and bifidobacteria are sometimes added during or after culturing yogurt. Some countries require yogurt to contain a certain amount of colony-forming units (CFU) of bacteria; in China, for example, the requirement for the number of lactobacillus bacteria is at least 1 million CFU per millilitre.\n", "Generally, the bacteria used as starter culture are not the same used as probiotics. There are, however, cases when one bacterium can be used both as starter culture and as probiotic. The scientific community is presently trying to deepen understanding of the roles played by microbes in food processing and human health.\n\nThe most important bacteria in food manufacturing are \"Lactobacillus\" species, belonging to the group of lactic acid bacteria.\n", "On the other hand, there are certain similarities between \"L. bulgaricus\" GLB44 and some of the other probiotics. For example, \"L. bulgaricus\", \"L. rhamnosus\" GG and \"L. plantarum\" 299v all have scientific records of their ability to pass successfully through the gastrointestinal tract, Also they all have records to inhibit to a degree pathogenic bacteria.,\n", "Other strains of L. mucosae have been isolated from human feces, referred to as ME-340, human intestine and vagina, the intestines of dogs, calves, and horses, and the stomach mucosa of breast-fed lamb, strain D.\n\nSection::::Special Functions.\n", "_Microbes are used to convert the lactose sugars into lactic acid through fermentation. _The bacteria used for such fermentation are usually from \"Lactococci\", \"Lactobacilli\", or \"Streptococci\" families.\n\n_Sometimes these microbes are added before or after the acidification step needed for cheese production. \n\n_Also these microbes are responsible for the different flavors of cheese, since they have enzymes that breakdown milk sugars and fats into multiple building blocks. \n\n_The production of yogurt starts from the pasteurization of milk, where undesired microbes are reduced or eliminated. \n", "All other commercially available strains of \"L. bulgaricus\" are isolated from traditional yogurts and are grown in milk. Distinctly from the other \"L. bulgaricus\", GLB44 grows very well in vegetable juices, given its natural plant habitat. Since all probiotics carry some of the organic matter in which they are grown, GLB44 carries traces of vegetable juice. GLB44 is currently grown in the European Union, in vegetable juice sourced from European farms that are GMO free.\n", "The main method of producing yogurt is through the lactic acid fermentation of milk with harmless bacteria. The primary bacteria used are typically \"Lactobacillus bulgaricus\" and \"Streptococcus thermophilus\", and United States as well as European law requires all yogurts to contain these two cultures (though others may be added as probiotic cultures). These bacteria produce lactic acid in the milk culture, decreasing its pH and causing it to congeal. The bacteria also produce compounds that give yogurt its distinctive flavor. 
An additional effect of the lowered pH is the incompatibility of the acidic environment with many other types of harmful bacteria.\n", "BULLET::::- \"Lactobacillus rhamnosus\"\n\nBULLET::::- \"Propionibacterium freudenreichii\"\n\nBULLET::::- \"Bifidobacterium breve\"\n\nBULLET::::- \"Lactobacillus reuteri\"\n\nBULLET::::- \"Lactobacillus salivarius\"\n\nBULLET::::- \"Bifidobacterium infantis\"\n\nBULLET::::- \"Streptococcus thermophilus\"\n\nSection::::Alterations in flora balance.:Research.\n\nTests for whether non-antibiotic drugs may impact human gut-associated bacteria were performed by \"in vitro\" analysis on more than 1000 marketed drugs against 40 gut bacterial strains, demonstrating that 24% of the drugs inhibited the growth of at least one of the bacterial strains.\n\nSection::::Role in disease.\n", "During normal childbirth, it appears that newborns receive transmission of \"L. brevis\" from the mother after a period of days. The transmission appears to occur through breast feeding or through natural childbirth. In infants, this resistance is also helpful in protecting the gut against bile and various acids. Studies have shown that certain strains of \"L. brevis\" are successful at combating \"Helicobacter pylori\", a common gut pathogen in humans.\n", "A meta-analysis that included five double-blind trials examining the short-term (2–8 weeks) effects of a yogurt with probiotic strains on serum cholesterol levels found a minor change of 8.5 mg/dl (0.22 mmol/l) (4% decrease) in total cholesterol concentration, and a decrease of 7.7 mg/dl (0.2 mmol/l) (5% decrease) in serum LDL concentration.\n\nSection::::Research.:Diarrhea.\n", "In 1920, Rettger and Cheplin reported that Metchnikoff's \"Bulgarian Bacillus\", later called \"Lactobacillus delbrueckii \"subsp.\" bulgaricus\", could not live in the human intestine. They conducted experiments involving rats and human volunteers, feeding them with \"Lactobacillus acidophilus\". They observed changes in the composition of fecal microbiota, which they described as \"transformation of the intestinal flora\". Rettger further explored the possibilities of \"L. acidophilus\", and reasoned that bacteria originating from the gut were more likely to produce the desired effect in this environment. In 1935, certain strains of \"L. acidophilus\" were found very active when implanted in the human digestive tract. Trials were carried out using this organism, and encouraging results were obtained, especially in the relief of chronic constipation.\n", "Bifidobacteria were first isolated from a breast-fed infant by Henry Tissier, who also worked at the Pasteur Institute. The isolated bacterium, named \"Bacillus bifidus communis\", was later renamed to the genus \"Bifidobacterium\". Tissier found that bifidobacteria are dominant in the gut microbiota of breast-fed babies and he observed clinical benefits from treating diarrhea in infants with bifidobacteria.\n", "The National Yogurt Association (NYA) of the United States gives a \"Live & Active Cultures Seal\" to refrigerated yogurt products that contain 100 million cells per gram, or frozen yogurt products that contain 10 million cells per gram at the time of manufacture.
In 2002, the FDA and WHO recommended that \"the minimum viable numbers of each probiotic strain at the end of the shelf-life\" be reported on labeling, but most companies that give a number report the viable cell count at the date of manufacture, a number that could be much higher than what exists at consumption. Because of the variability in storage conditions and time before eating, exactly how many active culture cells remain at the time of consumption is difficult to determine.\n", "Although it is still largely unknown how probiotics do this, two mechanisms have currently been proposed. The first mechanism suggests that \"Lactobacilli\" augment the development of intestinal mucins (glycosylated proteins), which consequently protect the body from intestinal infections.\n\nSection::::Health effects.:Diarrhea.:Persistent diarrhea.\n", "Ilya Metchnikoff, a professor at the Pasteur Institute in Paris, researched the relationship between the longevity of Bulgarians and their consumption of yogurt. He had the idea that aging is caused by putrefactive activity, or proteolysis, by microbes that produce toxic substances in the intestine.\n", "In 1899, Henri Tissier, a French pediatrician at the Pasteur Institute in Paris, isolated a bacterium characterised by a Y-shaped morphology (\"bifid\") in the intestinal microbiota of breast-fed infants and named it \"bifidus\". In 1907, Élie Metchnikoff, deputy director at the Pasteur Institute, propounded the theory that lactic acid bacteria are beneficial to human health. Metchnikoff observed that the longevity of Bulgarian peasants was the result of their consumption of fermented milk products. Élie Metchnikoff also suggested that \"oral administration of cultures of fermentative bacteria would implant the beneficial bacteria in the intestinal tract\".\n\nSection::::Metabolism.\n", "Section::::Probiotics.\n\nProbiotics are products aimed at delivering living, potentially beneficial, bacterial cells to the gut ecosystem of humans and other animals, whereas prebiotics are indigestible carbohydrates delivered in food to the large bowel to provide fermentable substrates for selected bacteria. Strains of LAB are the most common microbes employed as probiotics. Two principal kinds of probiotic bacteria, members of the genera \"Lactobacillus\" and \"Bifidobacterium\", have been studied in detail.\n", "Section::::Safety.:Transferable Resistance Genes.\n", "BULLET::::- None of the isolates grow in lysozyme or utilize citrate, and five of six (83%) isolates produce arylsulfatase in 3 days.\n\nBULLET::::- The semi-quantitative catalase activity of all isolates is reactive (45 mm).\n\nDifferential characteristics\n\nBULLET::::- The nearest phylogenetic neighbours, according to 16S rRNA gene sequence similarity, are \"M. neworleansense\" and all \"M. porcinum\" isolates studied (all 99·9%).\n\nSection::::Pathogenesis.\n", "Although probiotics are generally safe, when they are used by oral administration there is a small risk of passage of viable bacteria from the gastrointestinal tract to the blood stream (bacteremia), which can cause adverse health consequences. Some people, such as those with a compromised immune system, short bowel syndrome, central venous catheters, cardiac valve disease and premature infants, may be at higher risk for adverse events. In children with lowered immune systems or who are already critically ill, consumption of probiotics may rarely cause bacteremia or fungemia, leading to sepsis, which is a potentially fatal disease.
Scant complaints of mild gastrointestinal discomfort or gas have been noted.\n", "Section::::Antimicrobial activity.\n\nThe protegrins are highly microbicidal against \"Candida albicans\", \"Escherichia coli\",\"Listeria monocytogenes\", \"Neisseria gonorrhoeae\", and the virions of the human immunodeficiency virus in vitro under conditions which mimic the tonicity of the extracellular milieu. The mechanism of this microbicidal activity is believed to involve membrane disruption, similar to many other antibiotic peptides \n\nSection::::Mimetics as antibiotics.\n", "In the 1980s, Danone researchers took interest in bifidobacteria. They developed a specific strain that can survive in the acidic medium of yogurt. In addition to traditional yogurt bacteria, they decided to add a probiotic strain:Bifidus Actiregularis. Activia products thus contain \"Bifidobacterium animalis DN 173,010\", a proprietary strain of Bifidobacterium, a probiotic which is marketed by Dannon under the trade names \"Bifidus Regularis\", \"Bifidus Actiregularis\", \"Bifidus Digestivum\" and \"Bifidobacterium Lactis\".\n\nSection::::Introductions into new countries.\n\nBULLET::::- 1987: France\n\nBULLET::::- 1988: Belgium, Spain, and the United Kingdom\n\nBULLET::::- 1989: Italy\n\nBULLET::::- 2002: Russia, Japan\n\nBULLET::::- 2003: America\n\nBULLET::::- 2004: Canada\n", "\"Campylobacter\" infections are transmitted to a host via contaminated water and food, sexual activity, and interaction with infected animals. Symptoms include diarrhea, cramping, and abdominal pain. Campylobacter can cause disease in both humans and animals, and most human cases are induced by the species \"Campylobacter jejuni.\"\n\nSection::::Diseases Caused by Exogenous Bacteria.:Terrestrial Exogenous Bacteria.\n", "Due to more than a century of safe use, the FDA has granted \"L. bulgaricus\" a \"grandfather\" status, with an automatic GRAS status (Generally Recognized as Safe). Moreover, the Code of Federal Regulations mandates that in the US, for a product to be called yogurt, it must contain two specific strains of lactic acid bacteria: \"Lactobacillus bulgaricus\" and \"Streptococcus thermophilus\", as regulated by the FDA.\n", "Finally, all three example probiotics in this comparison have all research tests performed by reputable academic professionals, for the case of GLB44, the research was performed by a Harvard Medical School professor, for the case of \"L. rhamnosus\" GG by two Tufts University professors, and for \"L. plantarum\" 299v by a professor at the Lund University. All three probiotics have commercial brand names in the United States. GLB44 is commercially available under the brand name ProViotic as a probiotic food supplement, \"L. rhamnosus\" GG as Culturelle, and \"L. plantarum\" 299v as the ingredient in GoodBelly.\n" ]
[ "bacteria in yoghurt survives the stomach acid." ]
[ "Most of the bacteria actually doesn't survive. " ]
[ "false presupposition" ]
[ "bacteria in yoghurt survives the stomach acid." ]
[ "false presupposition" ]
[ "Most of the bacteria actually doesn't survive. " ]
2018-00868
Why do frozen objects stick to our skin?
Ice adheres to things it freezes on. If you want an example, add a drop of water to a piece of plastic, let it freeze, and then turn the plastic upside down: the ice is stuck to it. When you touch your tongue to something really cold, the moisture on your tongue freezes and thus becomes stuck to the cold object. The object needs to be cold enough that your body heat (specifically the warm blood circulating through your tongue) isn't enough to overcome its coldness.
[ "When the two bodies come in contact, surface deformation may occur on both bodies. This deformation may either be plastic or elastic, depending on the material properties and the contact pressure. When a surface undergoes plastic deformation, contact resistance is lowered, since the deformation causes the actual contact area to increase\n\nSection::::Factors influencing contact conductance.:Surface cleanliness.\n\nThe presence of dust particles, acids, etc., can also influence the contact conductance.\n\nSection::::Measurement of thermal contact conductance.\n", "The group's work has been published in Nature, Science and Advanced Materials journals, as well as \"Proceedings of the (United States) National Academy of Sciences\". They have demonstrated pressure and temperature measurements at a resolution of a few millimeters, as well as organic transistors, organic solar cells, and organic light emitting diodes about 1 μm thick.\n", "Section::::Expansion.\n\nSome substances, such as water and bismuth, expand when frozen.\n\nSection::::Freezing of living organisms.\n", "Amorphous materials do not have a eutectic point, but they do have a critical point, below which the product must be maintained to prevent melt-back or collapse during primary and secondary drying.\n\nSection::::Stages of freeze drying.:Freezing and annealing.:Structurally sensitive goods.\n", "Section::::Thin films and surfaces.\n", "Section::::Permeability.:Increasing permeability.\n\nScientists previously believed that the skin was an effective barrier to inorganic particles. Damage from mechanical stressors was believed to be the only way to increase its permeability. \n", "The surface of an object is the part of the object that is primarily perceived. Humans equate seeing the surface of an object with seeing an object. For example, in looking at an automobile, it is normally not possible to see the engine, electronics, and other internal structures, but the object is still recognized as an automobile because the surface identifies it as one. Conceptually, the \"surface\" of an object can be defined as the topmost layer of atoms. Many objects and organisms have a surface that is in some way distinct from their interior. For example, the peel of an apple has very different qualities from the interior of the apple, and the exterior surface of a radio may have very different components from the interior. Peeling the apple constitutes removal of the surface, ultimately leaving a different surface with a different texture and appearance, identifiable as a peeled apple. Removing the exterior surface of an electronic device may render its purpose unrecognizable. By contrast, removing the outermost layer of a rock or the topmost layer of liquid contained in a glass would leave a substance or material with the same composition, only slightly reduced in volume.\n", "This provides a way to control the wetting properties of a surface by small temperature changes. The described behavior can be exploited in tissue engineering since the adhesion of cells is strongly dependent on the hydrophilicity/hydrophobicity. This way, it is possible to detach cells from a cell culture dish by only small changes in temperature, without the need to additionally use enzymes (see figure). Respective commercial products are already available.\n\nSection::::Applications.:Thermoresponsive surfaces.:Chromatography.\n", "Investigating this phenomenon, the original Green Lantern (Alan Scott) was shocked by the sight of Dr. 
Mahkent shot dead in his stateroom, apparently the victim of Lanky Leeds, a notorious racketeer who was reportedly traveling on the same ship. Thus, when the bizarrely costumed criminal known as the Icicle appeared upon the scene later that same day, wielding a unique weapon capable of instantly freezing solid any moisture in the air, Green Lantern presumed he was actually Lanky Leeds, who had stolen Doctor Mahkent's invention.\n", "BULLET::::1. Polymers with polar substituents are known to have interesting applications including within electrical and optical materials.\n\nBULLET::::2. These polymers are typically transparent.\n\nBULLET::::3. The T (initial decomposition) of these polymers are relatively low compared to their analogues, but have relatively higher T (maximum rate of weight change temperatures). Meaning although they will start to melt quicker, they will take longer to fully change phases.\n\nBULLET::::4. Polymers with large captodative stabilizations starting materials can quickly “unzip” to their starting monomer upon heating.\n", "Section::::Applications.\n\nSection::::Applications.:Bioseparation.\n\nThermoresponsive polymers can be functionalized with moieties that bind to specific biomolecules. The polymer-biomolecule conjugate can be precipitated from solution by a small change of temperature. Isolation may be achieved by filtration or centrifugation.\n\nSection::::Applications.:Thermoresponsive surfaces.\n\nSection::::Applications.:Thermoresponsive surfaces.:Tissue engineering.\n\nFor some polymers it was demonstrated that thermoresponsive behavior can be transferred to surfaces. The surface is either coated with a polymer film or the polymer chains are bound covalently to the surface.\n", "The non-overlapping growth directions also help to explain why dendritic textures are often seen in freeze-casts. This texturing is usually found only on the side of each lamella; the direction of the imposed temperature gradient. The ceramic structure left behind shows the negative image of these dendrites. In 2013, Deville et al. made the observation that the periodicity of these dendrites (tip-to-tip distance) actually seems to be related to the primary crystal thickness.\n\nSection::::Controlling the porous structure.:Particle packing effects.\n", "In frostbite, cooling of the body causes narrowing of the blood vessels (vasoconstriction). Temperatures below −4 °C are required to form ice crystals in the tissues. The process of freezing causes ice crystals to form in the tissue, which in turn causes damage at the cellular level. Ice crystals can damage cell membranes directly. In addition, ice crystals can damage small blood vessels at the site of injury. Scar tissue forms when fibroblasts replace the dead cells.\n\nSection::::Mechanism.:Rewarming.\n", "In recent publications on the subject there are three approaches to the characterization of surface icephobicity. First, the icephobicity implies low adhesion force between ice and the solid surface. In most cases, the critical shear stress is calculated, although the normal stress can be used as well. While no explicit quantitative definition for the icephobicty has been suggested so far, the researchers characterized icephobic surfaces as those having the shear strength (maximum stress) less in the region between 150 kPa and 500 kPa and even as low as 15.6 kPa.\n", "Second, the icephobicity implies the ability to prevent ice formation on the surface. 
Such ability is characterized by whether a droplet of supercooled water (below the normal freezing temperature of 0 C) freezes at the interface. The process of freezing can be characterized by time delay of heterogeneous ice nucleation. The mechanisms of droplet freezing are quite complex and can depend on the temperature level, on whether cooling down of the droplet is performed from the side of the solid substrate or from vapor and by other factors.\n", "Section::::Applications.:Neural tissue.\n", "For most substances, the melting and freezing points are the same temperature; however, certain substances possess differing solid–liquid transition temperatures. For example, agar displays a hysteresis in its melting point and freezing point. It melts at 85 °C (185 °F) and solidifies from 32 °C to 40 °C (89.6 °F to 104 °F).\n\nSection::::Crystallization.\n", "The interpositive is printed with a wet gate, contact print that has been done in \"liquid\", and historically has had only one purpose, namely, to be the element that is used to make the internegative.\n\nIt is sometimes referred to as a Protection IP, which is a good term since the only time the IP is touched is on the occasion of making the first or a replacement internegative. Since interpositives are used so rarely, they are usually the film element that is in the best condition of all the film elements. \n", "Icephobicity\n\nIcephobicity (from \"ice\" and Greek φόβος \"phobos\" \"fear\") is the ability of a solid surface to repel ice or prevent ice formation due to a certain topographical structure of the surface. The word “icephobic” was used for the first time at least in 1950; however, the progress in micropatterned surfaces resulted in growing interest towards icephobicity since the 2000s.\n\nSection::::Icephobicity vs. hydrophobicity.\n", "BULLET::::- Break Deformations – deformations that lead to the breaking of bumps and the creation of new contact areas.\n\nThe energy that is dissipated during the phenomenon is transformed into heat, thus increasing the temperature of the surfaces in contact. The increase in temperature also depends on the relative speed and the roughness of the material, it can be so high as to even lead to the fusion of the materials involved.\n", "The term \"icephobicity\" is similar to the term hydrophobicity and other “-phobicities” in physical chemistry (oleophobicity, lipophobicity, omniphobicity, amphiphobicity, etc.). The icephobicity is different from deicing and anti-icing in that icephobic surfaces, unlike the anti-icing surfaces, do not require special treatment or chemical coatings to prevent ice formation,\n", "Section::::Function.:Biomedical materials.\n", "Section::::Applications.:Biomedical applications.\n\nUnderstanding and being able to specify the surface properties on biomedical devices is critical in the biomedical industry, especially regarding devices that are implanted in the body. A material interacts with the environment at its surface, so the surface properties largely direct the interactions of the material with its environment. Surface chemistry and surface topography affect protein adsorption, cellular interactions, and the immune response.\n", "Freeze-cast components, in their basic form, are ideal for use as heat-resisting objects. In this way, they can be useful in metalwork, as molds or as substrates for metal spray-forming. 
However, with suitable post-processing, they could fulfil many other applications, such as silicon chip mounts, or even engine blocks.\n\nSection::::Theory.\n\nThe science is not particularly well understood. It has been known for years that silica sols (also known as colloidal silica, silicic acid, polysilicic acid) will gel when exposed to temperatures around 0 °C (32 °F). The theoretical mechanism is quite simple:\n", "The modification of surfaces to keep polymers biologically inert has found wide uses in biomedical applications such as cardiovascular stents and in many skeletal prostheses. Functionalizing polymer surfaces can inhibit protein adsorption, which may otherwise initiate cellular interrogation upon the implant, a predominant failure mode of medical prostheses.\n\nNarrow biocompatibility requirements within the medical industry have over the past ten years driven surface modification techniques to reach an unprecedented level of accuracy.\n\nSection::::Applications.:Coatings.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-08583
Why do brain cells die so quickly compared to other cells that can go hours without oxygen/circulation?
Heart tissue also dies really fast. The parts of your body that use a lot of energy die really quickly when their energy source is cut off. Your brain uses a massive amount of energy compared to just about everything else in your body. This means your brain runs out of reserves long before your other organs do, and once out of reserves (with no extra energy arriving) it starts to die off.
[ "As oxygen or glucose becomes depleted in ischemic brain tissue, the production of high energy phosphate compounds such as adenosine triphosphate (ATP) fails, leading to failure of energy-dependent processes (such as ion pumping) necessary for tissue cell survival. This sets off a series of interrelated events that result in cellular injury and death. A major cause of neuronal injury is the release of the excitatory neurotransmitter glutamate. The concentration of glutamate outside the cells of the nervous system is normally kept low by so-called uptake carriers, which are powered by the concentration gradients of ions (mainly Na) across the cell membrane. However, stroke cuts off the supply of oxygen and glucose which powers the ion pumps maintaining these gradients. As a result, the transmembrane ion gradients run down, and glutamate transporters reverse their direction, releasing glutamate into the extracellular space. Glutamate acts on receptors in nerve cells (especially NMDA receptors), producing an influx of calcium which activates enzymes that digest the cells' proteins, lipids, and nuclear material. Calcium influx can also lead to the failure of mitochondria, which can lead further toward energy depletion and may trigger cell death due to programmed cell death.\n", "These processes are the same for any type of ischemic tissue and are referred to collectively as the \"ischemic cascade\". However, brain tissue is especially vulnerable to ischemia since it has little respiratory reserve and is completely dependent on aerobic metabolism, unlike most other organs.\n", "Although loss of function is almost immediate, there is no specific duration of clinical death at which the non-functioning brain clearly dies. The most vulnerable cells in the brain, CA1 neurons of the hippocampus, are fatally injured by as little as 10 minutes without oxygen. However, the injured cells do not actually die until hours after resuscitation. This delayed death can be prevented \"in vitro\" by a simple drug treatment even after 20 minutes without oxygen. In other areas of the brain, viable human neurons have been recovered and grown in culture hours after clinical death. Brain failure after clinical death is now known to be due to a complex series of processes called reperfusion injury that occur \"after\" blood circulation has been restored, especially processes that interfere with blood circulation during the recovery period. Control of these processes is the subject of ongoing research.\n", "Twenty percent of comatose states result from the side effects of a stroke. During a stroke, blood flow to part of the brain is restricted or blocked. An ischemic stroke, brain hemorrhage, or tumor may cause restriction of blood flow. Lack of blood to cells in the brain prevent oxygen from getting to the neurons, and consequently causes cells to become disrupted and die. As brain cells die, brain tissue continues to deteriorate, which may affect the functioning of the ARAS.\n", "In general, oxidative phosphorylation is the process used to supply energy for neuronal processes in the brain. When resources for oxidative phosphorylation are exhausted, neurons turn to aerobic glycolysis in the place of oxygen. However, this can be taxing on a cell. Given that the neurons in question retain juvenile characteristics, they may not be entirely myelinated. Bufill, Agusti, Blesa et. 
al note how “The increase of the aerobic metabolism in these neurons may lead, however, to higher levels of oxidative stress, therefore, favoring the development of neurodegenerative diseases which are exclusive, or almost exclusive, to humans, such as Alzheimer’s disease.” Specifically through various studies of the brain, aerobic glycolysis activity has been detected at high levels in the dorsolateral prefrontal cortex, which has functionality regarding the working memory. Stress on these working memory cells may support conditions related to neurodegenerative diseases such as Alzheimer’s Disease.\n", "BULLET::::2. The cranial nerves III-XII emerge from the brainstem. These cranial nerves supply the face, head, and viscera. (The first two pairs of cranial nerves arise from the cerebrum).\n\nBULLET::::3. The brainstem has integrative functions being involved in cardiovascular system control, respiratory control, pain sensitivity control, alertness, awareness, and consciousness. Thus, brainstem damage is a very serious and often life-threatening problem.\n\nSection::::Function.:Cranial nerves.\n", "Brain death can sometimes be difficult to differentiate from other medical states such as barbiturate overdose, alcohol intoxication, sedative overdose, hypothermia, hypoglycemia, coma, and chronic vegetative states. Some comatose patients can recover to pre-coma or near pre-coma level of functioning, and some patients with severe irreversible neurological dysfunction will nonetheless retain some lower brain functions, such as spontaneous respiration, despite the losses of both cortex and brain stem functionality. Such is the case with anencephaly.\n", "The brain, although protected by the blood–brain barrier, can be affected by infections including viruses, bacteria and fungi. Infection may be of the meninges (meningitis), the brain matter (encephalitis), or within the brain matter (such as a cerebral abscess). Rare prion diseases including Creutzfeldt–Jakob disease and its variant, and kuru may also affect the brain.\n\nSection::::Clinical significance.:Tumours.\n", "During brain ischemia, the brain cannot perform aerobic metabolism due to the loss of oxygen and substrate. The brain is not able to switch to anaerobic metabolism and, because it does not have any long term energy stored, the levels of adenosine triphosphate (ATP) drop rapidly, approaching zero within 4 minutes. In the absence of biochemical energy, cells begin to lose the ability to maintain electrochemical gradients. Consequently, there is a massive influx of calcium into the cytosol, a massive release of glutamate from synaptic vesicles, lipolysis, calpain activation, and the arrest of protein synthesis. Additionally, removal of metabolic wastes is slowed. The interruption of blood flow to the brain for ten seconds results in the immediate loss of consciousness. The interruption of blood flow for twenty seconds results in the stopping of electrical activity. An area called a penumbra may result, wherein neurons do not receive enough blood to communicate, however do receive sufficient oxygenation to avoid cell death for a short period of time.\n", "Hemisballismus as a result of stroke occurs in only about 0.45 cases per hundred thousand stroke victims. Even at such a small rate, stroke is by far the most common cause of hemiballismus. A stroke causes tissue to die due to a lack of oxygen resulting from an impaired blood supply. In the basal ganglia, this can result in the death of tissue that helps to control movement. 
As a result, the brain is left with damaged tissue that sends damaged signals to the skeletal muscles in the body. The result is occasionally a patient with hemiballismus.\n\nTraumatic Brain Injury\n", "During an ischemic stroke, a lack of oxygen and glucose leads to a breakdown of the sodium-calcium pumps on brain cell membranes, which in turn results in a massive buildup of sodium and calcium intracellularly. This causes a rapid uptake of water and subsequent swelling of the cells. It is this swelling of the individual cells of the brain that is seen as the main distinguishing characteristic of cytotoxic edema, as opposed to vasogenic edema, wherein the influx of fluid is typically seen in the interstitial space rather than within the cells themselves. While not all patients who have experienced a stroke will develop a severe edema, those who do have a very poor prognosis.\n", "Congenital heart defects may also cause brain ischemia due to the lack of appropriate artery formation and connection. People with congenital heart defects may also be prone to blood clots.\n\nOther events that may result in brain ischemia include cardiorespiratory arrest, stroke, and severe irreversible brain damage.\n\nRecently, Moyamoya disease has also been identified as a potential cause for brain ischemia. Moyamoya disease is an extremely rare cerebrovascular condition that limits blood circulation to the brain, consequently leading to oxygen deprivation.\n\nSection::::Pathophysiology.\n", "The second most common cause of coma, which makes up about 25% of cases, is lack of oxygen, generally resulting from cardiac arrest. The Central Nervous System (CNS) requires a great deal of oxygen for its neurons. Oxygen deprivation in the brain, also known as hypoxia, causes sodium and calcium from outside of the neurons to decrease and intracellular calcium to increase, which harms neuron communication. Lack of oxygen in the brain also causes ATP exhaustion and cellular breakdown from cytoskeleton damage and nitric oxide production.\n", "In severe cases it is extremely important to act quickly. Brain cells are very sensitive to reduced oxygen levels. Once deprived of oxygen they will begin to die off within five minutes.\n\nSection::::Prognosis.\n\nMild and moderate cerebral hypoxia generally has no impact beyond the episode of hypoxia; on the other hand, the outcome of severe cerebral hypoxia will depend on the success of damage control, amount of brain tissue deprived of oxygen, and the speed with which oxygen was restored.\n", "Cerebral circulation is the movement of blood through the network of cerebral arteries and veins supplying the brain. The rate of the cerebral blood flow in the adult is typically 750 milliliters per minute, representing 15% of the cardiac output. The arteries deliver oxygenated blood, glucose and other nutrients to the brain, and the veins carry deoxygenated blood back to the heart, removing carbon dioxide, lactic acid, and other metabolic products. Since the brain is very vulnerable to compromises in its blood supply, the cerebral circulatory system has many safeguards including autoregulation of the blood vessels and the failure of these safeguards can result in a stroke. The amount of blood that the cerebral circulation carries is known as cerebral blood flow. The presence of gravitational fields or accelerations also determine variations in the movement and distribution of blood in the brain, such as when suspended upside-down. 
\n", "The initial hypoxia (decreased oxygen flow) or ischemia (decreased blood flow) can occur for a number of reasons. Fetal blood vessels are thin-walled structures, and it is likely that the vessels providing nutrients to the periventricular region cannot maintain a sufficient blood flow during episodes of decreased oxygenation during development. Additionally, hypotension resulting from fetal distress or cesarean section births can lead to decreased blood and oxygen flow to the developing brain. These hypoxic-ischemic incidents can cause damage to the blood brain barrier (BBB), a system of endothelial cells and glial cells that regulates the flow of nutrients to the brain. A damaged BBB can contribute to even greater levels of hypoxia. Alternatively, damage to the BBB can occur due to maternal infection during fetal development, fetal infections, or infection of the newly delivered infant. Because their cardiovascular and immune systems are not fully developed, premature infants are especially at risk for these initial insults.\n", "Cerebral cortical function (e.g. communication, thinking, purposeful movement, etc) is lost while brainstem functions (e.g. breathing, maintaining circulation and hemodynamic stability, etc) are preserved.  Non-cognitive upper brainstem functions such as eye-opening, occasional vocalizations (e.g. crying, laughing), maintaining normal sleep patterns, and spontaneous non-purposeful movements often remain intact.   \n", "The brain, however, appears to accumulate ischemic injury faster than any other organ. Without special treatment after circulation is restarted, full recovery of the brain after more than 3 minutes of clinical death at normal body temperature is rare. Usually brain damage or later brain death results after longer intervals of clinical death even if the heart is restarted and blood circulation is successfully restored. Brain injury is therefore the chief limiting factor for recovery from clinical death.\n", "In addition to damaging effects on brain cells, ischemia and infarction can result in loss of structural integrity of brain tissue and blood vessels, partly through the release of matrix metalloproteases, which are zinc- and calcium-dependent enzymes that break down collagen, hyaluronic acid, and other elements of connective tissue. Other proteases also contribute to this process. The loss of vascular structural integrity results in a breakdown of the protective blood brain barrier that contributes to cerebral edema, which can cause secondary progression of the brain injury.\n\nSection::::Pathophysiology.:Hemorrhagic.\n", "The category of \"brain death\" is seen as problematic by some scholars. For instance, Dr. Franklin Miller, senior faculty member at the Department of Bioethics, National Institutes of Health, notes: \"By the late 1990s... the equation of brain death with death of the human being was increasingly challenged by scholars, based on evidence regarding the array of biological functioning displayed by patients correctly diagnosed as having this condition who were maintained on mechanical ventilation for substantial periods of time. These patients maintained the ability to sustain circulation and respiration, control temperature, excrete wastes, heal wounds, fight infections and, most dramatically, to gestate fetuses (in the case of pregnant \"brain-dead\" women).\"\n", "The brains of many different organisms have been kept alive in vitro for hours, or in some cases days. 
The central nervous system of invertebrate animals is often easily maintained as they need less oxygen and to a larger extent get their oxygen from CSF; for this reason their brains are more easily maintained without perfusion. Mammalian brains, on the other hand, have a much lesser degree of survival without perfusion and an artificial blood perfusate is usually used.\n", "A neonatal stroke in the developing brain involves excitotoxicity, oxidative stress, and inflammation, which accelerate cell death through necrosis or apoptosis, depending on the region of the brain and severity of stroke. The pathophysiology of neonatal stroke may include thrombosis and thrombolysis, and vascular reactivity. Apoptosis mechanisms may have a more prominent role in developing an ischemic brain injury in neonatal humans than in adult brain ischemia, as a majority of cells die in the environment where edema developed after a neonatal stroke. There is an increased inflammatory response after hypoxia-ischemia, which corresponds to extensive neuronal apoptosis. Apoptosis involves the mitochondrial release of cytochrome c and apoptosis-inducing factor (AIF), which activate caspase-dependent and –independent execution pathways, respectively. Injury may also occur due to O accumulation via the production of O by microglia, a type of glial cell that are responsible for immune response in the CNS, but their role in injury after neonatal stroke is still relatively unknown. As observed by Alberi, \"et al.\", progressive atrophy in the ipsilateral hemisphere over three weeks after the stroke occurred, suggesting that a neonatal stroke has long-lasting effects on neuronal viability and the potential for a prolonged therapeutic window for alleviating the progression of cell death.\n", "A decrease in circulation in the brain vasculature due to stroke or injury can lead to a condition known as ischemia. In general, decrease in blood flow to the brain can be a result of thrombosis causing a partial or full blockage of blood vessels, hypotension in systemic circulation (and consequently the brain), or cardiac arrest. This decrease in blood flow in the cerebral vascular system can result in a buildup of metabolic wastes generated by neurons and glial cells and a decrease in oxygen and glucose delivery to them. As a result, cellular energy failure, depolarization of neuronal and glial membranes, edema, and excess neurotransmitter and calcium ion release can occur. This ultimately ends with cell death, as cells succumb to a lack of nutrients to power their metabolism and to a toxic brain environment, full of free radicals and excess ions that damage normal cell organelle function.\n", "Traumatic brain injury may cause a range of serious coincidental complications that include cardiac arrhythmias and neurogenic pulmonary edema. These conditions must be adequately treated and stabilised as part of the core care.\n", "With due regard for the cause of the coma, and the rapidity of its onset, testing for the purpose of diagnosing death on brainstem death grounds may be delayed beyond the stage where brainstem reflexes may be absent only temporarily – because the cerebral blood flow is inadequate to support synaptic function although there is still sufficient blood flow to keep brain cells alive and capable of recovery. 
There has recently been renewed interest in the possibility of neuronal protection during this phase by use of moderate hypothermia and by correction of the neuroendocrine abnormalities commonly seen in this early stage.\n" ]
[]
[]
[ "normal" ]
[ "Brain cells die more quickly than other body cells without oxygen/circulation." ]
[ "false presupposition", "normal" ]
[ "Heart tissue also dies really fast." ]
2018-12255
Why is 1066 and the Norman Invasion so significant?
One of the most obvious changes was the language. Before the Norman Conquest, Old English was the current version of the language. English is a Germanic language, but we also have many Latin-based words, many of which came to us as a result of the Conquest by the Old French-speaking Normans (French is a Romance language, meaning it evolved from Latin origins). This began the period of the language known as Middle English, which eventually gave way to Early Modern English (the language of Shakespeare and the King James Bible) and then to Modern English later still. As you may know, Early Modern English is often very easy for English speakers today to understand. If you've ever read Shakespeare, you will probably have noticed that there are many archaic words and phrases that might lead to some confusion, but the general meaning is pretty easy to figure out. Going a step back, Middle English becomes far more difficult, even though there are a lot of familiar words and some sentences can be interpreted from context. That all changes once you get to Old English. Aside from the very different grammar, few of the words are intelligible to Modern English speakers. It would probably be easier for you to read a book in Spanish than in Old English, even if you've never studied either language and speak only Modern English. This is partly because of the enormous language shifts that were set in motion following the Conquest; as Spanish is another Romance language, our own language now shares many similarities with it.
[ "Section::::Participation in the Norman Conquest.\n", "BULLET::::- August – William invades Scotland, reaching the River Tay.\n\nBULLET::::- At Abernethy, King Malcolm III of Scotland submits to William.\n\nBULLET::::- Bishop of Lincoln raised to diocesan status. Construction of Lincoln Cathedral begins.\n\nBULLET::::- 1073\n\nBULLET::::- Rebuilding of St Augustine's Abbey in Canterbury.\n\nBULLET::::- 1074\n\nBULLET::::- Roger de Montgomerie is created Earl of Shrewsbury, and invades Wales, reaching as far as Powys.\n\nBULLET::::- 1075\n\nBULLET::::- Revolt of the Earls: three earls rebel against William in the last serious act of resistance to the Norman conquest of England.\n", "In the 20th and 21st centuries historians have focused less on the rightness or wrongness of the conquest itself, instead concentrating on the effects of the invasion. Some, such as Richard Southern, have seen the conquest as a critical turning point in history. Southern stated that \"no country in Europe, between the rise of the barbarian kingdoms and the 20th century, has undergone so radical a change in so short a time as England experienced after 1066\". Other historians, such as H. G. Richardson and G. O. Sayles, believe that the transformation was less radical. In more general terms, Singman has called the conquest \"the last echo of the national migrations that characterized the early Middle Ages\". The debate over the impact of the conquest depends on how change after 1066 is measured. If Anglo-Saxon England was already evolving before the invasion, with the introduction of feudalism, castles or other changes in society, then the conquest, while important, did not represent radical reform. But the change was dramatic if measured by the elimination of the English nobility or the loss of Old English as a literary language. 
Nationalistic arguments have been made on both sides of the debate, with the Normans cast as either the persecutors of the English or the rescuers of the country from a decadent Anglo-Saxon nobility.\n", "BULLET::::- Construction of Richmond Castle in North Yorkshire by Alan Rufus begins.\n\nBULLET::::- Jews from Rouen, in Normandy, settle in England at the invitation of the King.\n\nBULLET::::- 1071\n\nBULLET::::- William defeats Hereward the Wake's rebellion on the Isle of Ely.\n\nBULLET::::- Edwin, Earl of Mercia, again rebels against William but is betrayed and killed, leading to the re-distribution of land within Mercia to William's subjects.\n\nBULLET::::- 1072\n\nBULLET::::- 27 May – the Accord of Winchester establishes the primacy of the Archbishop of Canterbury over the Archbishop of York in the Church of England.\n", "BULLET::::- William I, in a letter, refuses to accept Pope Gregory VII as his overlord.\n\nBULLET::::- 1081\n\nBULLET::::- William campaigns in Wales, reaching as far as St David's.\n\nBULLET::::- Construction of Ely Cathedral begins.\n\nBULLET::::- 1082\n\nBULLET::::- Odo of Bayeux arrested, and forfeits his Earldom and estates.\n\nBULLET::::- Bayeux Tapestry completed.\n\nBULLET::::- 1083\n\nBULLET::::- William faces a revolt in the province of Maine in Normandy.\n\nBULLET::::- 1084\n\nBULLET::::- Construction of Worcester Cathedral begins.\n\nBULLET::::- 1085\n\nBULLET::::- Threatened invasion from Denmark aborted after a rebellion there.\n\nBULLET::::- 25 December – William commissions the Domesday Book.\n\nBULLET::::- 1086\n", "Section::::Military impact.\n", "Section::::Invasion and the early Norman period (1066-1100).\n", "Debate over the conquest started almost immediately. The \"Anglo-Saxon Chronicle\", when discussing the death of William the Conqueror, denounced him and the conquest in verse, but the king's obituary notice from William of Poitiers, a Frenchman, was full of praise. Historians since then have argued over the facts of the matter and how to interpret them, with little agreement. The theory or myth of the \"Norman yoke\" arose in the 17th century, the idea that Anglo-Saxon society had been freer and more equal than the society that emerged after the conquest. This theory owes more to the period in which it was developed than to historical facts, but it continues to be used to the present day in both political and popular thought.\n", "as a detailed narrative of the Norman Conquest, Freeman's book has never been superseded, and it is those best versed in the history of eleventh-century England who are most conscious of its value. \n", "Between 1016 and 1042 England was ruled by Danish kings but the Anglo-Saxons then regained control until 1066.\n\nSection::::Medieval.:Normans.\n\nThe Norman invasion of Britain is normally considered the last successful attempt in history by a foreign army to take control of the Kingdom of England by means of military occupation. From the Norman point of view, William the Conqueror was considered the legitimate heir to the realm (as explained in the Bayeux Tapestry), and the invasion was required to secure this against the usurpation of Harold Godwinson.\n", "BULLET::::- 1042 Death of Harthacnut, Edward the Confessor accedes to the English throne\n\nBULLET::::- 1057 Death of Macbeth, Lulach accedes to the Scottish throne\n\nBULLET::::- 1058 Death of Lulach, Malcolm III accedes to the Scottish throne\n\nBULLET::::- 1066 Death of Edward the Confessor in January, Harold II accedes to the English throne. 
Norman invasion and conquest of England, Harold II is killed and William the Conqueror becomes King of England\n\nBULLET::::- 1078 Work commenced on Tintern Abbey\n\nBULLET::::- 1086 Work commences on the Domesday Book\n\nBULLET::::- 1087 Death of William the Conqueror\n", "BULLET::::- August 1 – King William I (the Conqueror) calls for a meeting at Old Sarum, where he invites his major vassals and tenants-in-chief to swear allegiance to him. The oath is known as the Oath of Salisbury.\n\nBULLET::::- The Domesday Book is completed, which is drawn up on the orders of William I. It describes in detail the landholdings and resources in England.\n\nBULLET::::- The population in England is estimated to be 1.25 million citizens with 10% living in boroughs.\n\nSection::::Events.:By place.:Seljuk Empire.\n", "\"Memorable\" events in English history include the Disillusion of the Monasteries (Chapter XXXI); the struggle between the Cavaliers (characterised as \"Wrong but Wromantic\") and the Roundheads (characterised as \"Right but Repulsive\") in the English Civil War (Chapter XXXV); and The Industrial Revelation (Chapter XLIX).\n", "Marjorie Chibnall added that in his knowledge of medieval chronicles Freeman had no rival. As a set-off to this list Barlow noted Freeman's dogmatism, pugnacity and indifference to various subjects he considered irrelevant to his survey of 11th century England: theology, philosophy, and most of the arts.\n", "Section::::Norman invasion.\n", "The issue of continuity and change in post conquest England is a topic of significant debate in scholarship. By 1086, there were very few Englishmen among the 200 or so major landowners recorded in the Domesday Book. Normans, Flemings, Bretons and others had settled on the estates of dead, dispossessed or outlawed English nobility. Contemporary chroniclers were divided, with Henry of Huntingdon writing that the English people had been \"delivered [up] for destruction by the violent and cunning Norman people\", while William of Poitiers lauded the Norman victory at the Battle of Hastings and said the slaughter of the English had been just punishment for Harold Godwinson \"perjury\". \n", "The \"Valor\" is a document of the first importance for historians of the later mediæval and Tudor church, the English Reformation, and the Dissolution. It is also valuable to economic historians of the period.\n\nSection::::Bibliography and references.\n\nBULLET::::- \"Abbeys and Priories in England and Wales\", Bryan Little, Batsford 1979\n\nBULLET::::- \"The Abbeys and Priories of Medieval England\", Colin Platt, Secker & Warburg 1984\n\nBULLET::::- \"Bare Ruined Choirs\", David Knowles, Cambridge University Press 1959\n\nSection::::External links.\n\nBULLET::::- Title page of the \"Valor\" from the National Archives showing King Henry\n", "Section::::Background.\n\nSection::::Background.:Norman conquest of England.\n", "The Norman conquest introduced a ruling class over England who displaced English land owners and clergy, and who spoke only Anglo-Norman, though it is likely many if not most were conversant in English from the second generation onwards. William of Malmesbury, a chronicler of mixed Anglo-Norman descent writing in the twelfth century, described the Battle of Hastings as: \"That fatal day for England, the sad destruction of our dear country [\"dulcis patrie\"]\". He also lamented: \"England has become the habitation of outsiders and the dominion of foreigners. 
Today, no Englishman is earl, bishop, or abbot, and newcomers gnaw away at the riches and very innards of England; nor is there any hope for an end of this misery\". Another chronicler, Robert of Gloucester, writing in the mid to late thirteenth century and speaking in part of earlier centuries:\n", "In the second of this three-part series, Professor Robert Bartlett explores the impact of the Norman conquest of Britain and Ireland. Bartlett shows how William the Conqueror imposed a new aristocracy, savagely cut down opposition and built scores of castles and cathedrals to intimidate and control. He also commissioned the Domesday Book, the greatest national survey of England that had ever been attempted. England adapted to its new masters and both the language and culture were transformed as the Normans and the English intermarried. Bartlett shows how the political and cultural landscape of Scotland, Wales and Ireland were also forged by the Normans and argues that the Normans created the blueprint for colonialism in the modern world.\n", "Frank Barlow saw Freeman's influence as being profound. Modern historians, Frank Stenton and Ann Williams among them, have again come to share some of his beliefs, including the existence of a degree of historical continuity across the Norman Conquest, and to view English and Norman events in the broader context of European history. In 1967 R. Allen Brown called the \"History\" \"a notorious high-water mark in studies of 1066\". In the present century Anthony Brundage and Richard A. Cosgrove have been more reprehensive, writing that the \"History\"’s \n", "Freeman aimed his \"History\" at both specialists and non-specialists. In an 1867 letter he wrote that \n\nI have to make my text a narrative which I hope may be intelligible to girls and curates, and in an appendix to discuss the evidence for each point in a way which I hope may be satisfactory to Gneist and Stubbs. \n", "One of William's themes in the \"Gesta Pontificum Anglorum\", as in his \"Gesta Regum Anglorum\", is that the Normans' invasion and conquest of England saved the English and rescued their civilization from the barbarities of the native English and restored England to the Latin culture of the continent. One aspect of this theme was William's reluctance to give Anglo-Saxon names in their native form, instead Latinizing them.\n", "The Norman invasion of Britain in 1066 is usually considered to be the beginning of a new era in English history. William, Duke of Normandy, defeated English king Harold Godwinson at the Battle of Hastings. Having conquered Hampshire and Kent, William and his army turned to London. Having failed to cross London bridge at Southwark, William's army marched clockwise around London and waited to the north-west at Berkhamsted, where, having realised that resistance was pointless, a delegation from London arrived to surrender the city and recognise William as King. William soon granted a charter for London in 1067 which upheld previous Saxon rights, privileges and laws. 
\n", "BULLET::::- 1 August – Domesday Book is presented to William at Old Sarum.\n\nBULLET::::- 1087\n\nBULLET::::- 9 September – William I of England (William the Conqueror) dies at Rouen while on campaign in northern France; his first son Robert succeeds him as Robert II, Duke of Normandy whilst his second son succeeds him on the English throne as William II of England.\n\nBULLET::::- 26 September – coronation of William II at Westminster Abbey.\n\nBULLET::::- 25 December – Odo of Bayeux re-instated as Earl of Kent.\n\nBULLET::::- An early fire of London destroys much of the city including St Paul's Cathedral.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-04608
We have two ears, so surely two decent headphone drivers and some good software on the part of the game or film can perfectly replicate surround sound? Why do headphones come out with "virtual surround sound"?
It is possible to have more or less "perfect" surround sound with headphones - it's called binaural recording. But it requires the recording to be made live and in a special way: two microphones are placed inside a dummy head complete with dummy ears to simulate exactly how the sound would reach our eardrums. Binaural recordings also have to be played back on headphones or earbuds - not speakers - for optimal effect. Music and movie soundtracks are engineered for optimal playback on home theater/stereo systems, where some of the sound from each speaker always reaches both ears, but after reflecting around the room and getting "filtered" through your ear structure. Virtual surround sound on headphones simulates this effect to convey the illusion that the sound is actually coming from speakers all around you instead of the typical "inside your head" sound from headphones.
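The mechanism described above can be sketched in code. The following is a minimal illustration (not part of the original answer) of the core operation behind both binaural playback and HRTF-based virtual surround: convolving a dry mono signal with a measured head-related impulse response (HRIR) pair for one direction. The file names and the use of NumPy/SciPy are assumptions for the example; a real renderer would repeat this per surround channel and mix the results.

```python
# Minimal binaural "virtual speaker" sketch.
# Assumed inputs: a mono WAV plus an HRIR pair measured for one direction;
# the file names below are placeholders, not a real dataset.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs, source = wavfile.read("mono_source.wav")          # dry mono signal
_, hrir_left = wavfile.read("hrir_left_30deg.wav")    # left-ear impulse response
_, hrir_right = wavfile.read("hrir_right_30deg.wav")  # right-ear impulse response

src = source.astype(np.float64)

# Convolution imposes the interaural time/level differences and the
# ear/head filtering that make the source seem to come from one direction.
left = fftconvolve(src, hrir_left.astype(np.float64))
right = fftconvolve(src, hrir_right.astype(np.float64))

stereo = np.stack([left, right], axis=1)
stereo /= np.max(np.abs(stereo))                      # normalize to avoid clipping
wavfile.write("binaural_out.wav", fs, (stereo * 32767).astype(np.int16))
```

A full virtual-surround implementation repeats this convolution for each of the 5.1/7.1 speaker positions and usually adds simulated room reflections, which is what pushes the sound image "outside the head".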
[ "Section::::Implications.\n\nUnderstanding visual capture has the potential to lead to numerous benefits in the future. Beyond solving people’s pain in phantom limb syndrome, there are numerous potential applications for visual capture. Already, there have been surround-sound systems built to provide unique listening experiences, that “put you right in the middle of the action”. However, it is more than just having sound come from every direction, but the improvements in visual quality of movies, and where sound and vision can be localized best to provide a coherent movie-going experience.\n\nSection::::See also.\n\nBULLET::::- Lip reading\n\nBULLET::::- Rubber Hand Illusion\n\nBULLET::::- Ocular dominance\n", "Some methods use knowledge of head-related transfer function (HRTF). With an appropriate HRTF the signals required at the eardrums for the listener to perceive sound from any direction can be calculated. These signals are then recreated at the eardrum using either headphones or a crosstalk calculation method. The disadvantage of this approach is that it is very difficult to get these systems to work for more than one listener at a time.\n\nSection::::Types.:Using reflections.\n", "Most surround sound recordings are created by film production companies or video game producers; however some consumer camcorders have such capability either built-in or available separately. Surround sound technologies can also be used in music to enable new methods of artistic expression. After the failure of quadraphonic audio in the 1970s, multichannel music has slowly been reintroduced since 1999 with the help of SACD and DVD-Audio formats. Some AV receivers, stereophonic systems, and computer soundcards contain integral digital signal processors and/or digital audio processors to simulate surround sound from a stereophonic source (see fake stereo).\n", "Section::::How it works.:Auditory senses.\n\nA headset is placed around the ears of a patient that allows for sound to be heard. Generally, full surround-sound is desirable as it gives the patient a sense of space. For example, if a virtual person standing in front of the patient was speaking, the voice would be clear and audible. However, if a virtual person was standing to the left or right of the patient, the sound would be much quieter and muffled. Surround sound aids in giving a lifelike feeling to the VR environment.\n\nSection::::How it works.:Olfactory senses.\n", "The first fully operational venues were equipped in autumn 2010, in Araújo Cinemas, Maringá, Brazil. By May 2012, the installations worldwide include cinema theatres in the United States, China, Ireland, Germany, Austria, France, Brazil, Korea, Japan, Italy, and Spain. These cinemas play both native imm sound 3D content and optionally use 3D upmix in real-time for alternative content. The Impossible, mixed using imm sound technology, will be released in October 2012.\n\nSection::::Technology.\n", "Some amusement parks have created attractions based around the principles of 3-D audio. One example is \"Sounds Dangerous!\" at Disney's Hollywood Studios at the Walt Disney World Resort in Florida. Guests wear special earphones as they watch a short film starring comedian Drew Carey. At a point in the film, the screen goes dark while a 3-D audio sound-track immerses the guests in the ongoing story. To ensure that the effect is heard properly, the earphone covers are color-coded to indicate how they should be worn. 
This is not a generated effect but a binaural recording.\n", "Virtual surround is an audio system that attempts to create the perception that there are many more sources of sound than are actually present. In order to achieve this, it is necessary to devise some means of tricking the human auditory system into thinking that a sound is coming from somewhere that it is not. Most recent examples of such systems are designed to simulate the true (physical) surround sound experience using one, two or three loudspeakers. Such systems are popular among consumers who want to enjoy the experience of surround sound without the large number of speakers that are traditionally required to do so.\n", "The rise of Dolby Atmos and other 360° Audio film technology in relation to commercial entertainment has seen a rise in popularity of the use of Binaural simulation. This is with the purpose of fully adapting the 360° soundtrack for headphones and earphones. Users can watch 360° Audio films and music with the immersive surround sound experience remaining intact despite using just the two headset speakers. Notably, any full 360° multi-channel soundtrack is automatically converted to simulated binaural audio when listened to with headphones.\n", "Historically the simplicity of game environments reduced the required number of sounds needed, and thus only one or two people were directly responsible for the sound recording and design. As the video game business has grown and computer sound reproduction quality has increased, however, the team of sound designers dedicated to game projects has likewise grown and the demands placed on them may now approach those of mid-budget motion pictures.\n\nSection::::Music.\n", "Sound is generally the easiest sensation to implement with high fidelity, based on the foundational telephone technology dating back more than 130 years. Very high-fidelity sound equipment has also been available for a considerable period of time, with stereophonic sound being more convincing than monaural sound.\n\nSection::::Implementation.:Implementation of human sensory elements.:Manipulation.\n", "Dolby Headphone is incorporated into the audio decoders packaged with surround headphones including:\n\nBULLET::::- Razer Thresher 7.1\n\nBULLET::::- Razer Thresher Ultimate\n\nBULLET::::- HyperX Cloud Revolver S\n\nBULLET::::- Astro Gaming A40 System\n\nBULLET::::- Astro Gaming A50 System\n\nBULLET::::- Logitech G430\n\nBULLET::::- Logitech G35\n\nBULLET::::- Logitech G930\n\nBULLET::::- Logitech G933\n\nBULLET::::- Logitech G633\n\nBULLET::::- Plantronics GameCom Commander\n\nBULLET::::- Plantronics Gamecom 777\n\nBULLET::::- Plantronics Gamecom 780\n\nBULLET::::- Plantronics GameCom 788\n\nBULLET::::- Plantronics RIG 500E\n\nBULLET::::- Turtle Beach Systems Ear Force DXL1\n\nBULLET::::- Turtle Beach Systems Ear Force X41\n\nBULLET::::- Turtle Beach Systems Ear Force X42\n\nBULLET::::- Turtle Beach Systems Ear Force Recon 320\n", "Auro-3D is designed along three layers of sound (Surround, height and overhead ceiling) rather than the single horizontal layer used in the traditional 5.1 sound format. It creates a spatial sound experience by adding a height layer around the audience on top of the traditional 2D Surround sound system. 
This extra layer reveals both localized sounds and height reflections which are crucial for our brains to better interpret the sounds that exist in the lower Surround layer.\n", "Section::::Types.\n\nA virtual surround system must provide a means for 2-dimensional imaging of sound, using some properties of the human auditory system. The way that the auditory system localises a sound source is a topic that is studied in the field of psychoacoustics. Thus, virtual surround systems use knowledge of psychoacoustics to \"trick\" the listener. There are several ways in which this has been attempted.\n\nSection::::Types.:Using HRTFs.\n", "BULLET::::- Dolby Headphone: an implementation of virtual surround, simulating 5.1 surround sound in a standard pair of stereo headphones.\n\nBULLET::::- Dolby Virtual Speaker: simulates 5.1 surround sound in a setup of two standard stereo speakers.\n", "Realtek, a manufacturer of integrated HD audio codecs, has a product similar to ALchemy called 3D SoundBack. C-Media, a manufacturer of PC sound card chipsets, also has a solution called Xear3D EX, although it works instead by intercepting DirectSound3D calls transparently in the background without any user intervention.\n\nSection::::OS Support.:Windows 8.\n", "Another popular example of visual capture happens while watching a movie in a theater, and the sound appears to be coming from the actor's lips. Although this may seem true, the sound is actually coming from the speakers, often spread out across the theater rather than directly behind wherever the character's mouth may be.\n", "Using head-related transfer functions and reverberation, the changes of sound on its way from the source (including reflections from walls and floors) to the listener's ear can be simulated. These effects include localization of sound sources behind, above and below the listener.\n\nSome 3D technologies also convert binaural recordings to stereo recordings. MorrowSoundTrue3D converts binaural, stereo, 5.1 and other formats to 8.1 single and multiple zone 3D sound experiences in realtime.\n\n3D Positional Audio effects emerged in the 1990s in PC and Game Consoles.\n", "Pre-decoding to 5.1 media has been known as \"G-Format\" during the early days of DVD audio, although the term is not in common use anymore.\n\nThe obvious advantage of pre-decoding is that any surround listener is able to experience Ambisonics; no special hardware is required beyond that found in a common home theatre system. The main disadvantage is that the flexibility of rendering a single, standard Ambisonics signal to any target speaker array is lost: the signal assumes a specific \"standard\" layout and anyone listening with a different array may experience a degradation of localisation accuracy.\n", "Sotaro Tojima, best known for his work on Konami's \"\" and \"\", served as \"Halo 4\"'s audio director. The team performed many live audio recording sessions, several of which occurred in Tasmania, Australia. Some of these recording sessions took place in generally inhospitable environments, such as underwater, in fire, and in ice, through the use of specially designed microphones; other recording sessions have utilized \"home made\" explosives. 
Tojima intended for the game's audio to be clearly grounded in the \"Halo\" universe, while also having a more realistic quality than in past titles.\n", "The history of electronic music includes the evolution of multi-channel playback in concert (arguably the real roots of \"surround sound\" for cinema) and for a considerable time the 8-channel format was a de facto standard. This standardisation was fostered, in great measure, by the development of professional and semi-professional 8-track tape recorders—originally analog, but later manifesting in proprietary cassette formats by Alesis and Tascam. The speaker configuration, however, is much less traditional, and unlike cinematic reproduction systems, there is no hard-and-fast \"standard\". In fact, composers took (and to some extent still take) considerable interest in experimenting with speaker layouts. In these experiments, the goal is not limited to creating \"realistic\" playback of believably natural sonic environments. Rather, the goals are often simply to experience and understand the psychoacoustics effect created by variations on source and imaging.\n", "Section::::Current development.:Use in gaming.\n\nHigher-order Ambisonics has found a niche market in video games developed by Codemasters. Their first game to use an Ambisonic audio engine was , however, this only used Ambisonics on the PlayStation 3 platform. Their game extended the use of Ambisonics to the Xbox 360 platform, and uses Ambisonics on all platforms including the PC.\n\nThe recent games from Codemasters, F1 2010, Dirt 3, F1 2011 and , use fourth-order Ambisonics on faster PCs, rendered by Blue Ripple Sound's Rapture3D OpenAL driver and pre-mixed Ambisonic audio produced using Bruce Wiggins' WigWare Ambisonic Plug-ins.\n\nSection::::Patents and trademarks.\n", "In the field of console-gaming, there has been very little in the way of audio-games. One notable exception has been the innovative incorporation of strong audio elements in several of the games produced by the Japanese video game company, WARP. WARP (formerly EIM) was founded by musician Kenji Eno and consisted of a five-man team including first-time designer Fumito Ueda. In 1997, WARP developed a game called \"Real Sound\" for the Sega Saturn which was later ported to Dreamcast in 1999 and renamed . This game featured no visuals at all and was entirely dependent upon sound.\n", "Remote Control Productions has been responsible for the scores for a number of successful live-action films including the \"Pirates of the Caribbean\" movies, \"Iron Man\", \"Gladiator\", \"\", \"The Last Samurai\", \"Transformers\", \"Hancock\", \"Kingdom of Heaven\", \"The Da Vinci Code\", \"Inception\", \"Sherlock Holmes\" and its sequel, and \"The Dark Knight Trilogy\", along with successful animated films such as the \"Shrek\" series, \"Kung Fu Panda\", \"Madagascar\", \"The Lion King\", and more.\n", "BULLET::::- 2018 - Indywood Academy Awards - Best Re-recording Mixer - \"Padmaavat\"\n\nBULLET::::- 2019 - Zee Cine Awards - Best Audiography - \"Padmaavat\"\n\nNominations \n\nBULLET::::- 2008 - V. 
Shantaram Award - Best Sound Design for Marathi film \"Jogwa\"\n\nBULLET::::- 2008 - Zee Gaurav Puraskar - Best Sound Design for Marathi film \"Jogwa\"\n\nHonours & Recognitions \n\nBULLET::::- 2015 - Archdiocese of Thrissur's Youth Excellence Award\n\nSection::::External links.\n\nBULLET::::- Justin Jose - Nettv4u\n\nBULLET::::- Film Mixing and Sound design ~ Audio in films, and general mixing tips, tricks, techniques\n", "The song \"Propeller Seeds\" by English artist Imogen Heap was recorded using 3D audio.\n\nThere have been developments in using 3D audio for DJ performances including the world's first Dolby Atmos event on 23rd Jan 2016 held at Ministry of Sound, London. The event was a showcase of a 3D audio DJ set performed by Hospital Records owner Tony Colman aka London Elektricity. \n\nOther investigations included the Jago 3D Sound project which is looking at using Ambisonics combined with STEM music containers created and released by Native Instruments in 2015 for 3D nightclub sets.\n\nSection::::Software.\n\nBULLET::::- Waves NX\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-23942
Why is internet speed in underdeveloped/developing nations slower than that of developed nations?
It takes an investment ($$) to build out the infrastructure for high-speed internet, like fiber optic cables. Investors don't want to sink a lot of money into developing countries because they don't expect as great a return, i.e. profit.
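The investment logic in this answer can be made concrete with a toy payback-period calculation; every number below (build cost per subscriber, monthly price, margin) is an invented assumption for illustration, not market data.

```python
# Toy payback model for a fiber build-out; all figures are assumptions.
def payback_years(cost_per_subscriber, monthly_revenue, margin=0.5):
    """Years until the up-front build cost is recovered from operating margin."""
    yearly_margin = monthly_revenue * 12 * margin
    return cost_per_subscriber / yearly_margin

# Same assumed build cost, very different achievable subscription prices:
print(payback_years(cost_per_subscriber=700, monthly_revenue=50))  # ~2.3 years
print(payback_years(cost_per_subscriber=700, monthly_revenue=5))   # ~23.3 years
```

On these made-up numbers, a tenfold difference in what subscribers can pay turns a roughly 2-year payback into a roughly 23-year one, which is the return gap the answer is pointing at.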
[ "BULLET::::- \"A Nation Online: Entering the Broadband Age\", NTIS, U.S. Department of Commerce, September 2004.\n\nBULLET::::- Rumiany, D. (2007). \"Reducing the Global Digital Divide in Sub-Saharan Africa\". Posted on Global Envision with permission from Development Gateway, January 8, 2007. Retrieved July 17, 2009.\n\nBULLET::::- \"Telecom use at the Bottom of the Pyramid 2 (use of telecom services and ICTs in emerging Asia)\", LIRNEasia, 2007.\n\nBULLET::::- \"Telecom use at the Bottom of the Pyramid 3 (Mobile2.0 applications, migrant workers in emerging Asia)\", LIRNEasia, 2008-09.\n\nBULLET::::- \"São Paulo Special: Bridging Brazil's digital divide\", Digital Planet, \"BBC World Service\", October 2, 2008.\n", "Developing countries lag behind other nations in terms of ready access to the internet, though computer access has started to bridge that gap. Access to computers, or to broadband access, remains rare for half of the world's population. For example, as of 2010, on average of only one in 130 people in Africa had a computer while in North America and Europe one in every two people had access to the Internet. 90% of students in Africa had never touched a computer.\n", "Using previous studies (Gamos, 2003; Nsengiyuma & Stork, 2005; Harwit, 2004 as cited in James), James asserts that in developing countries, \"internet use has taken place overwhelmingly among the upper-income, educated, and urban segments\" largely due to the high literacy rates of this sector of the population. As such, James suggests that part of the solution requires that developing countries first build up the literacy/language skills, computer literacy, and technical competence that low-income and rural populations need in order to make use of ICT.\n", "\"By 2018, the government hopes to have 63 percent of the country connected to broadband. And according to 2013 GSMA mobile economy figures, there are already 43.9 million mobile connections and 24 million mobile users in a country whose 47 million people give it the third largest population in Latin America and third largest Spanish-speaking population in the world.\"\n", "Internet access is limited by the relation between pricing and available resources to spend. Regarding the latter, it is estimated that 40% of the world's population has less than US$20 per year available to spend on information and communications technology (ICT). In Mexico, the poorest 30% of the society counts with an estimated US$35 per year (US$3 per month) and in Brazil, the poorest 22% of the population counts with merely US$9 per year to spend on ICT (US$0.75 per month). From Latin America it is known that the borderline between ICT as a necessity good and ICT as a luxury good is roughly around the “magical number” of US$10 per person per month, or US$120 per year. This is the amount of ICT spending people esteem to be a basic necessity. Current Internet access prices exceed the available resources by large in many countries.\n", "BULLET::::- Lesson 2: When choosing the technology for a poverty intervention project, pay particular attention to infrastructure requirements, local availability, training requirements, and technical challenges. Simpler technology often produces better results.\n\nBULLET::::- Lesson 3: Existing technologies – particularly the telephone, radio, and television—can often convey information less expensively, in local languages, and to larger numbers of people than can newer technologies. 
In some cases, the former can enhance the capacity of the latter.\n\nBULLET::::- Lesson 4: ICT projects that reach out to rural areas might contribute more to the MDGs than projects based in urban areas.\n", "Of note a total of 83% of least developed country exports enter developed countries duty-free. In the developing world, 31% of the population use the Internet, compared with 77% of the developed world. In 2012 ODA of $126 billion was 4% less than in 2011, which was 2% less than in 2010. This is the first time since 1996-1997 that ODA fell in two consecutive years, while essential medicines are available in only 57% of public sector facilities and 65% of private facilities in selected developing countries. There are over six billion mobile phone subscriptions worldwide and for every person who uses the Internet from a computer, two do so from a mobile device. In South Africa, over 25,000 students have improved their math skills through interactive exercises and quizzes on mobile phones through cooperation between government, Nokia and individual schools and teachers.\n", "The upper graph of the Figure on the side shows that the divide between developed and developing countries has been diminishing when measured in terms of subscriptions per capita. In 2001, fixed-line telecommunication penetration reached 70% of society in developed OECD countries and 10% of the developing world. This resulted in a ratio of 7 to 1 (divide in relative terms) or a difference of 60% (divide in measured in absolute terms). During the next decade, fixed-line penetration stayed almost constant in OECD countries (at 70%), while the rest of the world started a catch-up, closing the divide to a ratio of 3.5 to 1. The lower graph shows the divide not in terms of ICT devices, but in terms of kbit/s per inhabitant. While the average member of developed countries counted with 29 kbit/s more than a person in developing countries in 2001, this difference got multiplied by a factor of one thousand (to a difference of 2900 kbit/s). In relative terms, the fixed-line capacity divide was even worse during the introduction of broadband Internet at the middle of the first decade of the 2000s, when the OECD counted with 20 times more capacity per capita than the rest of the world. This shows the importance of measuring the divide in terms of kbit/s, and not merely to count devices. The International Telecommunications Union concludes that \"the bit becomes a unifying variable enabling comparisons and aggregations across different kinds of communication technologies\".\n", "Access to computers is a dominant factor in determining the level of Internet access. In 2011, in developing countries, 25% of households had a computer and 20% had Internet access, while in developed countries the figures were 74% of households had a computer and 71% had Internet access. The majority of people in developing countries do not have Internet access. About 4 billion people do not have Internet access. When buying computers was legalized in Cuba in 2007, the private ownership of computers soared (there were 630,000 computers available on the island in 2008, a 23% increase over 2007).\n", "However, hurdles are still large. \"Of the 4.3 billion people not yet using the Internet, 90% live in developing countries. 
In the world's 42 Least Connected Countries (LCCs), which are home to 2.5 billion people, access to ICTs remains largely out of reach, particularly for these countries' large rural populations.\" ICT has yet to penetrate the remote areas of some countries, with many developing countries lacking any type of Internet. This also includes the availability of telephone lines, particularly the availability of cellular coverage, and other forms of electronic transmission of data. The latest \"Measuring the Information Society Report\" cautiously stated that the increase in the aforementioned cellular data coverage is ostensible, as \"many users have multiple subscriptions, with global growth figures sometimes translating into little real improvement in the level of connectivity of those at the very bottom of the pyramid; an estimated 450 million people worldwide live in places which are still out of reach of mobile cellular service.\"\n", "BULLET::::- Opening ceremony: Mr. Jomo Kwame Sundaram, Assistant Secretary-General for Economic Development at UNDESA noted that while Internet use was increasing, it was growing faster in the developed world than in developing regions and that the digital divide was growing instead of shrinking.\n", "(International Journal of Communication (19328036)) (Digital Divide in Colombia: The Role of Motivational and Material Access in the Use and Types of Use of ICTs).\n\nSection::::Colombia.:Physical Access.\n", "While developed countries with technological infrastructures were joining the Internet, developing countries began to experience a digital divide separating them from the Internet. On an essentially continental basis, they are building organizations for Internet resource administration and sharing operational experience, as more and more transmission facilities go into place.\n\nSection::::Development of wide area networking.:TCP/IP goes global (1980s).:The early global \"digital divide\" emerges.:Africa.\n\nAt the beginning of the 1990s, African countries relied upon X.25 IPSS and 2400 baud modem UUCP links for international and internetwork computer communications.\n", "Internet access has changed the way in which many people think and has become an integral part of people's economic, political, and social lives. The United Nations has recognized that providing Internet access to more people in the world will allow them to take advantage of the “political, social, economic, educational, and career opportunities” available over the Internet. Several of the 67 principles adopted at the World Summit on the Information Society convened by the United Nations in Geneva in 2003 directly address the digital divide. To promote economic development and a reduction of the digital divide, national broadband plans have been and are being developed to increase the availability of affordable high-speed Internet access throughout the world.\n", "Even though technology has become more and more affordable, there is still a disparity between poor people's access to the Internet and wealthy people's access to the Internet. This becomes an issue once children begin school, as kids in low-income school systems do not have access to technology, and so are not able to develop the technical skills they need to continue their education and later translate those skills to the workplace. This low level of computer literacy can be attributed to poor infrastructure and high costs to stay connected. 
In 2000, the entirety of Sub-Saharan Africa had fewer telephone lines than Manhattan as a whole.\n", "Another factor which affects access to the digital public sphere is the digital divide, which refers to how people from less developed countries tend to have less access to information and communications technologies compared to those from more developed countries. For example, the most developed regions of the world, such as North America and Western Europe, have the highest Internet penetration rates at over 80% each, while the least developed countries such as in Africa and South Asia have less than 30% each. On the other hand, the reduced cost and increasing availability of mobile devices such as smartphones throughout less developed regions is helping to reduce this disparity at an exponential rate. In just two years, between 2013 and 2015, the number of Internet users in developing nations has risen by 9%, according to the Pew Research Center. Other research has shown, though, that even within more developed countries like the United States, the digital divide continues to persist between upper and lower socioeconomic classes and between different education levels. Furthermore, scholars like Mark Warschauer argue that it is not just access to technology that matters, but the knowledge of how to put that technology to use in meaningful ways.\n", "It is estimated that 40% of the world's population has less than US$ 20 per year available to spend on ICT. In Brazil, the poorest 20% of the population has merely US$9 per year to spend on ICT (US$ 0.75 per month). In Mexico, the poorest 20% of society has an estimated US$ 35 per year (US$ 3 per month). For Latin America it is estimated that the borderline between ICT as a necessity good and ICT as a luxury good is roughly around the \"magical number\" of US$10 per person per month, or US$120 per year.\n", "BULLET::::- James, J. (2004). \"Information Technology and Development: A new paradigm for delivering the Internet to rural areas in developing countries\". New York, NY: Routledge. (print). (e-book).\n\nBULLET::::- Southwell, B. G. (2013). \"Social networks and popular understanding of science and health: sharing disparities\". Baltimore, MD: Johns Hopkins University Press. (book).\n\nBULLET::::- World Summit on the Information Society (WSIS), 2005. \"What's the state of ICT access around the world?\" Retrieved July 17, 2009.\n\nBULLET::::- World Summit on the Information Society (WSIS), 2008. \"ICTs in Africa: Digital Divide to Digital Opportunity\". Retrieved July 17, 2009.\n\nSection::::Further reading.\n", "Section::::Pakistan.:Solutions.\n\nBesides private efforts, public efforts put forth by the Pakistani government would help bridge the digital divide. There is an ambitious undertaking called the Universal Services Fund which would aim to provide broadband coverage to the whole nation by 2018.\n\nSection::::Philippines.\n", "differences in the uses of ICTs have important implications for life outcomes\". \"Rojas and Puig-i-abril (2009) found that the most prevalent activities of Internet users in Colombia were checking e-mail, consuming entertainment content, chatting with friends, and consuming news and information.\" There are different levels of \"use\" including non-use, low use, and frequent use while also taking into consideration the opportunities taken by the users. There is a difference between those who go online just for fun and those who use the internet for the improvement of themselves. 
\n", "According to 2011 estimates, about 13.5% of the African population has Internet access Internet in Africa. Africa accounts for 15% of the World population, but only 6.2% of the world's population is African. However, these statistics are skewed due to the fact that most of these Internet users come from South Africa, a country that has a much better infrastructure than the rest of the continent. The rest is mainly distributed among Morocco and Egypt, both countries that have better infrastructures than the majority of the countries in Africa, yet not as strong as that of South Africa. There have been many initiatives in the U.S. to push for better infrastructure which would eventually lead to better Internet access in Africa.\n", "There are projects worldwide that have implemented, to various degrees, the solutions outlined above. Many such projects have taken the form of Information Communications Technology Centers (ICT centers). Rahnman explains that \"the main role of ICT intermediaries is defined as an organization providing effective support to local communities in the use and adaptation of technology. Most commonly an ICT intermediary will be a specialized organization from outside the community, such as a non-governmental organization, local government, or international donor. On the other hand, a social intermediary is defined as a local institution from within the community, such as a community-based organization.\n", "The World Summit on the Information Society (2003 and 2005) and the World Summit for Sustainable Development (2002) recognized the importance of the ICT in narrowing the digital divide and attaining sustainable development, respectively. In view of the above, many countries established community access points with the intention of providing common access to ICT related services to rural communities. However, it has now been realized that community access points established to narrow the digital divide has not been able to capture the fragmented and underutilized knowledge of the poor and the disadvantaged communities. While many reasons could be attributed to this situation some of the significant causes are; inadequate exchange of experience related to ICT, weak linkages with stakeholders and low capacity of disadvantage communities in accessing and utilization of knowledge.\n", "There were roughly 0.6 billion fixed broadband subscribers and almost 1.2 billion mobile broadband subscribers in 2011. In developed countries people frequently use both fixed and mobile broadband networks. In developing countries mobile broadband is often the only access method available.\n\nSection::::Digital divide.:Bandwidth divide.\n", "Projects in marginalized rural areas face the most significant hurdles – but since people in marginalized rural areas are at the very bottom of the pyramid, development efforts should make the most difference in this sector. ICTs have the potential to multiply development effects and are thus also meaningful in the rural arena.\n\nHowever, introducing ICTs in these areas is also most costly, as the following barriers exist:\n\nBULLET::::- Lack of infrastructure: no electrical power, no running water, bad roads, etc.\n\nBULLET::::- Lack of health services: diseases like HIV, TB, malaria are more common.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-00960
How do emergency responders get access to gated communities & other restricted areas?
One of the most common ways is the use of a Knox Box. It's a small metal lock box. The first responder has a key to the box, and the keys to the building are inside. Google Knox Box, and you'll see what they look like, and you'll start seeing them everywhere you go.
[ "Neighbourhoods with \"physical\" or explicit gating with security checkpoints and patrols are extremely rare, being absent in even some of Canada's richest neighbourhoods such as Bridle Path, Toronto. Furthermore, municipal planning laws in many Canadian provinces ban locked gates on public roads as a public health issue since they deny emergency vehicles quick access.\n\nA noted exception in Canada is Arbutus Ridge, an age-restricted community constructed between 1988 and 1992 on the southeastern coast of Vancouver Island.\n", "They are popular in southern China, namely the Pearl River Delta Region, the most famous of which is Clifford Estates.\n\nIn Saudi Arabia, gated communities have existed since the discovery of oil, mainly to accommodate families from Europe or North America. After threat levels have increased since the late 1990s against foreigners in general and U.S. citizens in particular gates have become armed, sometimes heavily, and all vehicles have been inspected. Marksmen and Saudi Arabian National Guard armored vehicles appeared in certain times, markedly after recent terrorist attacks in areas near-by, targeting people from European or North American countries.\n", "BULLET::::- The public transport in closed cities may go with transit checkpoints or checking passes/passports at technical stops and available to the others outside the security area. Within a gated community in the best cases, the bus stop is opposite the checkpoint, a few offer free buses. Via gated communities buses go due to easement or good will.\n", "A local government agency, often a fire department, police department, or emergency management agency, agrees to sponsor CERT within its jurisdiction. The sponsoring agency liaises with, deploys and may train or supervise the training of CERT members. Many sponsoring agencies employ a full-time community-service person as liaison to the CERT members. In some communities, the liaison is a volunteer and CERT member.\n", "BULLET::::3. Security zone community: the security zone community is the most popular type of gated community in which it offers a housing development that is surrounded by fences or gates. This development is normally provided with guard services.\n\nBULLET::::4. Security zone community and lifestyle: this type of gated community housing development is usually developed within a city centre. It focuses on both the security aspect and the provision of lifestyle facilities for its residents.\n", "A team may self-activate (self-deploy) when their own neighborhood is affected by disaster. An effort is made to report their response status to the sponsoring agency. A self-activated team will size-up the loss in their neighborhood and begin performing the skills they have learned to minimize further loss of life, property, and environment. 
They will continue to respond safely until redirected or relieved by the sponsoring agency or professional responders on-scene.\n", "BULLET::::- In \"The Neighbors\", a 2012 TV series in the United States, a family has relocated to a gated townhouse community called \"Hidden Hills\" in New Jersey, only to discover that the entire community is populated by residents from another planet who identify themselves by the names of sport celebrities, patrol the community in golf carts, receive nourishment through their eyes and mind by reading books rather than eating, and cry green goo from their ears.\n\nBULLET::::- In \"Safe\", a Netflix original series based on a mystery in and around a gated community.\n", "Besides the services of gatekeepers, many gated communities provide other amenities. These may depend on a number of factors including geographical location, demographic composition, community structure, and community fees collected. When there are subassociations that belong to master associations, the master association may provide many of the amenities. In general, the larger the association the more amenities that can be provided.\n", "As people are trained and agree to join the community emergency response effort, a CERT is formed. Initial efforts may result in a team with only a few members from across the community. As the number of members grows, a single community-wide team may subdivide. Multiple CERTs are organized into a hierarchy of teams consistent with ICS principles. This follows the Incident Command System (ICS) principle of Span of control until the ideal distribution is achieved: one or more teams are formed at each neighborhood within a community.\n", "On 2 January 2015, the Integrated Checkpoints Command concept was introduced. Under the Integrated Checkpoints Command, ICA checkpoints are split into three domains: Land, Sea and Air. Each domain is headed by a Domain Commander who reports directly to the Commissioner. All Home Team agencies deployed at the checkpoints must report to the Domain Commander.\n\nIn March 2017, it was announced that Woodlands Town Centre would be absorbed by the new Woodlands Checkpoint extension.\n", "The Federal Relocation Arc consists of three layered networks of facilities, each of which is designed to be progressively more survivable and fortified than the previous. In the event that intelligence or rising tensions indicate that a serious emergency may soon develop, cabinet-level agencies of the United States government would activate three \"emergency teams\" sequentially lettered \"A\", \"B\", and \"C\". Each pre-designated emergency team generally consists of 60 to 100 staff who are capable of running the most critical functions of the government agency that they represent. Following an alert, an agency's \"A\" team would move to a secure underground facility located within or immediately adjacent to the agency's normal headquarters building in Washington, D.C. The \"B\" team would relocate to the High Point Special Facility in Virginia's Blue Ridge Mountains. The \"C\" team would set up office at a dedicated emergency facility that each agency maintains approximately 20–30 miles outside of Washington. \n", "Police officers stationed at \"kōban\" serve several roles:\n\nBULLET::::- Maps and directions – providing maps and directions to local addresses, sometimes even personally guiding those unfamiliar with local street layouts and addressing schemes. 
Additionally, officers can refer people to local hotels, restaurants, and other businesses.\n\nBULLET::::- Lost and found – accepting reports of lost items and accepting found items from members of the public and, if a matching lost item is turned in, notifying the owner of the item to come pick up the item.\n", "South African Police Service launched a pilot project in South Africa in 2007 to train private security in crowd control, securing crime scenes, reporting suspect vehicles, participate in weekly police briefings. Project participants will respond in an event of a major disaster. Unlike other jurisdictions, these security guards will also participate in a non-emergency situation. For example, they form part of the security plan for the 2010 FIFA World Cup.\n\nSection::::International Adoption.:United States.\n", "Mike Brazel\n\nRegion II: New York\n\nBULLET::::- Federal Emergency Management Agency\n\nBULLET::::- 26 Federal Plaza, Room 1307\n\nBULLET::::- New York, NY 10278-0002\n\nAreas:\n\nNew Jersey, New York, Puerto Rico,\n\nU.S. Virgin Islands\n\nNIMS Coordinator:\n\nMarshall Mabry\n\nRegion III: Philadelphia\n\nBULLET::::- Federal Emergency Management Agency\n\nBULLET::::- One Independence Mall, 6th Floor\n\nBULLET::::- 615 Chestnut Street\n\nBULLET::::- Philadelphia, PA 10106-4404\n\nAreas:\n\nDelaware, District of Columbia, Maryland, Pennsylvania,\n\nVirginia, West Virginia\n\nNIMS Coordinator:\n\nJohn Brasko\n\nRegion IV: Atlanta\n\nBULLET::::- Federal Emergency Management Agency\n\nBULLET::::- Federal Regional Center\n\nBULLET::::- 402 South Pinetree Boulevard\n\nBULLET::::- Thomasville, GA 31792\n\nAreas:\n\nAlabama, Florida, Georgia, Kentucky,\n", "Pre-Designated Incident Facilities: Response operations can form a complex structure that must be held together by response personnel working at different and often widely separate incident facilities. These facilities can include:\n", "Many smaller gated communities in Mexico are not officially classified as separate gated communities as many municipal rules prohibit closed off roads. Most of these small neighborhoods cater to lower middle income residents and offer a close perimeter and check points similar to an \"authentic\" gated community. This situation is tolerated and sometimes even promoted by some city governments due to the lack of capacity to provide reliable and trusted security forces.\n\nSection::::Examples.:New Zealand.\n\nIn New Zealand, gated communities have been developed in suburban areas of the main cities since the 1980s and 1990s.\n\nSection::::Examples.:Pakistan.\n", "BULLET::::- Region V, Chicago, IL Serving: IL, IN, MI, MN, OH, WI\n\nBULLET::::- Region VI, Denton, TX Serving: AR, LA, NM, OK, TX\n\nBULLET::::- Region VII, Kansas City, MO Serving: IA, KS, MO, NE\n\nBULLET::::- Region VIII, Denver, CO Serving: CO, MT, ND, SD, UT, WY\n\nBULLET::::- Region IX, Oakland, CA Serving: AZ, CA, HI, NV, GU, AS, CNMI, RMI, FM\n\nBULLET::::- Region IX, PAO Serving: American Samoa, CNMI, Guam, Hawaii\n\nBULLET::::- Region X, Bothell, WA Serving: AK (Alaska), ID, OR, WA\n\nSection::::Training courses overview.:Certifications.\n", "At border areas, such as Donegal, Louth and so on, local agreements are in place with the Northern Ireland Fire and Rescue Service in relation to attending incidents.\n\nSection::::Israel.\n", "BULLET::::- b. Supporting efforts to ensure First responders are prepared to respond to major events, especially prevention of and response to threatened terrorist attacks. 
First responders can be civilians who are members of a United States national service program under the jurisdiction of the Department of Homeland Security, such as the Citizen Corps (administered by FEMA's Individual and Community Preparedness Division) or a Community Emergency Response Team (CERT).\n\nSection::::Annex I.\n", "Section::::Tradition.\n", "NIMS Coordinator:\n\nTom Morgan\n\nRegion VIII: Denver\n\nBULLET::::- Federal Emergency Management Agency\n\nBULLET::::- Denver Federal Center\n\nBULLET::::- Building 710, Box 25267\n\nBULLET::::- Denver, CO 80225-0267\n\nAreas:\n\nColorado, Montana, North Dakota, South Dakota,\n\nUtah, Wyoming\n\nNIMS Coordinator:\n\nLanney Holmes\n\nRegion IX: Oakland\n\nBULLET::::- Federal Emergency Management Agency\n\nBULLET::::- 1111 Broadway, Suite 1200\n\nBULLET::::- Oakland, CA 94607-4052\n\nAreas:\n\nAmerican Samoa, Arizona, California, Guam, Hawaii, Nevada,\n\nMarianas Islands, Federated States of Micronesia\n\nRepublic of the Marshall Islands\n\nNIMS Coordinator:\n\nSusan Waller\n\nRegion X: Bothell\n\nBULLET::::- Federal Emergency Management Agency\n\nBULLET::::- Federal Regional Center\n\nBULLET::::- 130 228th Street, S.W.\n\nBULLET::::- Bothell, WA 98021-9796\n\nAreas:\n", "BULLET::::- Purpose-designed communities — catering to foreigners (e.g. worker compounds in Mid-West Asia, built largely for the oil industry)\n\nSection::::Comparison to closed cities.\n\nThe closed cities of Russia are different from the gated communities.\n\nBULLET::::- The guard duty in closed cities is free to residents (paid by taxes)\n", "States and some cities have their own civil defense measures, overseen by the Ministry of National Integration.\n\nSection::::Americas.:Canada.\n\nCanada's civil defense measures evolved over time. As with many other matters in Canada, responsibility is shared between the federal and provincial government. The first post-WWII civil defence co-ordinator was appointed in October 1948 \"to supervise the work of federal, provincial and municipal authorities in planning for public air-raid shelters, emergency food and medical supplies, and the evacuation of likely target areas\".\n", "Section::::Shooting and bombing incidents.\n\nIn light of active shooter events and the panic that can ensue, a lockdown alert issued as \"shelter in place\" can also be implemented as a response to armed events, such as the 2014 Fort Hood shooting. Similarly, a lockdown alert was issued as a \"Shelter-in-place\" alert by the Massachusetts Emergency Management Agency over the cell phone Wireless Emergency Alerts service for local residents during the manhunt for the Boston Marathon bombing suspects.\n\nSection::::Implementation.\n\nAll counties of the state are required to have a qualified emergency management director and all towns and cities are required to develop an emergency management program. A city or town may either have an emergency management director or create an agreement with their county for emergency management services. Regardless of level, the local emergency management director is responsible for the organization, administration, and operation of all such local organizations for emergency management within the director's territorial limits.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-02657
How do some types of weed have the same smell as a skunk?
Humans are incredibly sensitive to a type of molecule called thiols. Think of how sensitive a shark in the ocean is to blood; humans are hundreds of times more sensitive to thiols in the air. We can smell a skunk miles away. Thiols aren't exclusive to skunks. They also exist in cannabis.
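The sensitivity claim here (quantified as roughly 10 parts per billion in the passages below) can be illustrated with a back-of-the-envelope calculation; the thiol mass, room size, and molar volume in this sketch are assumptions chosen for the example.

```python
# Rough check of the ~10 ppb odor threshold for skunk thiols.
# All inputs are illustrative assumptions, not measurements.
MOLAR_MASS_THIOL = 88.2   # g/mol for (E)-2-butene-1-thiol (C4H8S)
MOLAR_VOLUME_AIR = 24.0   # L/mol for air near room temperature

mass_g = 0.001            # assume 1 mg of thiol fully evaporated
room_volume_l = 50_000    # assume a 50 cubic meter room

moles_thiol = mass_g / MOLAR_MASS_THIOL
moles_air = room_volume_l / MOLAR_VOLUME_AIR

ppb = (moles_thiol / moles_air) * 1e9
print(f"{ppb:.1f} ppb")   # ~5.4 ppb: one milligram already nears the threshold
```

On these assumed numbers, about a milligram of evaporated thiol brings a whole room close to the detection threshold, which is consistent with the answer's point about extreme sensitivity.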
[ "Skunk refers to cannabis strains that are strong-smelling and have been likened to the smell of the spray from a skunk. These strains of cannabis are believed to have originated in the United States prior to development by Dutch growers. Just as with other strains of cannabis, skunk is commonly grown in controlled indoor environments under specialized grow lights, or in a greenhouse when full outdoor conditions are not suitable; skunk strains are hybrids of \"Cannabis sativa\" and \"Cannabis indica\".\n\nSection::::Breeding.:Varieties.:Sour Diesel.\n", "Skunk spray is composed mainly of three low-molecular-weight thiol compounds, (\"E\")-2-butene-1-thiol, 3-methyl-1-butanethiol, and 2-quinolinemethanethiol, as well as acetate thioesters of these. These compounds are detectable by the human nose at concentrations of only 10 parts per billion.\n\nSection::::Bites.\n", "\"Skunk\" refers to several named strains of potent cannabis, grown through selective breeding and sometimes hydroponics. It is a cross-breed of \"Cannabis sativa\" and \"C. indica\" (although other strains of this mix exist in abundance). Skunk cannabis potency ranges usually from 6% to 15% and rarely as high as 20%. The average THC level in coffee shops in the Netherlands is about 18–19%.\n", "Section::::Description.\n", "Section::::Recent genetic testing.\n", "The secretion of the spotted skunks differs from that of the striped skunks. The two major thiols of the striped skunks, (E)-2-butene-1-thiol and 3-methyl-1-butanethiol are the major components in the secretion of the spotted skunks along with a third thiol, 2-phenylethanethiol.\n\nThioacetate derivatives of the three thiols are present in the spray of the striped skunks but not the spotted skunks. They are not as odoriferous as the thiols. Water hydrolysis converts them to the more potent thiols. This chemical conversion may be why pets that have been sprayed by skunks will have a faint \"skunky\" odor on damp evenings.\n", "Many thiols have strong odors resembling that of garlic. The odors of thiols, particularly those of low molecular weight, are often strong and repulsive. The spray of skunks consists mainly of low-molecular-weight thiols and derivatives. These compounds are detectable by the human nose at concentrations of only 10 parts per billion. Human sweat contains (\"R\")/(\"S\")-3-methyl-3-sulfanylhexan-1-ol (MSH), detectable at 2 parts per billion and having a fruity, onion-like odor. (Methylthio)methanethiol (MeSCHSH; MTMT) is a strong-smelling volatile thiol, also detectable at parts per billion levels, found in male mouse urine. Lawrence C. Katz and co-workers showed that MTMT functioned as a semiochemical, activating certain mouse olfactory sensory neurons, attracting female mice. Copper has been shown to be required by a specific mouse olfactory receptor, MOR244-3, which is highly responsive to MTMT as well as to various other thiols and related compounds. A human olfactory receptor, OR2T11, has been identified which, in the presence of copper, is highly responsive to the gas odorants (see below) ethanethiol and \"t\"-butyl mercaptan as well as other low molecular weight thiols, including allyl mercaptan found in human garlic breath, and the strong-smelling cyclic sulfide thietane.\n", "Spotted skunks can spray up to roughly 10 feet.\n\nSection::::Deodorizing.\n\nChanging the thiols into compounds that have little or no odor can be done by oxidizing the thiols to sulfonic acids. 
Hydrogen peroxide and baking soda (sodium bicarbonate) are mild enough to be used on people and animals but can change hair color.\n\nStronger oxidizing agents, like sodium hypochlorite solutions—liquid laundry bleach—are cheap and effective for deodorizing other materials.\n\nSection::::Diet.\n", "Environment\n", "Section::::Biology.\n\nSection::::Biology.:Reproduction.\n", "Section::::Conservation status.\n", "Skunks are omnivorous, eating both plant and animal material and changing their diets as the seasons change. They eat insects, larvae, earthworms, grubs, rodents, lizards, salamanders, frogs, snakes, birds, moles, and eggs. They also commonly eat berries, roots, leaves, grasses, fungi and nuts.\n", "Spotted skunk\n\nThe genus Spilogale includes all skunks commonly known as spotted skunks and is composed of four extant species: \"S. gracilis, S. putorius, S. pygmaea, S. angustifrons\".\n\nSection::::Description.\n", "Section::::Composition.\n", "As with other related species, western spotted skunks possess a pair of large musk glands that open just inside the anus, and which can spray their contents through muscular action. The musk is similar to that of striped skunks, but contains 2-phenylethanethiol as an additional component, and lacks some of the compounds produced by the other species. These differences are said to give western spotted skunk musk a more pungent odor, but not to spread as widely as that of striped skunks.\n\nSection::::Distribution and habitat.\n", "
Within the recent genera, the stink badgers represent the earliest genus; the clade comprising them with \"Promephitis\" and \"Palaeomephitis\" is considered to be a sister group to all other skunks living today and other fossil forms.\n", "BULLET::::- \"Polygonum mite\" – tasteless water-pepper → \"Persicaria mitis\"\n\nBULLET::::- \"Polygonum nepalense\" → \"Persicaria nepalensis\"\n\nBULLET::::- \"Polygonum odoratum\" – Vietnamese coriander → \"Persicaria odorata\"\n\nBULLET::::- \"Polygonum orientale\" → \"Persicaria orientalis\"\n\nBULLET::::- \"Polygonum pensylvanicum\" – Pennsylvania smartweed or pink knotweed or pinkweed → \"Persicaria pensylvanica\"\n\nBULLET::::- \"Polygonum persicaria\" – redshank or persicaria or lady's thumb → \"Persicaria maculosa\"\n\nBULLET::::- \"Polygonum punctatum\" – dotted smartweed → \"Persicaria punctata\"\n\nBULLET::::- \"Polygonum runcinatum\" → \"Persicaria runcinata\"\n\nBULLET::::- \"Polygonum sagittatum\" – arrowleaf tearthumb, American tear-thumb or scratchgrass → \"Persicaria sagittata\"\n\nBULLET::::- \"Polygonum tinctorium\" → \"Persicaria tinctoria\"\n\nBULLET::::- \"Polygonum virginianum\" → \"Persicaria virginiana\"\n", "Section::::Behavior.\n\nSection::::Behavior.:Diet.\n", "Skunk\n\nSkunks are North and South American mammals in the family Mephitidae. While related to polecats and other members of the weasel family, skunks have as their closest Old World relatives the stink badgers. The animals are known for their ability to spray a liquid with a strong, unpleasant smell. Different species of skunk vary in appearance from black-and-white to brown, cream or ginger colored, but all have warning coloration.\n\nSection::::Etymology.\n", "Family Mephitidae (skunks and stink badgers) was once classified as mustelids, but are now recognized as a lineage in their own right. The 12 species of skunks are divided into four genera: \"Mephitis\" (hooded and striped skunks, two species), \"Spilogale\" (spotted skunks, four species), \"Mydaus\" (stink badgers, two species) and \"Conepatus\" (hog-nosed skunks, four species). The two skunk species in the genus \"Mydaus\" inhabit Indonesia and the Philippines; all other skunks inhabit the Americas from Canada to central South America.\n", "Tom Cruise Purple is a strain of cannabis sold in California by select licensed cannabis clubs. The strain is potent, and is packaged with a picture of the actor Tom Cruise laughing. Tom Cruise Purple is sold by cannabis purveyors in Northern California. 
Cruise sought out legal advice regarding the product, and considered a lawsuit against its manufacturers.\n\nSection::::Breeding.:Varieties.:Skunk.\n", "Section::::In the GRUNK.\n", "BULLET::::- \"Cercospora byliana\"\n\nBULLET::::- \"Cercospora brachypus\" - found on grapes\n\nBULLET::::- \"Cercospora brassicicola\" - infests many cole crops\n\nBULLET::::- \"Cercospora brunkii\" - found on \"Pelargonium\" and \"Geranium\".\n\nBULLET::::- \"Cercospora bunchosiae\"\n\nBULLET::::- \"Cercospora canescens\"\n\nBULLET::::- \"Cercospora cannabis\" - causes olive leaf spot on \"Cannabis\" spp., and is found on hops\n\nBULLET::::- \"Cercospora cantuariensis\" - found on hops\n\nBULLET::::- \"Cercospora capsici\" - causes \"frogeye\" leaf spot on peppers\n\nBULLET::::- \"Cercospora caribaea\"\n\nBULLET::::- \"Cercospora carotae\" - causes carrot leaf blight\n\nBULLET::::- \"Cercospora circumscissa\"\n\nBULLET::::- \"Cercospora citrullina\" - causes \"cucurbit leaf spot\" on watermelon and cucumber plants\n\nBULLET::::- \"Cercospora clemensiae\"\n", "Section::::Product.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-13812
How do we know what dinosaurs looked like if all we have is the bones?
We don't, for many of them. One common criticism is that we tend to simply stretch the skin over the bones, and if we applied the same technique to modern animal skeletons, our results would look almost nothing like the living animals.
[ "Section::::Episodes.\n\nSection::::Episodes.:Episode one: \"Central England\".\n\nBULLET::::- Dr. Philip Wilby of the British Geological Survey in Nottingham examines soft-tissue, preserved by the \"Medusa effect\", from a recently re-excavated Victorian fossil discovery.\n\nBULLET::::- Dr. Phil Manning compares a T-Rex with William Buckland’s Megalosaurus (the first scientifically identified dinosaur and the inspiration for Charles Dickens’ opening paragraph in \"Bleak House\").\n\nBULLET::::- Dr. Derek Siveter of Oxford University Museum and Dr. Mark Sutton of Imperial College London demonstrate virtual dissection that produces computer-models of microfossils.\n", "Section::::Biochronology.\n\nBecause species of aetosaurs typically have restricted fossil ranges and are abundant in the strata they are found in, they are useful in biochronology. Osteoderms are the most common remains associated with aetosaurs, so a single identifiable scute can accurately date the layer it is found in.\n", "BULLET::::- Micromosaics of Harold \"Henry\" Dalton: Microscopic mosaics from the 19th century depicting flowers, animals, and other objects, made entirely from individual butterfly wing scales and diatoms\n\nBULLET::::- The Stereofloral Radiographs of Albert G. Richards: A collection of stereographic radiographs of flowers\n\nBULLET::::- Rotten Luck: The Decaying Dice of Ricky Jay: A collection of decomposing antique dice once owned by magician Ricky Jay and documented in his book \"Dice: Deception, Fate, and Rotten Luck\"\n", "Section::::Research techniques.:CAT scans.\n", "Section::::Sources of material.\n\nThe primary sources of paleoparasitological material include mummified tissues, coprolites (fossilised dung) from mammals or dinosaurs, fossils, and amber inclusions. Hair, skins, and feathers also yield ectoparasite remains. Some archaeological artifacts document the presence of animal parasites. One example is the depiction of what appear to be mites in the ear of a \"hyaena-like\" animal in a tomb painting from ancient Thebes.\n", "The earliest definitive works of \"proto-paleoart\" that unambiguously depict the life appearance of fossil animals come from fifteenth and sixteenth century Europe. One such depiction is Ulrich Vogelsang's statue of a Lindwurm in Klagenfurt, Austria that dates to 1590. Writings from the time of its creation specifically identify the skull of \"Coelodonta antiquitatis\", the woolly rhinoceros, as the basis for the head in the restoration. This skull had been found in a mine or gravel pit near Klagenfurt in 1335, and remains on display today. Despite its poor resemblance of the skull in question, the Lindwurm statue was thought to be almost certainly inspired by the find.\n", "Section::::History of study.\n", "MR images have also been obtained from the brain of a 3200-year-old Egyptian mummy. 
The prospects are slim, however, that any three-dimensional imaging dataset of a fossil, semi-fossil or mummified brain will ever be of much use to morphometric analyses of the kind described here, since the processes of mummification and fossilization heavily alter the structure of soft tissues in a way specific to the individual specimen and subregions therein.\n", "Section::::Disciplines.:Forensic ornithology.\n\nBird remains can be identified, first and foremost from feathers (which are distinctive to a particular species at both macroscopic and microscopic levels).\n\nSection::::Disciplines.:Forensic odontology.\n\nOdontologists or dentists can be used in order to aid in an identification of degraded remains. Remains that have been buried for a long period or which have undergone fire damage often contain few clues to the identity of the individual. Tooth enamel, as the hardest substance in the human body, often endures and as such odontologists can in some circumstances compare recovered remains to dental records.\n\nSection::::Disciplines.:Forensic pathology.\n", "Section::::Description and interpretation.:Skin.\n", "BULLET::::- New specimen of the rhinesuchid \"Australerpeton cosgriffi\" (a skull and mandible) is described from the Permian Rio do Rasto Formation (Brazil) by Azevedo, Vega & Soares (2017).\n\nBULLET::::- A description of the anatomy of the braincase and middle ear regions of an exceptionally well-preserved skull of \"Stanocephalosaurus amenasensis\" from the Triassic of Algeria is published by Arbez, Dahoumane & Steyer (2017).\n", "Several professional paleoartists recommend the consideration of contemporary animals in aiding accurate restorations, especially in cases where crucial details of pose, appearance and behavior are impossible to know from fossil material. For example, most extinct animals' coloration and patterning are unknown from fossil evidence, but these can be plausibly restored in illustration based on known aspects of the animal's environment and behavior, as well as inference based on function such as thermoregulation, species recognition, and camouflage.\n\nSection::::Aims and production.:Artistic principles.\n", "BULLET::::- A study on the morphology of the braincase of the phytosaur \"Wannia scurriensis\" is published by Lessner & Stocker (2017).\n\nBULLET::::- A description of the morphology of the sacrum of \"Smilosuchus adamanensis\" is published by Griffin \"et al.\" (2017).\n\nSection::::Other reptiles.\n\nSection::::Other reptiles.:Research.\n\nBULLET::::- A study on the red blood cell size in fossil tetrapods, especially archosauromorph reptiles and synapsids, as indicated by bone microstructure, is published by Huttenlocker & Farmer (2017).\n", "Section::::Taphonomy.\n", "Section::::Description and interpretation.:Skin frill.\n", "The \"D. herschelensis\" skeleton was discovered scattered around the dig site. The skull, lower jaw, ribs, pelvis and shoulder blades were all recovered, but the spine was incomplete, so the exact number of vertebrae the living animal would have had is unknown. 
All four limbs are missing, with the exception of nine small phalanges (finger bones) and a small number of limb bones found close by, which may belong to the animal in question.\n", "Section::::History of discovery.\n", "By measuring the lengths of relevant bones, adding a factor for the non-bone contribution, and comparing them to historical numbers, one can estimate the stature of a skeleton.\n\nSection::::Organic remains.:Faunal analysis.\n", "Section::::Some of the museum specimens.\n\nBULLET::::- \"Ampelosaurus atacis\" (12 meters long complete skeleton)\n\nBULLET::::- \"Camarasaurus\" (skull)\n\nBULLET::::- \"Dunkleosteus terrelli\" (1.1 meters long skull of a giant fish)\n\nBULLET::::- \"Mamenchisaurus\" (22 meters long complete skeleton)\n\nBULLET::::- \"Mixopterus\" (eurypterid, kind of gigantic sea scorpion)\n\nBULLET::::- \"Oviraptor philoceratops\" (skull)\n\nBULLET::::- \"Psittacosaurus\" (complete skeleton)\n\nBULLET::::- \"Psittacosaurus\" (life size model)\n\nBULLET::::- \"Quetzalcoatlus\" (12 meters wingspan complete skeleton, but not a dinosaur, it is a pterosaur)\n\nBULLET::::- \"Stenopterygius\" (complete skeleton, an ichthyosaur rather than a dinosaur)\n\nBULLET::::- \"Struthiosaurus\" (complete skeleton)\n\nBULLET::::- \"Tarascosaurus salluvicus\" (life size model)\n\nBULLET::::- \"Triceratops\" (skull)\n\nBULLET::::- \"Tsintaosaurus spinorhinus\" (complete skeleton)\n", "Section::::History of discoveries.:The first complete skeletons.\n", "Section::::History.\n", "Section::::Paleobiology.:Diet.\n", "CT scans are most commonly used in paleoradiological studies because they can create images of soft tissue, organs and body cavities of mummified remains without performing an invasive and damaging autopsy. This enables archaeologists and anthropologists to digitally unwrap the remains and reveal what they contain. CT scanners create these images by taking multiple radiographic planes (or cuts) of the body at different angles which record the layers of different structures in the remains. This differs from typical radiographic scans (X-rays) where all the structural layers are documented in one image, which can create shadows and therefore limit their accuracy.\n", "BULLET::::- A study on the morphological diversity of the snouts and frills of the ceratopsians, as well as on the skull and jaw shape changes in the evolution of the group is published by Maiorino \"et al.\" (2017).\n\nBULLET::::- New specimen of \"Liaoceratops yanzigouensis\" is described from the Lujiatun Bed of the Lower Cretaceous Yixian Formation (China) by Yang \"et al.\" (2017), who describe the postcranial skeleton of \"L. yanzigouensis\" for the first time.\n", "Scientists will probably never be certain of the largest and smallest dinosaurs to have ever existed. This is because only a tiny percentage of animals were ever fossilized and most of these remain buried in the earth. Few of the specimens that are recovered are complete skeletons, and impressions of skin and other soft tissues are rare. Rebuilding a complete skeleton by comparing the size and morphology of bones to those of similar, better-known species is an inexact art, and reconstructing the muscles and other organs of the living animal is, at best, a process of educated guesswork.\n" ]
[ "We can determine what dinosaurs look like from their bones." ]
[ "We don't know what many dinosaurs actually looked like from just having their bones." ]
[ "false presupposition" ]
[ "We can determine what dinosaurs look like from their bones.", "We can determine what dinosaurs look like from their bones." ]
[ "false presupposition", "normal" ]
[ "We don't know what many dinosaurs actually looked like from just having their bones.", "We don't know what many dinosaurs actually looked like from just having their bones." ]
2018-02469
What do medals in the Olympics actually do for the winner? Also, what happens if a country wins the most medals?
They don't "do" anything. Medalling is proof that you are among the best in the world at your sport. That can help you get sponsors and endorsements, and it can help you leverage a post-sports career in something like journalism or broadcasting if you play things right. But there are no special privileges or anything that come with having a medal.
[ "Olympic medal\n\nAn Olympic medal is awarded to successful competitors at one of the Olympic Games. There are three classes of medal: gold, awarded to the winner; silver, awarded to the 1st runner-up; and bronze, awarded to the second runner-up. The granting of awards is laid out in detail in the Olympic protocols.\n", "Details about the medals from each of the Summer Olympic Games:\n\nSection::::Individual design details.:Winter Olympic medal designs.\n\nDetails about the medals from each of the Winter Olympic Games:\n\nSection::::Participation medals.\n\nSince the beginning of the modern Olympics the athletes and their support staffs, event officials, and certain volunteers involved in planning and managing the games have received commemorative medals and diplomas. Like the winners' medals, these are changed for each Olympiad, with different ones issued for the summer and winter games.\n\nSection::::Presentation.\n", "In addition to generally supporting their Olympic athletes, some countries provide sums of money and gifts to medal winners, depending on the classes and number of medals won.\n\nTotal medals won are used to rank competitor nations in medal tables, these may be compiled for a specific discipline, for a particular Games, or over all time. These totals always total event placements rather than actual medals — a victory in a team event (such as relay race) equates to a single gold for such rankings even though each team member would receive a physical medal.\n\nSection::::Introduction and early history.\n", "Medals are not the only awards given to competitors; every athlete placed first to eighth receives an Olympic diploma. Also, at the main host stadium, the names of all medal winners are written onto a wall. Finally, as noted below, all athletes receive a participation medal and diploma.\n\nSection::::Production and design.\n\nThe IOC dictates the physical properties of the medals and has the final decision about the finished design. Specifications for the medals are developed along with the National Olympic Committee (NOC) hosting the Games, though the IOC has brought in some set rules:\n", "List of medal sweeps at the World Championships in Athletics\n\nA sweep in athletics is when one team wins all available medals in a single event in a sporting event. At the highest level, that would be when one nation wins all the medals in an athletics event at the World Championships in Athletics. In athletics the maximum number of entrants from a single country in most events is three, allowing in theory for athletes from the same country to finish in all three top places.\n\nSection::::Men.\n", "This list does not includes events where two bronze medals are awarded due to repechage or the non-existence of a bronze-medal playoff as they are awarded due to the rules of the sports, thus not considered as ties. All events that are not listed, namely those below, awarded two bronze medals.\n", "'Total' shows the number of ties for medals in Olympics history, while 'Events' shows the number of events with at least one medal tie. There can be more than one tie for medals for one event as there can be ties for gold, silver and bronze.\n\nSection::::Ties not included in this list.\n", "List of medal sweeps in Olympic athletics\n\nA podium sweep is when one team wins all available medals in a single event in a sporting event. At the highest level, that would be when one nation wins all the medals in the Summer Olympics Athletics. 
Many Olympic sports or events do not allow three entries into a single event in the Olympics, making a sweep impossible. But in Athletics (excluding relays) the maximum for a single country is three.\n", "Section::::Background.:National goals.\n\nThe sports funding agencies of some nations have set targets of reaching a certain rank in the medals table, usually based on gold medals; examples are Australia, Japan, France, and Germany. Funding is reduced for sports with low prospects of medals.\n", "The presentation of the medals and awards changed significantly until the 1932 Summer Olympics in Los Angeles brought in what has now become standard. Before 1932 all the medals were awarded at the closing ceremony, with the athletes wearing evening dress for the first few Games. Originally the presenting dignitary was stationary while the athletes filed past to receive their medals. The victory podium was introduced upon the personal instruction in 1931 of Henri de Baillet-Latour, who had seen one used at the 1930 British Empire Games. The winner is in the middle at a higher elevation, with the silver medallist to the right and the bronze to the left. At the 1932 Winter Olympics, medals were awarded in the closing ceremony, with athletes for each event in turn mounting the first-ever podium. At the Summer Olympics, competitors in the Coliseum received their medals immediately after each event for the first time; competitors at other venues came to the Coliseum next day to receive their medals. Later Games have had a victory podium at each competition venue.\n", "At competitions, medals are awarded to the first, second and third-place winners in each event, and ribbons are awarded to athletes who finish in fourth through eighth place.\n\nSection::::Unified Sports.\n", "Section::::Production and design.:Custom reverse designs.\n\nThe German Olympic Committee, Nationales Olympisches Komitee für Deutschland, were the first Summer Games organisers to elect to change the reverse of the medal. The 1972 design was created by Gerhard Marcks, an artist from the Bauhaus, and features mythological twins Castor and Pollux. Since then the Organising Committee of the host city has been given the freedom of the design of the reverse, with the IOC giving final approval.\n\nSection::::Production and design.:Comparison between Summer and Winter.\n", "During the closing ceremony, in the Olympic Stadium, medals were presented for Cross country skiing at the cross-country skiing men's 50 km free event, one of the last events held at the Games. In a new practice for Winter Olympics closing ceremonies, the medals for this long race were awarded during the ceremony similar to the way the medals for the men's marathon are awarded during the closing ceremonies of Summer Olympic Games.\n", "Medal designs have varied considerably since the first Olympic Games in 1896, particularly in size and weight. A standard obverse (front) design of the medals for the Summer Olympic Games began in 1928 and remained for many years, until its replacement at the 2004 Games as the result of controversy surrounding the use of the Roman Colosseum rather than a building representing the Games' Greek roots. The medals of the Winter Olympic Games never had a common design, but regularly feature snowflakes and the event where the medal has been won.\n", "The sections above are based on information published by the International Olympic Committee. 
Various sources deal with some of the entries in the preceding sections differently.\n\nSection::::Variations.:Early Olympics.\n\nFor the 1900 Summer Olympics several countries are credited with appearances that are not considered official by the IOC. Only one of these cases concerns a medal. A gold medal that is officially added to France's total is given to Luxembourg.\n", "Section::::List of stripped Olympic medals.\n\nBULLET::::- This is the list of Olympic medals stripped by the IOC, the governing body of the Olympics.\n\nBULLET::::- (X) medal declared vacant\n\nBULLET::::- (Y) medal yet to be reallocated or declared vacant\n\nBULLET::::- (Z) not due to doping; all others were due to doping offenses\n\nNotes:\n\nSection::::List of Olympic medals stripped and later returned.\n\nHere is the list of Olympic medals that were stripped by the IOC and later returned by the IOC.\n\nSection::::Stripped, returned, and stripped.\n", "In what is known as the Antwerp Ceremony (because the tradition began at the Antwerp Games), the mayor of the city that organized the Games transfers a special Olympic flag to the president of the IOC, who then passes it on to the mayor of the city hosting the next Olympic Games. The receiving mayor then waves the flag eight times. There are four such flags:\n", "The athletes or teams who place first, second, or third in each event receive medals. The winners receive gold medals, which were solid gold until 1912, then made of gilded silver and now gold-plated silver. Every gold medal however must contain at least six grams of pure gold. The runners-up receive silver medals and the third-place athletes are awarded bronze medals. In events contested by a single-elimination tournament (most notably boxing), third place might not be determined and both semifinal losers receive bronze medals. At the 1896 Olympics only the first two received a medal; silver for first and bronze for second. The current three-medal format was introduced at the 1904 Olympics. From 1948 onward athletes placing fourth, fifth, and sixth have received certificates, which became officially known as victory diplomas; in 1984 victory diplomas for seventh- and eighth-place finishers were added. At the 2004 Summer Olympics in Athens, the gold, silver, and bronze medal winners were also given olive wreaths. The IOC does not keep statistics of medals won on a national level (except for team sports), but NOCs and the media record medal statistics as a measure of success.\n", "As the IOC does not consider its sorting of nations to be an official ranking system, various methods of ranking nations are used. Some rankings sort by the total number of medals a country has won, but most list countries by the number of gold medals won. 
However, if two or more teams have the same number of gold medals, the number of silver medals is then compared, and then the number of bronze medals.\n\nSection::::Ranking systems.:Medal count ranking.\n", "BULLET::::- The Olympic Cup is awarded to institutions or associations with a record of merit and integrity in actively developing the Olympic Movement\n\nBULLET::::- The Olympic Order is awarded to individuals for exceptionally distinguished contributions to the Olympic Movement; superseded the Olympic Certificate\n\nBULLET::::- The Olympic Laurel is awarded to individuals for promoting education, culture, development, and peace through sport\n\nBULLET::::- The Olympic town status has been given to some towns that have been particularly important for the Olympic Movement\n\nSection::::IOC members.\n", "After each Olympic event is completed, a medal ceremony is held. The Summer Games would usually conduct the ceremonies immediately after the event at the respective venues, whereas the Winter editions would present the medals at a nightly victory ceremony held at a medal plaza, excluding most of the indoor events. A three–tiered rostrum is used for the three medal winners, with the gold medal winner ascending to the highest platform, in the centre, with the silver and bronze medalists flanking. The medals are awarded by a member of the IOC. The IOC member is usually accompanied by a person from the sports federation governing the sport (such as IAAF in athletics or FINA in swimming), who presents each athlete with a small bouquet of flowers. When the Games were held in Athens in 2004, the medal winners also received olive wreaths in honor of the tradition at the Ancient Olympics. For the Rio Games in 2016, the flowers were replaced by a small 3D model of the Games' logo, and at the Pyeongchang Games in 2018 by a stuffed animal. After medals are distributed, the flags of the nations of the three medalists are raised. The flag of the gold medalist's country is in the centre and raised the highest while the flag of the silver medalist's country is on the left facing the flags and the flag of the bronze medalist's country is on the right, both at lower elevations than the gold medalist's country's flag.\n", "BULLET::::2. The medal cannot be assigned to a team, but it may go to a single team-sport athlete if the committee finds it justified.\n\nBULLET::::3. The gold medal can only be distributed once to the same person.\n\nBULLET::::4. If two or more candidates are, in the committee's opinion, equal:\n\nBULLET::::5. Regard shall be had to the importance the sports achievement has, or can attain, for the particular sport.\n\nBULLET::::6. The Committee shall as far as possible avoid awarding two medals in the same year.\n\nBULLET::::7. The Committee may, if no candidates are worthy, decline to award a gold medal in a given year.\n",
Since the 2006 Winter Olympics, the medals for the men's 50 km cross-country skiing event were presented at the closing ceremony. The medallist's national flags are then hoisted and the national anthem of the gold medallist's country is played.\n", "Sporting success predictions and ratings can be univariate, i.e. based on one independent variable, such as a country's population size and the number of medals is divided by the population of the country, or multivariate, where resources-per-person in the form of GDP per capita and other variables are included. \n", "On the second day of the standard icosathlon, the following events are contested:\n\nBULLET::::- 110 metre hurdles\n\nBULLET::::- Discus throw\n\nBULLET::::- 200 metres\n\nBULLET::::- Pole vault\n\nBULLET::::- 3000 metres\n\nBULLET::::- pause\n\nBULLET::::- 400 metre hurdles\n\nBULLET::::- Javelin throw\n\nBULLET::::- 1500 metres\n\nBULLET::::- Triple jump\n\nBULLET::::- 10000 m\n\nEach event is scored according to the decathlon scoring tables or, for non-decathlon events, the IAAF points tables. At the conclusion of each icosathlon, the competitor with the highest point total is declared the winner.\n" ]
[ "Olypic medals do something for the winnner." ]
[ "The medals don't do anything. It is just an award. " ]
[ "false presupposition" ]
[ "Olypic medals do something for the winnner." ]
[ "false presupposition" ]
[ "The medals don't do anything. It is just an award. " ]
2018-06247
How come spacecraft are able to take advantage of the rotational speed of Earth, but airplanes aren't?
Spacecraft are going to leave the Earth's atmosphere and enter into orbit. Orbits involve moving around the Earth so fast that you move "sideways" away from the Earth at the same speed that you fall. At that point, your velocity relative to the Earth actually matters. Or, to phrase it another way, a spacecraft needs to orbit with a velocity that is *independent* of the Earth's rotation. By contrast, a plane is never going to need to leave the frame of reference where the ground is considered stationary. Its velocity is best treated as *dependent* on the Earth's rotation. That probably didn't make a lot of sense, so let me use an analogy. The setup: You're on a large merry-go-round, and you're wearing roller skates. Scenario 1: You just realized that you're late for dinner, and your mom is going to be super mad if you're not home ASAP. You can either wait until the merry-go-round is moving you *toward* the direction you need to go, or *away* from the direction you need to go. What do you do? Clearly you need to wait until you're already moving in the direction you want to go before you jump off, so you get a nice boost. This is because the velocity you need to reach is in a frame of reference where you view the merry-go-round as spinning. Going one direction over another will affect your velocity when you're in the new frame of reference. This is like launching a rocket into orbit. Scenario 2: You're on the merry-go-round, and you realized that you dropped your wallet on a different part of the merry-go-round. Since you're not going to leave the frame of reference where you treat the merry-go-round as stationary, you won't get a boost for going in one direction over another. This is like flying a plane from one city to another.
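A quick arithmetic check of the comment above: the sketch below (a minimal Python illustration added for clarity, not taken from the comment or the passages) computes the eastward surface speed that Earth's rotation contributes at a given latitude, v = 2πR·cos(latitude)/T, using the sidereal rotation period. The launch-site latitudes are approximate values assumed for the example.

    import math

    EARTH_RADIUS_M = 6_378_137   # equatorial radius of the Earth, metres
    SIDEREAL_DAY_S = 86_164.1    # time for one full rotation, seconds

    def rotational_speed_mph(latitude_deg: float) -> float:
        """Eastward surface speed due to Earth's rotation at the given latitude."""
        circumference_m = 2 * math.pi * EARTH_RADIUS_M * math.cos(math.radians(latitude_deg))
        return circumference_m / SIDEREAL_DAY_S * 2.23694  # m/s -> mph

    # Approximate launch-site latitudes (assumed for illustration):
    print(f"Equator:          {rotational_speed_mph(0.0):7.0f} mph")   # ~1040 mph
    print(f"Kourou, ~5.2 N:   {rotational_speed_mph(5.2):7.0f} mph")   # ~1036 mph
    print(f"Kennedy, ~28.5 N: {rotational_speed_mph(28.5):7.0f} mph")  # ~914 mph

These numbers line up with the roughly 1,000 mph and 900 mph figures quoted in the passages below: a rocket launched due east starts with that speed already in hand in the inertial frame, while an airplane, which never leaves the rotating frame, gets no equivalent credit.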
[ "Newton's second law, applied to rotational rather than linear motion, becomes:\n\nwhere τ is the net torque (or \"moment\") exerted on the vehicle, I is its moment of inertia about the axis of rotation, and α is the angular acceleration vector in radians per second per second. Therefore, the rotational rate in degrees per second per second is\n", "Steady flight is defined as flight where the aircraft's linear and angular velocity vectors are constant in a body-fixed reference frame such as the body frame or wind frame. In the Earth frame, the velocity may not be constant since the airplane may be turning, in which case the airplane has a centripetal acceleration (\"V\"cos(\"γ\"))/\"R\" in the \"x\"-\"y\" plane, where \"V\" is the magnitude of the true airspeed and \"R\" is the turn radius.\n", "In many flight dynamics applications, the Earth frame is assumed to be inertial with a flat \"x\",\"y\"-plane, though the Earth frame can also be considered a spherical coordinate system with origin at the center of the Earth.\n\nThe other two reference frames are body-fixed, with origins moving along with the aircraft, typically at the center of gravity. For an aircraft that is symmetric from right-to-left, the frames can be defined as:\n\nBULLET::::- Body frame\n\nBULLET::::- Origin - airplane center of gravity\n\nBULLET::::- \"x\" axis - positive out the nose of the aircraft in the plane of symmetry of the aircraft\n", "A good example of actually using earth's rotational energy is the location of the European spaceport in French Guiana (on S. American continent). Wiki: Guiana Space Centre. This is within about 5 degrees of the equator, so space rocket launches (for primarily geo-stationary satellites) from here to the east obtain nearly all of the full rotational speed of the earth at the equator (about 1,000 mph, sort of a \"sling-shot benefit). Rocket launches easterly from Kennedy (USA) on the other hand obtain only about 900 mph added benefit due to the lower relative rotational speed of the earth at that northerly latitude. This saves significant rocket fuel per launch.\n", "Section::::Reference frames.\n\nSteady flight analysis uses three different reference frames to express the forces and moments acting on the aircraft. They are defined as:\n\nBULLET::::- Earth frame (assumed inertial)\n\nBULLET::::- Origin - arbitrary, fixed relative to the surface of the Earth\n\nBULLET::::- \"x\" axis - positive in the direction of north\n\nBULLET::::- \"y\" axis - positive in the direction of east\n\nBULLET::::- \"z\" axis - positive towards the center of the Earth\n\nBULLET::::- Body frame\n\nBULLET::::- Origin - airplane center of gravity\n", "Once velocity and flight path angle are known, altitude formula_11 and downrange distance formula_12 are computed as:\n\nThe planet-fixed values of \"v\" and \"θ\" are converted to space-fixed (inertial) values with the following conversions:\n\nwhere \"ω\" is the planet's rotational rate in radians per second, \"φ\" is the launch site latitude, and \"A\" is the launch azimuth angle.\n", "and the angular rotation rate ω (degrees per second) is obtained by integrating α over time, and the angular rotation θ is the time integral of the rate, analogous to linear motion. 
The three principal moments of inertia I_x, I_y, and I_z about the roll, pitch and yaw axes, are determined through the spacecraft's center of mass.\n", "Section::::Introduction.\n\nSection::::Introduction.:Reference frames.\n\nThree right-handed, Cartesian coordinate systems see frequent use in flight dynamics. The first coordinate system has an origin fixed in the reference frame of the Earth:\n\nBULLET::::- Earth frame\n\nBULLET::::- Origin - arbitrary, fixed relative to the surface of the Earth\n\nBULLET::::- \"x\" axis - positive in the direction of north\n\nBULLET::::- \"y\" axis - positive in the direction of east\n\nBULLET::::- \"z\" axis - positive towards the center of the Earth\n", "When a spacecraft in orbit is slowed sufficiently, its altitude decreases to the point at which aerodynamic forces begin to rapidly slow the motion of the vehicle, and it returns to the ground. Without retrorockets, spacecraft would remain in orbit for years until their orbits naturally slow, and reenter the atmosphere at a much later date; in the case of manned flights, long after life support systems have been expended. Therefore, it is critical that spacecraft have extremely reliable retrorockets.\n\nSection::::Uses.:Project Mercury.\n", "Aircraft principal axes\n\nAn aircraft in flight is free to rotate in three dimensions: \"yaw\", nose left or right about an axis running up and down; \"pitch\", nose up or down about an axis running from wing to wing; and \"roll\", rotation about an axis running from nose to tail. The axes are alternatively designated as \"vertical\", \"transverse\", and \"longitudinal\" respectively. These axes move with the vehicle and rotate relative to the Earth along with the craft. These definitions were analogously applied to spacecraft when the first manned spacecraft were designed in the late 1950s.\n", "In both the circumlunar case and the cislunar case, the craft can be moving generally from west to east around the Earth (co-rotational), or from east to west (counter-rotational).\n\nFor trajectories in the plane of the Moon's orbit with small periselenum radius (close approach of the Moon), the flight time for a cislunar free-return trajectory is longer than for the circumlunar free-return trajectory with the same periselenum radius. Flight time for a cislunar free-return trajectory decreases with increasing periselenum radius, while flight time for a circumlunar free-return trajectory increases with periselenum radius.\n", "BULLET::::- A gravity assist maneuver, sometimes known as a \"slingshot maneuver\" or \"Crocco mission\" after its 1956 proposer Gaetano Crocco, results in an opposition-class mission with a much shorter dwell time at the destination. This is accomplished by swinging past another planet, using its gravity to alter the orbit. A round trip to Mars, for example, can be significantly shortened from the 943 days required for the conjunction mission, to under a year, by swinging past Venus on return to the Earth.\n\nSection::::Interplanetary flight.:Hyperbolic departure.\n", "A spacecraft enters orbit when its centripetal acceleration due to gravity is less than or equal to the centrifugal acceleration due to the horizontal component of its velocity. For a low Earth orbit, this velocity is about ; by contrast, the fastest manned airplane speed ever achieved (excluding speeds achieved by deorbiting spacecraft) was in 1967 by the North American X-15. 
The energy required to reach Earth orbital velocity at an altitude of is about 36 MJ/kg, which is six times the energy needed merely to climb to the corresponding altitude.\n", "The initial direction of a minimum-delta-v trajectory points halfway between straight up and straight toward the destination point (which is below the horizon). Again, this is the case if the Earth's rotation is ignored. It is not exactly true for a rotating planet unless the launch takes place at a pole.\n\nSection::::Flight duration.\n", "The forces acting on spacecraft are of three types: propulsive force (usually provided by the vehicle's engine thrust); gravitational force exerted by the Earth and other celestial bodies; and aerodynamic lift and drag (when flying in the atmosphere of the Earth or another body, such as Mars or Venus). The vehicle's attitude must be taken into account because of its effect on the aerodynamic and propulsive forces. There are other reasons, unrelated to flight dynamics, for controlling the vehicle's attitude in non-powered flight (e.g., thermal control, solar power generation, communications, or astronomical observation).\n", "Section::::Interplanetary flight.:Heliocentric transfer orbit.\n\nThe transfer orbit required to carry the spacecraft from the departure planet's orbit to the destination planet is chosen among several options:\n", "BULLET::::- The fastest fixed-wing aircraft, and fastest glider, is the Space Shuttle, a rocket-glider hybrid, which has re-entered the atmosphere as a fixed-wing glider at more than Mach 25 — ver 25 times the speed of sound, about 17,000 mph at re-entry to Earth's atmosphere.\n", "For situations where propellant consumption may be a problem (such as long-duration satellites or space stations), alternative means may be used to provide the control torque, such as reaction wheels or control moment gyroscopes.\n\nSection::::Orbital flight.\n", "In practice, this is accomplished by matching the rotation of the surface below, by reaching a particular altitude where the orbital speed almost matches the rotation below, in an equatorial orbit. As the speed decreases slowly, then an additional boost would be needed to increase the speed back to a matching speed, or a retro-rocket could be fired to slow the speed when too fast. \n", "Flight calculations are made quite precisely for space missions, taking into account such factors as the Earth's oblateness and non-uniform mass distribution; gravitational forces of all nearby bodies, including the Moon, Sun, and other planets; and three-dimensional flight path. For preliminary performance analysis, some simplifying assumptions can be made: the planet is spherical and uniform; the vehicle can be simulated as a point mass; flight path assumes a two-body patched conic approximation; and the local flight path lies in a single plane) with reasonably small loss of accuracy.\n", "Because centripetal acceleration is:\n\nNewton's second law in the horizontal direction can be expressed mathematically as:\n\nwhere:\n\nIn straight level flight, lift is equal to the aircraft weight. In turning flight the lift exceeds the aircraft weight, and is equal to the weight of the aircraft (\"mg\") divided by the cosine of the angle of bank:\n\nwhere \"g\" is the gravitational field strength.\n\nThe radius of the turn can now be calculated:\n", "Supersonic aircraft like jet fighters or Concorde and Tupolev Tu-144 supersonic transports are the only aircraft able to overtake the maximum speed of the terminator at the equator. 
However, slower vehicles can overtake the terminator at higher latitudes, and it is possible to walk faster than the terminator at the poles, near to the equinoxes. The visual effect is that of seeing the sun rise in the west, or set in the east.\n\nSection::::Earth's terminator.:Grey-line radio propagation.\n", "BULLET::::- Increasing the departure apsis speed (and thus the semi-major axis) results in a trajectory which crosses the destination planet's orbit non-tangentially before reaching the opposite apsis, increasing delta-v but cutting the outbound transit time below the maximum.\n", "The speed at a perigee of 6555 km from the centre of the Earth for trajectories passing between 2000 and 20 000 km from the Moon is between 10.84 and 10.92 km/s regardless of whether the trajectory is cislunar or circumlunar or whether it is co-rotational or counter-rotational.\n", "The rotating skyhook, or momentum-exchange tether, is an idea related to the space elevator concept. It is one of the many proposed applications of space tethers, which include some propulsion systems. The tether is rotated from a heavy orbiting vehicle such that the far end, weighted with a docking station, periodically enters Earth's atmosphere. With the right timing, a fast aircraft can transfer cargo and passengers during the brief time the skyhook is at the bottom of its cycle and stationary relative to Earth's surface.\n\nSection::::Space flight.:Light sail.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-24551
Why do salt crystals formed millions of years ago expire in under a year?
Because that's the date the manufacturer printed on the package. That's literally the only reason. Salt does not degrade. As far as why the manufacturer would choose to print a specific date, it's more likely to be used as a tracking number than an expiration date, enabling manufacturers to identify the cause of quality control issues.
[ "Thomas also criticised Humphreys' idea that there is \"not enough sodium in the sea\" for a several billion year old sea, writing, \"Humphreys finds estimates of oceanic salt accumulation and deposition that provide him the data to 'set' an upper limit of 62 million years. But modern geologists do not use erratic processes like these for clocks. It's like someone noticing that (A) it's snowing at an inch per hour, (B) the snow outside is four feet deep, and then concluding that (C) the Earth is just 48 hours, or two days, in age. Snowfall is erratic; some snow can melt; and so on. The Earth is older than two days, so there must be a flaw with the 'snow' dating method, just as there is with the 'salt' method.\"\n", "When salt extrudes and flows at the surface, it becomes a salt glacier (also known as a salt fountain). Unlike underground salt structures, when rock salt is uncovered, it is exposed to rainwater, wind and heat from the sun that could lead to rapid deformation of salt structure within a short time, which can be daily to seasonal.\n\nSection::::Salt dynamics.:Surface salt structure.:Uplift of salt glaciers.\n", "Otherwise, the minerals trona or thermonatrite and nahcolite are commonly formed. As the evaporation of a salt lake will occur over geological time spans, during which also part or all of the salt beds might redissolve and recrystallize, deposits of sodium carbonate can be composed of layers of all these minerals.\n\nThe following list may include geographical sources of either natron or other hydrated sodium carbonate minerals:\n\nBULLET::::- Africa\n\nBULLET::::- Chad\n\nBULLET::::- shores of Lake Chad\n\nBULLET::::- Trou au Natron\n\nBULLET::::- Era Kohor crater on Emi Koussi\n\nBULLET::::- Egypt\n\nBULLET::::- Wadi El Natrun (Natron Valley)\n\nBULLET::::- Ethiopia\n\nBULLET::::- Showa Province\n", "Crenarchaeol and other GDGTs can be preserved in the environment for hundreds of millions of years under the right conditions. Most GDGTs degrade at between 240 and 300 °C and so are not found in rocks that have undergone heating to temperatures higher than 300 °C. GDGTs undergo degradation when exposed to oxygen but the relative concentrations of sediment GDGTs tends to remain the same even during degradation, meaning that degradation does not interfere with proxies like TEX-86 that are based on the ratios of different GDGTs.\n\nSection::::As a biomarker.:Marine nitrogen cycle.\n", "These lakes are most common in western Canada, and the northern part of Washington state, USA. One of the examples, is Basque Lake 2 in Western Canada, which is highly concentrated in magnesium sulfate. In summer it deposits epsomite (\"Epsom salts\"). In winter, it deposits meridianiite. This is named after Meridiani Planum where Opportunity rover found crystal molds in sulfate deposits (Vugs) which are thought to be remains of this mineral which have since been dissolved or dehydrated. It is preferentially formed at subzero temperatures, and is only stable below 2 °C, while Epsomite (·7) is favored at higher temperatures.\n", "Selenite crystals: A designated area of the of salt flats at the refuge has gypsum concentrations high enough to grow selenite, a crystalline form of gypsum. The selenite crystals found there have an hourglass-shaped sand inclusion that is not known to occur in selenite crystals found elsewhere in the world. 
Digging for crystals is allowed, but only from April 1 through October 15 to protect this vital Whooping Crane habitat.\n", "On the west edge of the lake, visitors can dig for selenite crystals. These crystals feature an hourglass inclusion which is unique to the Great Salt Plains. Scientists believe that salt was deposited during repeated water-level rises of a shallow sea millions of years ago. The supply of salt is kept intact by saline groundwater that flows just a few feet below the surface. When the water evaporates, a layer of salt remains on the surface. This process also plays a role in the formation of selenite crystals that visitors covet.\n\nSection::::Future of the lake.\n", "Salt is expensive to remove from water, and salt content is an important factor in water use (such as potability). Increases in salinity have been observed in lakes and rivers in the United States, due to common road salt and other salt de-icers in runoff.\n", "When gypsum dehydrates severely, anhydrite is formed. If water is reintroduced, gypsum can and will reform – including as the four crystalline varieties. An example of gypsum crystals reforming in modern times is found at Philips Copper Mine (closed and abandoned), Putnam County, New York, US where selenite micro crystal coatings are commonly found on numerous surfaces (rock and otherwise) in the cave and in the dump.\n", "Section::::Resource value.\n", "\"The salts collected from this lake vary in their nature and composition and from their-appearance are easily separated by men accustomed to handling them. Various names are given to some five or six main varieties, but there is no fixed line between one salt and another, their compositions depending upon the period and condition of crystallization. At the present time large quantities of these salts are lying on the shores of the lake...\"\n\nWith the process of crystallization, sodium chloride or common salt is formed along with the carbonates of soda resulting in a number of products, as explained below.\n", "The plant usually uses C3 carbon fixation, but when it becomes water- or salt-stressed, it is able to switch to Crassulacean acid metabolism. Like many salt-tolerant plants, \"M. crystallinum\" accumulates salt throughout its life, in a gradient from the roots to the shoots, with the highest concentration stored in epidermal bladder cells. The salt is released by leaching once the plant dies. This results in a detrimental osmotic environment preventing the growth of other, non-salt-tolerant species while allowing \"M. crystallinum\" seeds to germinate.\n", "BULLET::::- The lake is Monomictic Mixing type and develops thermal stratification in March to November. Maximum depth of the Thermocline is . 
Hypolimnion temperature ranges from to .\n\nBULLET::::- pH value varied from a maximum of 8.8 on the surface to a minimum of 7.7 at depth in year over the 12 months period\n\nBULLET::::- DO [mg l-1] value varied from a maximum of 10.4 on the surface to a minimum of 2.2 at the bottom in year over the 12 months period\n", "BULLET::::- Soils in groundwater discharge areas\n\nThe manifestation of dryland salinity is largely a problem of groundwater – however the accumulation of salt within the soil and at the surface due to proximity to or saturation by saline groundwater causes changes to the soil’s chemistry, structure and stability, and the plant life that it supports.\n\nBULLET::::- Managing soils for dryland salinity in catchments\n", "The world production of sodium sulfate, almost exclusively in the form of the decahydrate amounts to approximately 5.5 to 6 million tonnes annually (Mt/a). In 1985, production was 4.5 Mt/a, half from natural sources, and half from chemical production. After 2000, at a stable level until 2006, natural production had increased to 4 Mt/a, and chemical production decreased to 1.5 to 2 Mt/a, with a total of 5.5 to 6 Mt/a. For all applications, naturally produced and chemically produced sodium sulfate are practically interchangeable.\n\nSection::::Production.:Natural sources.\n", "In highway de-icing, salt has been associated with corrosion of bridge decks, motor vehicles, reinforcement bar and wire, and unprotected steel structures used in road construction. Surface runoff, vehicle spraying, and windblown actions also affect soil, roadside vegetation, and local surface water and groundwater supplies. Although evidence of environmental loading of salt has been found during peak usage, the spring rains and thaws usually dilute the concentrations of sodium in the area where salt was applied. A 2009 study found that approximately 70% of the road salt being applied in the Minneapolis-St Paul metro area is retained in the local watershed.\n", "There are limitations in the use of cooling crystallization:\n\nBULLET::::- Many solutes precipitate in hydrate form at low temperatures: in the previous example this is acceptable, and even useful, but it may be detrimental when, for example, the mass of water of hydration to reach a stable hydrate crystallization form is more than the available water: a single block of hydrate solute will be formed – this occurs in the case of calcium chloride);\n", "Extended exposure to high sodium levels results in a decrease in the amount of water retained and able to flow through soil, as well as a decrease in decomposition rates (this leaves the soil infertile and prohibit any future growth). This issue is prominent in Australia, where 1/3 of the land is affected by high levels of salt. It is a natural occurrence, but farming practices such as overgrazing and cultivation have contributed to the rise of it. The options for managing sodic soils are very limited; one must either change the plants or change the soil. The latter is the more difficult process. If changing the soil, one must add calcium to absorb the excess sodium that blocks water flow.\n", "Cyclic salt\n\nCyclic salt is salt that is carried by the wind when it comes in contact with breaking waves. It is estimated that more than 300 million tons of cyclic salt is deposited on the Earth's surface each year, and it is considered to be a significant factor in the chlorine content of the Earth's river water. 
In general, cyclic salt deposits are lower at sites further inland and are most abundant along the shoreline, although this pattern varies depending on the given environmental conditions.\n", "There is some concern that areas of the Moss have been damaged or partly destroyed as a result of agricultural activity, including the digging of drainage ditches. Natural England, the agency responsible for the management of SSSIs in England, has suggested that no further irrigation channels should be dug, existing ones should not be deepened, and that the area should not be exposed to fertilisers or surface water run-off. Natural England further notes that the site's continued status as an SSSI depends on maintaining a high water table.\n\nSection::::Geography and farming.:Climate.\n", "For practical reasons salinity is usually related to the sum of masses of a subset of these dissolved chemical constituents (so-called \"solution salinity\"), rather than to the unknown mass of salts that gave rise to this composition (an exception is when artificial seawater is created). For many purposes this sum can be limited to a set of eight major ions in natural waters, although for seawater at highest precision an additional seven minor ions are also included. The major ions dominate the inorganic composition of most (but by no means all) natural waters. Exceptions include some pit lakes and waters from some hydrothermal springs.\n", "Visitors learn the physical and geological characteristics of the salt bed in Kansas. With a focus on the Permian Period (the sixth and last period of the Paleozoic Era), the gallery illustrates the animals that lived during this time and why there are no fossil records in the salt bed. Throughout the mine, water bubbles can be found trapped in some of the salt. These fluid inclusions are believed to have occurred during the Permian Period.\n", "With the zeroing of the budget for Yucca Mountain nuclear waste repository in Nevada, more nuclear waste is being loaded into sealed metal casks filled with inert gas. Many of these casks will be stored in coastal or lakeside regions where a salt air environment exists, and the Massachusetts Institute of Technology is studying how such dry casks perform in salt environments. Some hope that the casks can be used for 100 years, but cracking related to corrosion could occur in 30 years or less.\n", "BULLET::::3. Cratonic basins – Within continental boundaries, salt deposition can occur anywhere that bodies of water can collect. Even away from ocean sources, water is capable of dissolving and carrying ions that can later precipitate as salts, and when the water evaporates, the salts are left behind. Examples of these basins are the South Oman Salt Basin and the Michigan Basin. In the past, there was a great shallow sea covering most of the Great Plains region of the United States; when this sea dried up, it created the Strataca deposit now mined in Kansas, among others.\n\nSection::::Salt.:Characteristics.\n", "BULLET::::3. even though the precision of the predictions for the future may still not be very high, a lot is gained when the trend is sufficiently clear; for example, it need not be a major constraint to design appropriate soil salinity control measures when a certain salinity level, predicted by Saltmod to occur after 20 years, will in reality occur after 15 or 25 years.\n\nSection::::Principles.:Hydrological data.\n" ]
[ "salt expires in a year.", "Salt crystals expire within a year." ]
[ "Salt does not degrade, the manufacturer just puts a date on the package.", "Salt expiration dates for quality control and not actual salt decay." ]
[ "false presupposition" ]
[ "salt expires in a year.", "Salt crystals expire within a year." ]
[ "false presupposition", "false presupposition" ]
[ "Salt does not degrade, the manufacturer just puts a date on the package.", "Salt expiration dates for quality control and not actual salt decay." ]
2018-06977
A few billion dollars saved Europe after WWII (Marshall Plan). Hundreds of billions of dollars (aid) have had no effect in Africa.
* Marshall Plan: ~$110 billion in 2016 US dollars * Foreign aid to all of Africa from OECD countries: ~$60 billion per annum in 2016 dollars (at least in recent years, probably much less earlier). Headline: Most people who work on these kinds of problems would argue that it's an issue of institutions. People like Bruce Bueno de Mesquita would argue that many African countries over the last 70 years have had governments that were, in effect, not incentivized to invest in their populace, whereas governments in Europe tend to be. People like James Robinson would argue that, specifically, institutions in many African countries are weak and the issue isn't a need for resources alone but a need for a stronger framework (i.e. stronger property rights, rule of law, etc.). More broadly it's important to note: * Africa is much, much bigger than France, Italy, and West Germany, which got pretty much all of the Marshall Plan money. * The story of African development is hugely varied. Some countries have done extremely well over the last 70 years. The story of Africa is definitely not one of stagnation in the vast majority of countries. Most have grown tremendously since 2000, to say nothing of the change since 1940-1960, when many achieved independence.
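The scale point in the comment can be made concrete with a rough per-person, per-year comparison. The population figures below are approximations supplied for illustration (roughly 270 million people across the Marshall Plan recipient countries around 1950, roughly 1.2 billion in Africa in 2016); they are not from the comment or the passages, and the result is a back-of-envelope sketch only.

    # Rough per-person, per-year comparison of the two aid flows.
    marshall_total_usd = 110e9      # ~4 years of Marshall Plan transfers, 2016 dollars
    marshall_years = 4
    europe_population = 270e6       # assumed: recipient countries, ~1950

    africa_aid_per_year_usd = 60e9  # recent annual OECD aid to Africa, 2016 dollars
    africa_population = 1.2e9       # assumed: Africa, ~2016

    marshall_rate = marshall_total_usd / marshall_years / europe_population
    africa_rate = africa_aid_per_year_usd / africa_population

    print(f"Marshall Plan: ~${marshall_rate:.0f} per person per year")  # ~$102
    print(f"Africa aid:    ~${africa_rate:.0f} per person per year")    # ~$50

On this crude view the Marshall Plan was only about twice as intensive per person per year, which supports the comment's point that the headline totals mostly reflect how much larger the recipient population is, leaving the institutional arguments to carry the explanatory weight.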
[ "\"within the framework of FIDES very large sums were granted to French-speaking Africa. In face of the immense needs, however, they seemed quite modest. The aid could in fact have been increased many times without a corresponding tax pressure, had France had the courage politically to decolonize more rapidly. Forty-six percent of the FIDES grants, particularly in the first four-year plan, were used to build roads, ports and airports. These were indispensable to open up the countries, but could have been achieved at less cost.\"\n\nPaul Nugent (2004)\n", "In 2005, the United States Office of Management and Budget rated the U.S. African Development Foundation's programs fully effective under its Performance Assessment Rating Tool program, an efficiency recognition that has been accorded to less than ten percent of United States Government grant-making programs. The U.S. African Development Foundation receives most of its programming resources from the United States government, but the U.S. African Development Foundation has a unique co-funding platform with host African governments, private U.S. corporations, and donors.\n", "Dambisa Moyo, author and economist, discusses the counterintuitive negative impact associated with African Aid in the book \"Dead Aid\". Moyo \"describes the state of postwar development policy in Africa today and unflinchingly confronts one of the greatest myths of our time: that billions of dollars in aid sent from wealthy countries to developing African nations has helped to reduce poverty and increase growth.\"\n", "Beginning with the first development aid project in Togo in 1977, the Institute for International Contact and Co-operation has steadily expanded its geographical and conceptual framework. Project work in Africa still constitutes one of the main focal points today. In the Hanns Seidel Foundation’s concept of development policy, strengthening underlying social factors is regarded as just as important as promoting social political structures. This means improving, strengthening and utilizing human capacities, taking into account the social, political, cultural and economic conditions of the country in question. Promoting a sense of democratic community while preserving traditions which deserve to be preserved are among the principles of the Hanns Seidel Foundation’s development co-operation. All projects are designed in such a way that the countries or partner organizations can take them over themselves in the course of time.\n", "While critics of foreign humanitarianism in Africa argue it to be rooted in ethnocentrism, others argue that foreign aid is crucial for Africa to begin to sustain itself. That the act of financially aiding a country in need could not be detrimental. The work of foreign nations would allow the continent to eventually sustain itself and “feed itself”—addressing issues of extreme poverty in the nation. While this may be reflective of \"The White Man's Burden,\" it is argued that Western nations are more developed scientifically and could then produce resources to assist the continent. Some argue that Africa’s current state is due to the shadows of imperialism and colonialism, and thus it is up to foreign nations to assist the continent.\n", "BULLET::::2. Ideological obstacles that have been biased by two decades of failed economic policy reform and in turn, create a hostile environment for reform.\n\nBULLET::::3. 
Low state capacity that reinforces, and is in turn reinforced by, the neopatrimonial tendencies of the state.\n\nVan de Walle later argues that these state-generated factors that have obstructed the effective implementation of economic policy reform are further exacerbated by foreign aid. Aid, therefore, makes policy reform less likely, rather than more likely. Van de Walle posits that international aid has sustained economic stagnation in Africa by:\n", "Enterprise expansion grants provide assistance for cooperatives, farmer associations, community groups, enterprises, or businesses that have developed a plan for expansion and linkages to new markets or follow-on financing. The grants can amount to up to $250,000 and can last for up to five years. Applications for grants are reviewed by the U.S. African Development Foundation Country Program Coordinator of that country. If a business or organization receives funding from the U.S. African Development Foundation, it is eligible to work with technical partner organizations that will help them turn their plan into a reality by providing technical, governance and financial support or assistance.\n", "In Africa, it is argued that in order to meet the MDGs by 2015, infrastructure investments would need to reach about 15% of GDP (around $93 billion a year). Currently, the source of financing varies significantly across sectors. Some sectors are dominated by state spending, others by overseas development aid (ODA) and yet others by private investors. In Sub-Saharan Africa, the state spends around $9.4 billion out of a total of $24.9 billion. In irrigation, SSA states represent almost all spending; in transport and energy a majority of investment is state spending; in ICT and water supply and sanitation, the private sector represents the majority of capital expenditure. Overall, aid, the private sector and non-OECD financiers between them exceed state spending. Private sector spending alone equals state capital expenditure, though the majority is focused on ICT infrastructure investments. External financing increased from $7 billion (2002) to $27 billion (2009). China, in particular, has emerged as an important investor.\n", "Although aid has had some negative effects on the growth and development of most African countries, research shows that development aid, in particular, actually does have a strong and favorable effect on economic growth and development. Development aid has a positive effect on growth because it may actually promote long-term economic growth and development through promoting investments in infrastructure and human capital. More evidence suggests that aid has indeed had a positive effect on economic growth and development in most African countries. According to a study conducted among 36 sub-Saharan African countries in 2013, 27 out of these 36 countries have experienced strong and favorable effects of aid on GDP and investments, which is contrary to the belief that aid is ineffective and does not lead to economic development in most African countries.\n", "Projects currently being carried out by Self Help Africa include a cassava development project in Kenya that is backed by the European Union, and a multi-year local development project in Northern Zambia that is supported by Irish Aid. 
In late 2018, Self Help Africa received support of €12m from the EU to implement a multi-year farmer training project in Malawi, and an enterprise development value-chain project in Kenya that will support the creation of up to 50 new agri-enterprise businesses - projects that will provide markets for up to 100,000 rural poor farming families. This project will disburse grants through a 'challenge fund' that is being implemented in conjunction with the South Africa-based Imani Development. Support of €2.5m for the work was received from Slovak Aid in 2018.\n", "Dambisa Moyo devotes a whole section of her book, \"Dead Aid\", to rethinking the aid dependency model. She cautions that although “weaning governments off aid won’t be easy”, it is necessary. Primary among her prescriptions is a “capital solution” whereby African countries must enter the bond market to raise their capital for development; the interconnectedness that globalization has provided will turn other “pools of money toward African markets in form of mutual funds, hedge funds, pension schemes” etc.\n\nAlthough a bleak picture is painted of aid, with it comes room for new solutions and new ways of thinking about development.\n", "Summing up the experience of African countries both at the national and at the regional levels, it is no exaggeration to suggest that, on balance, foreign assistance, especially foreign capitalism, has been somewhat deleterious to African development. It must be admitted, however, that the pattern of development is complex and the effect upon it of foreign assistance is still not clearly determined. But the limited evidence available suggests that the forms in which foreign resources have been extended to Africa over the past twenty-five years, insofar as they are concerned with economic development, are, to a great extent, counterproductive.\n", "In 2015, Self Help Africa secured a number of major new grants to support its work, including $750,000 from The Bill & Melinda Gates Foundation for a development project in West Africa, and funding from the European Union for work providing training and developing enterprise opportunities for rural youth in Uganda.\n", "Section::::Operations.:Programs.:Operational Assistance Grants.\n\nOperational assistance grants offer funding for preexisting cooperatives, farmer associations, community groups, enterprises, or businesses that plan to engage in technical, managerial, and organizational improvements. The grants can be up to $100,000 over two years.\n\nSection::::Operations.:Programs.:Capacity Building.\n", "It has been argued that much government-to-government aid was ineffective because it was merely a way to support strategically important leaders (Alesina and Dollar, 2000). A good example of this is the former dictator of Zaire, Mobutu Sese Seko, who lost support from the West after the Cold War had ended. Mobutu, at the time of his death, had a sufficient personal fortune (particularly in Swiss banks) to pay off the entire external debt of Zaire.\n", "The U.S. African Development Foundation apportions significant amounts of money for organizations that deliver technical, organizational, and managerial services to businesses and organizations that receive funding from the U.S. African Development Foundation. The U.S. African Development Foundation believes that grant recipients will better utilize funding if they have the best technical, organizational, managerial, and leadership skills possible. 
Organizations that help build capacity in local communities, or other local organizations, are also eligible for grants from the U.S. African Development Foundation. The U.S. African Development Foundation believes these organizations help ensure the success of the initiatives that the U.S. African Development Foundation sponsors.\n", "Section::::History.:2000–2005.\n\nIn the year 2000, AHA was actively working in six countries with emergency relief, refugee, and health programmes. The biggest problem AHA faced was the lack of funding for the institutional structures that would make all of its goals a reality. In order to maintain its activities, AHA needed 350,000 USD.\n", "Historically, food aid is more highly correlated with excess supply in Western countries than with the needs of developing countries. Foreign aid has been an integral part of African economic development since the 1980s.\n\nThe aid model has been criticized for supplanting trade initiatives. Growing evidence shows that foreign aid has made the continent poorer. One of the biggest critics of the aid development model is Dambisa Moyo (a Zambian economist based in the US), who introduced the Dead Aid model, which highlights how foreign aid has been a deterrent to local development.\n", "In 2011, the U.S. African Development Foundation launched a food security and resilience program for pastoralist communities in the Turkana region of northern Kenya and a special food security project in the Sahel region of West Africa in 2012.\n\nIn 2016, U.S. African Development Foundation launched a program aimed at improving food security and economic livelihoods in the Bukavu region of the Democratic Republic of Congo (DRC). In Somalia, U.S. African Development Foundation began operations, through a local technical partner, in 2011 to provide vocational and job skills training to unemployed youth, since impacting over 5,000 Somali youth.\n\nSection::::Operations.:Programs.\n", "In 2016, Self Help Africa secured a number of major new grants to support its work, including grants from the \"Ethiopian Agricultural Transformation Agency\" and the Walmart Foundation. Existing projects in Benin and in Ghana, West Africa came to a close, while December 2016 marked the completion of five-year grant support from DFID under its Programme Partnership Arrangement (PPA).\n\nSection::::2017.\n", "Many have argued that PEPFAR's emphasis on direct funding from the United States to African governments (bilateral programs) has been at the expense of full commitments to multilateral programs such as the Global Fund. Reasons given for this vary, but a major criticism has been that this enables the U.S. \"to maximize its leverage with other countries through the funds available for distribution\" since the \"Global Fund and other multilateral venues do not possess the same top-down leverage as does the United States in demanding fundamental national-level reforms\". However, since the inception of PEPFAR there has been a shift away from strictly bilateral funding to more multilateral programs.\n", "Created by an Act of Congress in 1980, the African Development Foundation began program operations in 1984. It has since provided financing to more than 1,700 small enterprises and community-based organizations.\n\nThe budget of the U.S. African Development Foundation is funded through annual United States government appropriations for foreign operations. The U.S. 
African Development Foundation is governed by a board of directors that includes seven members who are nominated by the President of the United States and confirmed by the United States Senate; it is operated by a President/CEO.\n\nSection::::Operations.\n", "Since 9/11 many humanitarian organizations have aligned their aid with Western political agendas – this can be seen in Iraq and Afghanistan. This is not new, however – aid was similarly distributed during the Cold War. However, there are other organizations, such as Al-Qaeda, that in 2011 used aid to win the hearts and minds of the Somali population. It can be seen, then, that the best interests of the population receiving aid are not always the reason for the distribution of aid; it can instead be a means to influence the vulnerable. Aid should be given only on the basis of our shared humanity.\n", "The U.S. African Development Foundation measures grant success in terms of jobs created and sustained, increased income and food security levels, and improved social conditions. In 2016, the U.S. African Development Foundation was given $30 million for new project grants in 20 countries, with an active portfolio of $53 million invested in 500 enterprises. In 2016, U.S. African Development Foundation grants benefited 1,200,000 livelihoods in sub-Saharan Africa, 54% of them women.\n\nSection::::History.\n", "Section::::Effects of Foreign Aid on Developing Countries.:Effects of Foreign Aid in Africa.\n\nWhile most economists, like Jeffrey Sachs, hold the view that aid is a driver of economic growth and development, others argue that aid has rather led to increasing poverty and decreasing economic growth of poor countries. Economists like Dambisa Moyo argue that aid does not lead to development, but rather creates problems including corruption, dependency, limitations on exports and Dutch disease, which negatively affect the economic growth and development of most African countries and other poor countries across the globe.\n" ]
[ "A few billion dollars saved Europe after WWII.", "Few billion spent on Europe helped them and hundreds of billions of dollars spent on Africa have no effect." ]
[ "$110 billion in 2016 US dollars saved Europe after WWII.", "Over 100 billion was spent on Europe and only 60 billion is spent on Africa." ]
[ "false presupposition" ]
[ "A few billion dollars saved Europe after WWII.", "Few billion spent on Europe helped them and hundreds of billions of dollars spent on Africa have no effect." ]
[ "false presupposition", "false presupposition" ]
[ "$110 billion in 2016 US dollars saved Europe after WWII.", "Over 100 billion was spent on Europe and only 60 billion is spent on Africa." ]
2018-00345
If our body's natural inclination is homeostasis, then why is there an obesity epidemic?
Good question: some processes in the body lean towards a positive feedback loop; not everything is homeostatic. The issue with obesity is the perpetual metabolic adaptations that seem to lean towards fat storage, a sort of positive feedback. The irony is that regaining lost weight seems to be easier than losing it in the first place, which suggests the body can shift its default homeostatic set point rather than defending a fixed one.
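A toy numerical sketch of the difference between a fixed set point and a drifting one (all constants are invented; this illustrates the feedback idea, not actual physiology):

    # Toy model (illustrative only; all constants are made up).
    # A homeostatic variable is pulled back toward a set point; the
    # "drifting" variant slowly moves its set point toward wherever
    # the variable currently is -- a crude positive feedback.
    def simulate(drift_rate, steps=200, gain=0.1, overeat=1.0):
        weight, set_point = 70.0, 70.0          # kg, hypothetical
        for t in range(steps):
            weight += overeat if t < 50 else 0.0    # temporary overconsumption
            weight -= gain * (weight - set_point)   # negative feedback
            set_point += drift_rate * (weight - set_point)  # set-point drift
        return round(weight, 1), round(set_point, 1)

    print(simulate(drift_rate=0.0))   # pure homeostasis: returns near 70
    print(simulate(drift_rate=0.05))  # drifting set point: settles higher

With no drift, the variable returns to its original set point once the perturbation ends; with drift, the set point itself moves up and the system then defends the new, higher value, which is the "changed default" described above.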
[ "According to this new theory, homeostatic imbalance includes the 'Circle of Discontent', a system of feedback loops linking weight gain, body dissatisfaction, negative affect and over-consumption. The theory is consistent with an extensive evidence-base of cross-sectional and prospective studies. A four-armed strategy to halt the obesity epidemic consists of: (1) Putting a stop to victim-blaming, stigma and discrimination; (2) Devalorizing the thin-ideal; (3) Reducing consumption of energy-dense, low nutrient foods and drinks; (4) Improving access to plant-based diets. If fully implemented, interventions designed to restore homeostasis would halt the obesity epidemic.\n\nSection::::Consciousness research.\n", "The mechanism responsible for this reversed association is unknown, but it has been suggested that, in chronic kidney disease patients, \"The common occurrence of persistent inflammation and protein energy wasting in advanced CKD [chronic kidney disease] seems to a large extent to account for this paradoxical association between traditional risk factors and CV [cardiovascular] outcomes in this patient population.\" Other research has proposed that the paradox may be explained by adipose tissue storing lipophilic chemicals that would otherwise be toxic to the body.\n", "Consumer research provides a real-world application for neuroscience studies. Consumer studies help neuroscience to learn more about how healthy and unhealthy brain functions differ, which may assist in discovering the neural source of consumption-related dysfunctions and treat a variety of addictions. Additionally, studies are currently underway to investigate the neural mechanism of “anchoring”, which has been thought to contribute to obesity because people are more influenced by the behaviors of their peers than an internal standard. Discovering a neural source of anchoring may be the key to preventing behaviors that typically lead to obesity.\n\nSection::::Limitations.\n", "Although the negative health consequences of obesity in the general population are well supported by the available evidence, health outcomes in certain subgroups seem to be improved at an increased BMI, a phenomenon known as the obesity survival paradox. The paradox was first described in 1999 in overweight and obese people undergoing hemodialysis, and has subsequently been found in those with heart failure and peripheral artery disease (PAD).\n", "The terminology \"reverse epidemiology\" was first proposed by Kamyar Kalantar-Zadeh in the journal \"Kidney International\" in 2003 and in the \"Journal of the American College of Cardiology\" in 2004. It is a contradiction to prevailing concepts of prevention of atherosclerosis and cardiovascular disease; however, active prophylactic treatment of heart disease in otherwise healthy, asymptomatic people is and has been controversial in the medical community for several years.\n", "The main problem with this idea is the timing at which the transition is presumed to have happened, and how this would then translate into the genetic predisposition to type 2 diabetes and obesity. For example, the decline in reproductive investment in human societies (the so-called r to K shift) has occurred far too recently to have been caused by a change in genetics.\n", "While genetic influences are important to understanding obesity, they cannot explain the current dramatic increase seen within specific countries or globally. 
Though it is accepted that energy consumption in excess of energy expenditure leads to obesity on an individual basis, the cause of the shifts in these two factors on the societal scale is much debated. There are a number of theories as to the cause, but most believe it is a combination of various factors.\n", "Obesity has become a serious health risk throughout the developed world. More than 1 billion adults are overweight, and more than 300 million of them are clinically obese.\n", "Consistent with cognitive epidemiological data, numerous studies confirm that obesity is associated with cognitive deficits. Whether obesity causes cognitive deficits, or vice versa, is unclear at present.\n\nSection::::Causes.:Gut bacteria.\n", "Many attempts have been made to search for one or more genes contributing to thrift. Modern tools of genome-wide association studies have revealed many genes with small effects associated with obesity or type 2 diabetes, but all of them together explain only between 1.4 and 10% of population variance. This leaves a large gap between the pregenomic and emerging genomic estimates of heritability of obesity and Type 2 diabetes: sometimes called the 'missing heritability'. The reasons for this discrepancy are not completely understood. A likely possibility is that the missing heritability is explained by rare variants of large effect that are found only in limited populations. These would be impossible to detect by standard whole genome sequencing approaches even with hundreds of thousands of participants. The extreme endpoint of this distribution is the so-called 'monogenic' obesities, where most of the impact on body weight can be tied to a mutation in a single gene that runs in a single family. The classic example of such a genetic effect is the presence of mutations in the leptin gene.\n", "One probable methodological explanation for the obesity paradox is collider stratification bias, which commonly emerges when one restricts or stratifies on a factor (the \"collider\") that is caused by both the exposure (or its descendants) and the outcome (or its ancestors / risk factors). 
\n\nSection::::See also.\n\nBULLET::::- French paradox\n\nBULLET::::- Mediterranean diet\n\nBULLET::::- Israeli paradox\n\nBULLET::::- Diet food\n\nBULLET::::- Low birth-weight paradox (Low-birth weight babies born to smokers have a lower mortality than low-birth weight babies born to non-smokers, because other causes of low birth-weight are more harmful than smoking.)\n\nBULLET::::- Negative-calorie food\n\nBULLET::::- Olestra\n", "The obesity paradox (excluding cholesterol paradox) was first described in 1999 in overweight and obese people undergoing hemodialysis, and has subsequently been found in those with heart failure, myocardial infarction, acute coronary syndrome, chronic obstructive pulmonary disease (COPD), and in older nursing home residents.\n", "Section::::Social class.:Food deserts.\n", "However, some recent studies have not been able to confirm the claims that distance to supermarkets predicts obesity or even diet quality.\n\nSection::::Social class.:Stress.\n", "Obesity provides chronic cellular stimuli for the UPR pathway as a result of the stresses and strains placed upon the ER, and without allowing restoration to normal cellular responsiveness to insulin hormone signaling, an individual becomes very likely to develop Type 2 Diabetes.\n", "Factors of the built environment may contribute to obesity, for example via the availability of unhealthy food or the absence of green spaces, defining the term \"obesogenic environment\".\n\nSection::::Access to technology.\n", "There is debate regarding whether obesity or insulin resistance is the cause of the metabolic syndrome or if they are consequences of a more far-reaching metabolic derangement. A number of markers of systemic inflammation, including C-reactive protein, are often increased, as are fibrinogen, interleukin 6, tumor necrosis factor-alpha (TNF-α), and others. Some have pointed to a variety of causes, including increased uric acid levels caused by dietary fructose.\n", "Section::::Lifestyle-health status paradox.\n\nFor people in the United States, obesity has been a growing trend. The formation of a healthy lifestyle is a viewpoint that is generally not attributed to Americans. From such increases in weight, diabetes, asthma, and migraines have grown more common. However, offsetting this somewhat, the number of people contracting cardiovascular disease and other chronic diseases has been dropping for the age range of Americans that are at a higher likelihood of being obese. This status paradox does not correlate with the evidence that shows such rates should be increasing, not decreasing.\n\nSection::::See also.\n", "Social and environmental determinants may also induce the onset of obesity. Social class may affect individual access to proper nutritional education and may hinder an individual's ability to make healthy lifestyle choices. Additionally, samples of low-income women and children were also shown to have higher rates of obesity because of stress. Exposure to pollutants such as smoke and second-hand smoke has also shown direct correlations to obesity.\n\nSection::::Other causes of obesity.:Gut bacteria.\n", "Diabetes and prediabetes are strongly linked to obesity and overweight. Nearly 50% of people with diabetes are obese, and 90% are overweight. A chief risk factor for prediabetes is excess abdominal fat. Obesity increases one’s risk for a variety of other medical problems, including hypertension, stroke, other forms of cardiovascular disease, arthritis, and several forms of cancer. 
Obese individuals are at twice the risk of dying from any cause compared with normal-weight individuals. The prevalence of obesity and overweight has risen to epidemic proportions in the United States, where 67% of adults are overweight and, of these, approximately half are obese.\n", "Pathophysiology of obesity\n\nThere are many possible pathophysiological mechanisms involved in the development and maintenance of obesity.\n\nSection::::Research.\n", "Some research indicates a correlation between urban sprawl and obesity. Car-centric development and lack of walkability lead to less use of active modes of transportation such as utility cycling and walking, which is linked to various health issues caused by a lack of exercise.\n\nSection::::Solutions to Negative Externalities.\n\nSection::::Solutions to Negative Externalities.:Pigovian Taxes.\n", "Section::::Fast food.\n\nAs societies become increasingly reliant on energy-dense fast-food meals, the association between fast food consumption and obesity becomes more concerning. In the United States, consumption of fast food meals has tripled and calorie intake from fast food has quadrupled between 1977 and 1995. Consumption of sweetened drinks is also believed to be a major contributor to the rising rates of obesity.\n\nSection::::Portion size.\n\nThe portion size of many prepackaged and restaurant foods has increased in both the United States and Denmark since the 1970s.\n", "Phase III clinical trials for the drug against obesity were conducted in 2003 by Axokine's maker, Regeneron Pharmaceuticals, demonstrating a small positive effect in some patients, but the drug was not commercialized. A major problem with the treatment was that in nearly 70% of the subjects tested, antibodies against Axokine were produced after approximately three months of treatment. In the minority of subjects who did not develop the antibodies, weight loss averaged 12.5 pounds in one year, versus 4.5 pounds for placebo-treated subjects. In order to obtain this benefit, subjects needed to receive daily subcutaneous injections of one microgram Axokine per kilogram body weight.\n", "The thrifty phenotype hypothesis proposes that a low availability of nutrients during the prenatal stage followed by an improvement in nutritional availability in early childhood causes an increased risk of metabolic disorders, including Type II diabetes, as a result of permanent changes in the metabolic processing of glucose-insulin determined in utero. This predominantly affects poor communities, where maternal malnutrition may be rampant, in turn causing fetuses to be biologically programmed to expect sparse nutritional environments. But, once in the world, the readily accessible processed foods consumed are unable to be processed efficiently by individuals who had their metabolic systems pre-set to expect scarcity. This difference between expected nutritional deficits and actual food surplus results in obesity and eventually Type II Diabetes. Janet Rich-Edwards, an epidemiologist at Harvard Medical School, initially set out to disprove the fetal origins theory with her database of over 100,000 nurses. Instead, she found that the results hold: a strong relationship exists between low birth weight and later coronary heart disease and stroke.\n" ]
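The collider stratification bias mentioned in the passages above can be made concrete with a small simulation. Everything here is invented for illustration (the prevalences, the effect sizes, and the "hospitalized" collider); it demonstrates the mechanism, not real epidemiology:

    import random

    # Toy collider-bias demo (all probabilities invented).
    # Obesity and an unmeasured illness both raise the chance of being
    # hospitalized (the collider); only the illness strongly raises death.
    random.seed(0)
    population = []
    for _ in range(200_000):
        obese = random.random() < 0.3
        ill = random.random() < 0.1
        p_hosp = 0.05 + 0.3 * obese + 0.6 * ill      # max 0.95
        hospitalized = random.random() < p_hosp
        p_death = 0.01 + 0.02 * obese + 0.30 * ill   # illness dominates
        died = random.random() < p_death
        population.append((obese, ill, hospitalized, died))

    def death_rate(rows, obese_flag):
        rows = [r for r in rows if r[0] == obese_flag]
        return sum(r[3] for r in rows) / len(rows)

    in_hospital = [r for r in population if r[2]]
    print("whole population:  obese %.3f vs non-obese %.3f"
          % (death_rate(population, True), death_rate(population, False)))
    print("hospitalized only: obese %.3f vs non-obese %.3f"
          % (death_rate(in_hospital, True), death_rate(in_hospital, False)))

In the whole population the "obese" group dies more often, but among hospitalized patients it appears protective: conditioning on the collider makes obesity a marker for not having the far more lethal illness, which is one proposed mechanism behind the obesity paradox.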
[ "The body's natural inclination is homeostasis", "If the body's natural inclination is homeostasis, there should not be an obesity epidemic." ]
[ "Some body processes use a positive feedback loop", "Some body processes lead towards a positive feedback loop, therefore not everything is homeostatic." ]
[ "false presupposition" ]
[ "The body's natural inclination is homeostasis", "If the body's natural inclination is homeostasis, there should not be an obesity epidemic." ]
[ "false presupposition", "false presupposition" ]
[ "Some body processes use a positive feedback loop", "Some body processes lead towards a positive feedback loop, therefore not everything is homeostatic." ]
2018-02762
What is happening when my Wi-Fi/Internet just stops working for 10 minutes, and then acts like it did nothing wrong after?
There is some connectivity glitch that either resolves itself, or some tech was alerted and fixed the issue.
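One practical way to tell whether a dropout like this is your own Wi-Fi/router or your ISP is to log reachability to both during the outage. A minimal sketch (the router address and ports are placeholders; many routers don't answer on port 80, so adjust for your own setup, and stop the loop with Ctrl-C):

    import socket, time, datetime

    # Minimal connectivity logger (a sketch, not a diagnosis tool).
    # 192.168.1.1 is a placeholder for your router's address.
    TARGETS = {"router": ("192.168.1.1", 80), "internet": ("1.1.1.1", 53)}

    def reachable(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    while True:
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        status = {name: reachable(*addr) for name, addr in TARGETS.items()}
        if not all(status.values()):
            # router down -> local Wi-Fi/LAN issue; router up but
            # internet down -> upstream/ISP issue.
            print(stamp, status)
        time.sleep(30)

If "router" stays reachable while "internet" fails, the glitch is upstream of you; if both fail, the problem is your local Wi-Fi or the router itself.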
[ "The usual symptom of a failed filter is frequent DSL disconnects or slow internet speed. The usual procedure to test for failed filters is to remove all filters and all other devices and extension cables from the telephone line. Then connect the DSL modem or router directly to the main phone line socket and check to see whether the internet speed increases or disconnects reduce. If this works, one or more microfilter(s) used with the analog devices need to be replaced.\n\nSection::::See also.\n\nBULLET::::- ADSL\n\nBULLET::::- VDSL\n\nSection::::External links.\n\nBULLET::::- Photographs of the internals of ADSL microfilters\n", "wireless transmission (wireless, microwave, satellite), and/or\n\ncapacity (system limits).\n\nThe failures can occur because of\n\ndamage,\n\nfailure,\n\ndesign,\n\nprocedural (improper use by humans),\n\nengineering (how to use and deployment),\n\noverload (traffic or system resources stressed beyond designed limits),\n\nenvironment (support systems like power and HVAC),\n\nscheduled downtime (outages designed into the system for a purpose such as software upgrades and equipment growth),\n\nother (none of the above but known), or\n\nunknown.\n\nThe failures can be the responsibility of\n\ncustomer/service provider,\n\nvendor/supplier,\n\nutility,\n\ngovernment,\n\ncontractor,\n\nend customer,\n\npublic individual,\n\nact of nature,\n\nother (none of the above but known), or\n\nunknown.\n", "Create a connection to a listening socket. PSH an application payload (i.e. 'GET / HTTP/1.0'). FIN the connection and 0-window it. This attack will have very different results depending on the stack/application you are targeting. Using this against a Cisco 1700 (IOS) web server, we observed sockets left in FIN_WAIT_1 indefinitely. After enough of such sockets, the router could no longer communicate TCP correctly.\n", "DSL device to the digital subscriber line access multiplexer (DSLAM) when a power outage occurs. A DSL interface with dying gasp must derive power for a brief period from another source so that the message can be sent without external power. The dying gasp message will end the session and a new session will be able to be made as soon as power returns and the modem retrains.\n\nDying Gasp is referenced in section 7.1.2.5.3 of ITU-T Recommendation G.991.2 (12/2003) as the Power Status bit.\n\nSection::::Fibre.\n", "If a disruption in the connection between \"C\" and \"A\" occurs, the connection may be terminated as a result. This can occur either by a socket producing an error, or by excessive lag in which the far server \"A\" anticipates this case (which is called a timeout).\n", "Despite the publicity given to the 2015 leap second, Internet network failures occurred due to the vulnerability of at least one class of router. Also, interruptions of around 40 minutes duration occurred with Twitter, Instagram, Pinterest, Netflix, Amazon, and Apple's music streaming series Beats 1.\n\nSeveral versions of the Cisco Systems NEXUS 5000 Series Operating System NX-OS (versions 5.0, 5.1, 5.2) are affected.\n\nThe 2015 leap second also affected the Altea airlines reservation system used by Qantas and Virgin Australia.\n", "BULLET::::2. \"Electronic Behavior Control System\" – 4:33\n\nBULLET::::3. \"go to\" – 0:12\n\nBULLET::::4. \"Sexual Orientation\" – 3:06\n\nBULLET::::5. \"Station Identification\" – 4:40\n\nBULLET::::6. \"Get Down Ver. 2.2\" – 3:45\n\nBULLET::::7. \"Shoot the Mac-10\" – 4:03\n\nBULLET::::8. 
\"You Have 5 Seconds to Complete This Section\" – 3:06\n\nBULLET::::9. \"Super Zen State (Power Chant No.3)\" – 6:50\n\nBULLET::::10. \"State Extension\" – 1:15\n\nBULLET::::11. \"interruption\" – 0:23\n\nBULLET::::12. \"Dream Induction\" – 3:20\n\nBULLET::::13. \"transition\" – 0:06\n\nBULLET::::14. \"Electronic Behavior Control System Ver. 2.0\" – 2:24\n\nBULLET::::15. \"We Must Have the Facts\" – 3:05\n\nBULLET::::16. \"interference\" – 0:14\n\nBULLET::::17. \"3:7:8\" – 3:43\n", "Telstra's Ryde switch failed in late 2011 after water egressed into the electrical switch board from continuing wet weather. The Ryde switch is one of the largest by area switches in Australia, and affected more than 720,000 services.\n\nThe Miami datacenter of ServerAxis went offline unannounced on February 29, 2016 and was never restored. This impacted multiple providers and hundreds of websites. The outage impacted coverage of the 2016 NCAA Women's Division I Basketball Tournament as WBBState, one of the affected sites, was by far the most comprehensive provider of women's basketball statistics available.\n\nSection::::Service levels.\n", "In December 2012, a partial loss (40%) of GMail service occurred globally, for 18 minutes. This loss of service was caused by a routine update of load balancing software which contained faulty logic—in this case, the error was caused by logic using an inappropriate \"all\" instead of the more appropriate \"some\". The cascading error was fixed by fully updating a single node in the network instead of partially updating all nodes at one time.\n\nSection::::Cascading structural failure.\n", "Unscheduled pauses (typically breakdowns) – These are also similar, but the user does not know when they will happen or the duration. Initially, devices are put into a ‘stop’ condition to reduce energy consumption. Depending on duration, equipment can be switched into further energy-saving states if required.\n", "When the connection between \"A\" and \"C\" is severed, users who were connected to other servers that are no longer reachable on the network appear to quit. For example, if user \"Sara\" is connected to server \"A\", user \"Bob\" is connected to server \"B\", and user \"Joe\" is connected to \"C\", and \"C\" splits, or disconnects, from \"A\", it will appear to \"Joe\" as if \"Sara\" and \"Bob\" both quit (disconnected from the network), and it will appear to both \"Sara\" and \"Bob\" that \"Joe\" quit. \n", "Firmware for TP-Link Wi‑Fi extenders in 2016 and 2017 hardcoded five NTP servers, including Fukuoka University in Japan and the Australia and New Zealand NTP server pools, and would repeatedly issue one NTP request and five DNS requests every five seconds consuming 0.72 GB per month per device. The excessive requests were misused to power an Internet connectivity check that displayed the device's connectivity status in their web administration interface. \n", "The Subject then sends an email complaining about Praemus. He complains that the unit claims he was online for 200 hours in a week, and other various problems with his computer, such as his monitor vibrating, blasts of static electricity and his screen shutting down, meaning he has to send his email from an internet cafe. The Subject gets an email back saying that this is the first time the Praemus system has done this, and that he should try turning the computer off and on again. 
However, unknown to him, there is no one working in the Praemus office.\n", "Residential customers in areas fed by affected overhead power lines can occasionally see the effects of an autorecloser in action. If the fault affects the customer's own distribution circuit, they may see one or several brief, complete outages followed by either normal operation (as the autorecloser succeeds in restoring power after a transient fault has cleared) or a complete outage of service (as the autorecloser exhausts its retries). If the fault is on an adjacent circuit, the customer may see several brief \"dips\" (sags) in voltage as the heavy fault current flows into the adjacent circuit and is interrupted one or more times. A typical manifestation would be the dip, or intermittent black-out, of domestic lighting during an electrical storm. Autorecloser action may result in electronic devices losing time settings, losing data in volatile memory, halting, restarting, or suffering damage due to power interruption. Owners of such equipment may need to protect electronic devices against the consequences of power interruptions and also power surges.\n", "BULLET::::- When a pattern is fed from the environment (a real input), the information travels both to the early processing area and the final storage area, however the teacher nodes will inhibit the output from the final storage area\n\nBULLET::::- The new pattern is learned by the early processing area by the standard backpropagation algorithm\n\nBULLET::::- At the same time random input is also fed into the network and causes pseudopatterns to be generated by the final storage area\n", "BULLET::::- Discrimination between the studied items and previously unseen items decreased as the network learned more.\n\nThis finding contradicts with studies on human memory, which indicated that discrimination increases with learning. Ratcliff attempted to alleviate this problem by adding 'response nodes' that would selectively respond to old and new inputs. However, this method did not work as these response nodes would become active for all inputs. A model which used a context pattern also failed to increase discrimination between new and old items.\n\nSection::::Proposed solutions.\n", "While it is trivial to get a single service to become unavailable in a matter of seconds, to make an entire system become defunct can take many minutes, and in some cases hours. As a general rule, the more services a system has, the faster it will succumb to the devastating (broken TCP, system lock, reboot, etc.) effects of the attacks. Alternatively, attack amplification can be achieved by attacking from a larger number of IP addresses. We typically attack from a /29 through a /25 in our labs. Attacking from a /32 is typically less effective at causing the system wide faults.\n", "BULLET::::- Loss of integrity: How much data could be corrupted and how damaged is it? Minimal slightly corrupt data (1), minimal seriously corrupt data (3), extensive slightly corrupt data (5), extensive seriously corrupt data (7), all data totally corrupt (9)\n\nBULLET::::- Loss of availability How much service could be lost and how vital is it? Minimal secondary services interrupted (1), minimal primary services interrupted (5), extensive secondary services interrupted (5), extensive primary services interrupted (7), all services completely lost (9)\n", "At 9:15 p.m. Clementi saved a screenshot of Ravi's \"dare you to video chat me\" tweet. 
At 9:25 Ravi's computer was disconnected, according to analysis by the Rutgers IT expert, and it stayed disconnected till 11:19 p.m. At 9:33 Clementi told a friend online that he had turned off the power strip to Ravi's computer. At 10:19 Clementi's guest arrived.\n\nRavi left for ultimate frisbee practice around 8:30–9:00 p.m.\n", "BULLET::::- System downtime - Periods of system downtime may arise, either due to network-related issues, hardware failure, or loss of electricity. The inability to use electronic prescribing when the system is not accessible is of great concern, and must be addressed with the discussion of fall-back procedures and mechanisms when such situations arise.\n", "Also, related systems are affected in this case. As an example, DNS resolution might fail and what would normally cause systems to be interconnected, might break connections that are not even directly involved in the actual systems that went down. This, in turn, may cause seemingly unrelated nodes to develop problems, that can cause another cascade failure all on its own.\n", "Section::::Impact.\n\nOutages caused by system failures can have a serious impact on the users of computer/network systems, in particular those industries that rely on a nearly 24-hour service:\n\nBULLET::::- Medical informatics\n\nBULLET::::- Nuclear power and other infrastructure\n\nBULLET::::- Banks and other financial institutions\n\nBULLET::::- Aeronautics, airlines\n\nBULLET::::- News reporting\n\nBULLET::::- E-commerce and online transaction processing\n\nBULLET::::- Persistent online games\n\nAlso affected can be the users of an ISP and other customers of a telecommunication network.\n", "Network failures typically start when a single network node fails. Initially, the traffic that would normally go through the node is stopped. Systems and users get errors about not being able to reach hosts. Usually, the redundant systems of an ISP respond very quickly, choosing another path through a different backbone. The routing path through this alternative route is longer, with more hops and subsequently going through more systems that normally do not process the amount of traffic suddenly offered.\n\nThis can cause one or more systems along the alternative route to go down, creating similar problems of their own.\n", "Some support technicians refer to it as \"biological interface error\".\n\nThe networking administrators' version is referring to the cause of a problem as a \"layer 8 issue\".\n\nThe computing jargon refers to \"wetware bugs\" as the user is considered part of the system, in a hardware/software/wetware layering.\n", "While it might be technically reasonable to send a few initial packets at short intervals, it is essential for the health of any network that client connection re-attempts are generated at logarithmically or exponentially decreasing rates to prevent denial of service.\n\nThis \"in protocol\" exponential or logarithmic backdown applies to any connectionless protocol, and by extension many portions of connection-based protocols. Examples of this backing down method can be found in the TCP specification for connection establishment, zero-window probing, and keepalive transmissions.\n\nSection::::Notable cases.\n\nSection::::Notable cases.:Tardis and Trinity College, Dublin.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-01558
Why do old balloons shrivel up when you touch them?
The oil in your skin. Latex is a semi-liquid, oil-based elastomer (an elastic, stretchy polymer). When it ages, some of that stretchiness is lost and some of the liquid leaks out. The oil in your skin makes a good lubricating replacement for it: it is rapidly absorbed by the latex, while your skin absorbs some of the liquid latex left on the surface of the old balloon. This unevenness in oil content makes the balloon wrinkly.
[ "Even a perfect rubber balloon eventually loses gas to the outside. The process by which a substance or solute migrates from a region of high concentration, through a barrier or membrane, to a region of lower concentration is called diffusion. The inside of balloons can be treated with a special gel (for instance, the polymer solution sold under the \"Hi Float\" brand) which coats the inside of the balloon to reduce the helium leakage, thus increasing float time to a week or longer.\n", "Although the term \"Mullins effect\" is commonly applied to stress softening in filled rubbers, the phenomenon is common to all rubbers, including \"gums\" (rubber lacking filler). As first shown by Mullins and coworkers, the retraction stresses of an elastomer are independent of carbon black when the stress at the maximum strain is constant. Mullins softening is a viscoelastic effect, although in filled rubber there can be additional contributions to the mechanical hysteresis from filler particles debonding from each other or from the polymer chains.\n", "When rubber or plastic balloons are filled with helium so that they float, they typically retain their buoyancy for only a day or so, sometimes longer. The enclosed helium atoms escape through small pores in the latex which are larger than the helium atoms. Balloons filled with air usually hold their size and shape much longer, sometimes for up to a week.\n", "However, balloons have a certain elasticity to them that needs to be taken into account. The act of stretching a balloon fills it with potential energy. When it is released, the potential energy is converted to kinetic energy and the balloon snaps back into its original position, though perhaps a little stretched out. When a balloon is filled with air, the balloon is being stretched. While the elasticity of the balloon causes tension that would have the balloon collapse, it is also being pushed back out by the constant bouncing of the internal air molecules. The internal air has to exert force not only to counteract the external air to keep the air pressures \"even\", but it also has to counteract the natural contraction of the balloon. Therefore, it requires more air pressure (or force) than the air outside the balloon wall. Because of this, when helium balloons are left and they float higher, as atmospheric pressure decreases, the air inside it exerts more pressure than outside it so the balloon pops from tension. In some cases, the helium leaks out from pores and the balloon deflates, falling down.\n", "The parallel chains of stretched rubber are susceptible to crystallization. This takes some time because turns of twisted chains have to move out of the way of the growing crystallites. Crystallization has occurred, for example, when, after days, an inflated toy balloon is found withered at a relatively large remaining volume. Where it is touched, it shrinks because the temperature of the hand is enough to melt the crystals.\n", "where \"t\" and \"t\" refer to the initial and final thicknesses, respectively. For a balloon of radius formula_4, a fixed volume of rubber means that rt is constant, or equivalently\n\nhence\n\nand the radial force equation becomes\n\nThe equation for the tangential force \"f\" (where \"L\" formula_8 \"r\") then becomes\n\nIntegrating the internal air pressure over one hemisphere of the balloon then gives\n\nwhere \"r\" is the balloon's uninflated radius.\n\nThis equation is plotted in the figure at left. 
The internal pressure \"P\" reaches a maximum for\n", "The behavior of the balloons in the two-balloon experiment was first explained theoretically by David Merritt and Fred Weinhaus in 1978.\n\nSection::::Theoretical pressure curve.\n\nThe key to understanding the behavior of the balloons is understanding how the pressure inside a balloon varies with the balloon's diameter. The simplest way to do this is to imagine that the balloon is made up of a large number of small rubber patches, and to analyze how the size of a patch is affected by the force acting on it.\n\nThe Karan-Guth stress-strain relation for a parallelepiped of ideal rubber can be written\n", "According to The Journal of the American Medical Association, out of 373 children who died in the US between 1972 and 1992 after choking on children's products, nearly a third choked on latex balloons. The Consumer Products Safety Commission found that children had inhaled latex balloons whole (often while trying to inflate them) or choked on fragments of broken balloons. \"Parents\", a monthly magazine about raising children, advised parents to buy Mylar balloons instead of latex balloons.\n\nSection::::History.\n", "There has been some environmental concern over metallised nylon balloons, as they do not biodegrade or shred as rubber balloons do. Release of these types of balloons into the atmosphere is considered harmful to the environment. This type of balloon can also conduct electricity on its surface and released foil balloons can become entangled in power lines and cause power outages.\n", "BULLET::::- Val Andrews, in \"Manual of Balloon Modeling, Vol. 1, An Encyclopedic Series\", credits H.J. Bonnert of Scranton, PA as being the \"daddy of them all.\"\n\nBULLET::::- John Shirley, in the preface to \"One balloon animals; the rubber jungle / Roger's rubber jungle\" by Roger Siegel, writes \"probably began sometime around 1920 but did not become popular until the advent of the skinny balloons after World War II...The inventor of the one balloon animal is unknown, but his origination opened the door to a new art.\"\n", "At small strains, elastin confers stiffness to the tissue and stores most of the strain energy. The collagen fibers are comparatively inextensible and are usually loose (wavy, crimped). With increasing tissue deformation the collagen is gradually stretched in the direction of deformation. When taut, these fibers produce a strong growth in tissue stiffness. The composite behavior is analogous to a nylon stocking, whose rubber band does the role of elastin as the nylon does the role of collagen. In soft tissues, the collagen limits the deformation and protects the tissues from injury.\n", "On the figure (a), there is only concave upward Considere plot. It indicates that there is no yield drop so the material will be suffered from fracture before it yields. On the figure (b), there is specific point where the tangent matches with secant line at point where formula_32. After this value, the slope becomes smaller than the secant line where necking starts to appear. On the figure (c), there is point where yielding starts to appear but when formula_33, the drawing happens. After drawing, all the material will stretch and eventually show fracture. Between formula_34 and formula_35, the material itself does not stretch but rather, only the neck starts to stretch out.\n", "Soap films have uniform stress in every direction and require a closed boundary to form. 
They naturally form a minimal surface—the form with minimal area and embodying minimal energy. They are, however, very difficult to measure. For a large film, its weight can seriously affect its form.\n\nFor a membrane with curvature in two directions, the basic equation of equilibrium is:\n\n$\dfrac{t_1}{R_1} + \dfrac{t_2}{R_2} = p$\n\nwhere:\n\nBULLET::::- $R_1$ and $R_2$ are the principal radii of curvature for soap films or the directions of the warp and weft for fabrics\n\nBULLET::::- $t_1$ and $t_2$ are the tensions in the relevant directions\n\nBULLET::::- $p$ is the net pressure load acting normal to the membrane\n", "Section::::Early materials.:Polyester resin.\n", "In LaTeX and Plain TeX, \thinspace produces a narrow, non-breaking space. Outside of math formulae in LaTeX, \, also produces a narrow, non-breaking space, but inside math formulas it produces a narrow, breakable space.\n", "At higher altitudes, the air pressure is lower and therefore the pressure inside the balloon is also lower. This means that while the mass of lifting gas and mass of displaced air for a given lift are the same as at lower altitude, the volume of the balloon is much greater at higher altitudes.\n\nA balloon that is designed to lift to extreme heights (stratosphere) must be able to expand enormously in order to displace the required amount of air. That is why such balloons seem almost empty at launch, as can be seen in the photo.\n", "As a result of this distinction, these classes of gels demonstrate different elasticity as calculated by the elastic modulus, a mathematical model for predicting the elasticity of different materials under different stressors. The shear modulus (G) of a \"strong\" gel exhibits a smaller dissipation of energy than \"weak\" gels, and the \"strong\" gel's G-values plateau for longer periods of time. Furthermore, rheological properties of different gels can occasionally be used to compare naturally occurring biopolymer gels with synthetic LMOGs.\n\nSection::::Interactions of gel and solvent.\n", "There are two types of light-gas balloons: zero-pressure and superpressure. Zero-pressure balloons are the traditional form of light-gas balloon. They are partially inflated with the light gas before launch, with the gas pressure the same both inside and outside the balloon. As the zero-pressure balloon rises, its gas expands to maintain the zero pressure difference, and the balloon's envelope swells.\n", "Latex is practically a neutral substance, with a pH of 7.0 to 7.2. However, when it is exposed to the air for 12 to 24 hours, its pH falls and it spontaneously coagulates to form a solid mass of rubber.\n", "Section::::Modern Materials.:Fiberglass.\n", "Skin is a soft tissue and exhibits key mechanical behaviors of these tissues. The most pronounced feature is the J-curve stress-strain response, in which a region of large strain and minimal stress exists and corresponds to the microstructural straightening and reorientation of collagen fibrils. In some cases the intact skin is prestretched, like wetsuits around the diver's body, and in other cases the intact skin is under compression. Small circular holes punched on the skin may widen or close into ellipses, or shrink and remain circular, depending on preexisting stresses.\n\nSection::::Functions.:Aging.\n", 
Small, round objects including nuts, hard candy, popcorn kernels, beans, and berries are common causes of foreign body aspiration. Latex balloons are also a serious choking hazard in children that can result in death. A latex balloon will conform to the shape of the trachea, blocking the airway and making it difficult to expel with the Heimlich maneuver.\n", "and drops to zero as \"r\" increases. This behavior is well known to anyone who has blown up a balloon: a large force is required at the start, but after the balloon expands (to a radius larger than \"r\"), less force is needed for continued inflation.\n\nSection::::Why does the larger balloon expand?\n", "Their data, plotted as the graph in Fig. 5, shows the position of end and midpoint styli as the sample rapidly retracts to its original length. The sample was initially stretched 9.5” beyond its unstrained length and then released. The styli returned to their original positions (displacement of 0”) in a little over 6 ms. The linear behavior of the displacement vs. time indicates that, after a brief acceleration, both the end and the midpoints of the sample snapped back at a constant velocity of about 50 m/s or 112 mph.\n", "The balloon help concept has since been adopted as an optional alternative to tooltips in later versions of Microsoft Windows, such as , which uses balloons to highlight and explain aspects of various programs or operating system features (Balloons in msdn). Balloon help is also highly visible in the Squeak Smalltalk environment, in the Enlightenment window manager, and in the AmigaOS's MUI.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-00202
how are nutritional values measured? How do companies obtain the nutritional values that are printed on the packaging?
My English isn't perfect, sorry in advance. For kJ or kcal, they use a bomb calorimeter: basically a chamber where they burn the food and measure exactly how much heat this generates. They have done this for almost every food source. Add in the fact that not every food source is 100% digestible (the Atwater factors: fats 92%, proteins 94%, carbohydrates 88%), and you get charts where you can look up how many kJ or kcal everything has. Then you calculate how much of every ingredient is in a given food and add it up. Other nutritional values, such as the amount of fat or protein, can be determined through chemistry (not my area of expertise, so just a summary): mash up the food, extract the proteins/fat/whatever you want to know with a chemical reaction, and calculate. Like kcal, this has been done for a lot of foods, and again you can look it up in charts. Almost every country has a food composition database (I think); for example, many dietitians in Belgium use Nubel, and in the US I found the [USDA food composition database]( URL_1 ). How companies obtain the nutritional values may be a bit different; I don't know anything about that. Hope this is a bit clear :) Edit: found this: URL_0
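The "calculate and add up" step the comment describes is simple enough to sketch. This uses the general Atwater factors (protein 4 kcal/g, carbohydrate 4 kcal/g, fat 9 kcal/g); the ingredient composition numbers below are made up for illustration, and a real label would pull them from a food composition database like the USDA one mentioned above:

    # Rough label math using the general Atwater factors
    # (4 kcal/g protein, 4 kcal/g carbohydrate, 9 kcal/g fat).
    # Ingredient values are invented; real products use a food
    # composition database (e.g. the USDA one).
    ATWATER = {"protein": 4.0, "carb": 4.0, "fat": 9.0}  # kcal per gram

    recipe = {  # macro grams per 100 g of ingredient, plus grams used
        "oats":  {"protein": 13.0, "carb": 68.0, "fat": 7.0, "grams": 60},
        "honey": {"protein": 0.3,  "carb": 82.0, "fat": 0.0, "grams": 40},
    }

    totals = {"protein": 0.0, "carb": 0.0, "fat": 0.0}
    batch_grams = sum(item["grams"] for item in recipe.values())
    for item in recipe.values():
        scale = item["grams"] / 100.0      # composition is per 100 g
        for macro in totals:
            totals[macro] += item[macro] * scale

    kcal = sum(totals[m] * ATWATER[m] for m in totals)
    print(f"per {batch_grams} g: {kcal:.0f} kcal ({kcal * 4.184:.0f} kJ), "
          + ", ".join(f"{m} {g:.1f} g" for m, g in totals.items()))

Summing each ingredient's macros and multiplying by the factors is essentially what the printed label reflects, with digestibility corrections folded into the factors themselves.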
[ "Nutritional rating systems\n\nNutritional rating systems are methods of ranking or rating food products or food categories to communicate the nutritional value of food in a simplified manner to a target audience. Rating systems are developed by governments, nonprofit organizations, or private institutions and companies.\n\nThe methods may use point systems to rank or rate foods for general nutritional value or they may rate specific food attributes such as cholesterol content. Graphics or other symbols may be used to communicate the ratings to the target audience.\n", "Nutritional value\n\nNutritional value or nutritive value as part of food quality is the measure of a well-balanced ratio of the essential nutrients carbohydrates, fat, protein, minerals, and vitamins in items of food or diet in relation to the nutrient requirements of their consumer. Several nutritional rating systems and nutrition facts label have been invented to be able to rank food in terms of its nutritional value, but absolute scales are open for debate and tend to ignore particular consumer needs.\n", "Another approach commonly used by FCD compilers is to ‘borrow’ or ‘adopt’ nutrient values that were originally generated by another organisation. Possible sources for borrowed data: are FCD from other countries, nutrient analyses from scientific literature or manufacturers’ data (e.g. from food labels). Compilers will need to evaluate the data in terms of both data quality and applicability of foods before incorporating it from any of these sources into their FCDBs. For example, fortification values can differ between countries so a fortified breakfast cereal for one country’s FCD might not be appropriate for another country.\n\nSection::::Data evaluation and quality.\n", "BULLET::::- ready-to-eat green vegetables: 0.33 to 3.11\n\nBULLET::::- ready-to-eat starchy tubers: 0.87 to 6.17\n\nBULLET::::- boiled Black Beans: 9\n\nBULLET::::- boiled chia seeds: 16\n\nBULLET::::- high scores: home-prepared potato pancakes 6.17; French fries 3.18-4.03\n\nBULLET::::- average scores: baked potato 2.5; boiled yam 1.49\n\nBULLET::::- low scores: boiled sweet potato 1.6\n\nBULLET::::- Legumes:\n\nBULLET::::- dry roasted soybeans: 13\n\nBULLET::::- boiled lentils: 9\n\nBULLET::::- boiled Green Peas: 5\n\nBULLET::::- boiled Black eyed beans: 8\n\nBULLET::::- boiled chickpeas: 9\n\nBULLET::::- peanuts (raw, roasted, butter): 23.68 to 28.04\n\nBULLET::::- Baked products:\n\nBULLET::::- Wholewheat pancakes:\n\nBULLET::::- Bread: 6.7 to 11.4\n\nBULLET::::- Crackers: 7.43\n", "This lists the percentage of each of the nutrients in the food. The minimum percent of crude protein and crude fat, the maximum percent of crude fiber, and moisture are always required. Note that \"crude\" refers to the analysis method, rather than the quality of the nutrient.\n\n5. Ingredient Statement:\n\nIngredients must be listed in order of predominance by weight, on an \"as formulated basis\". The ingredient that makes up the highest percentage of the total weight as it goes into the product is listed first.\n\n6. Nutritional Adequacy Statement:\n", "BULLET::::- Estimating values from other sources, including manufacturers food labels, scientific literature and FCDBs from other countries.\n\nSection::::Food composition dataset.:History.\n\nSome of the earliest work related to detecting adulterated foods and finding the active components of medicinal herbs.\n", "With certain exceptions, such as foods meant for babies, the following Daily Values are used. 
These are called Reference Daily Intake (RDI) values and were originally based on the highest 1968 Recommended Dietary Allowances (RDA) for each nutrient in order to assure that the needs of all age and sex combinations were met. These are older than the current Recommended Dietary Allowances of the Dietary Reference Intake. For vitamin C, vitamin D, vitamin E, vitamin K, calcium, phosphorus, magnesium, and manganese, the current highest RDAs are up to 50% higher than the older Daily Values used in labeling, whereas for other nutrients the recommended needs have gone down. A side-by-side table of the old and new adult Daily Values is provided at Reference Daily Intake. As of October 2010, the only micronutrients that are required to be included on all labels are vitamin A, vitamin C, calcium, and iron. To determine the nutrient levels in the foods, companies may develop or use databases, and these may be submitted voluntarily to the U.S. Food and Drug Administration for review.\n", "Section::::Chemical analysis.\n", "Nutrition analysis\n\nNutrition analysis refers to the process of determining the nutritional content of foods and food products. The process can be performed through a variety of certified methods.\n\nSection::::Methods.\n\nSection::::Methods.:Laboratory analysis.\n\nTraditionally, food companies would send food samples to laboratories for physical testing. \n\nTypical analysis includes:\n\nMoisture (water) by loss of mass at 102 °C,\n\nProtein by analysis of total nitrogen, either by Dumas or Kjeldahl methods,\n\nTotal fat, traditionally by a solvent extraction, but often now by secondary methods such as NMR,\n\nCrude ash (total inorganic matter) by combustion at 550C,\n", "Section::::China.\n\nIn 2011 the Chinese Ministry of Health released the National Food Safety Standard for Nutrition Labeling of Prepackaged Foods (GB 28050-2011). The core nutrients that must be on a label are: protein, fat, carbohydrate and sodium. Energy is noted in kJ. And all values must be per 100g/100ml.,\n\nSection::::European Union.\n", "In the UK, FCD are listed in tables known as The Chemical Composition of Foods, McCance and Widdowson (1940). FCDBs have become available online on the internet, for example, the USDA Dataset in the States, the Japanese food composition dataset and a number of European food composition datasets. Foods from these national FCDBs can be identified by International Food Code (IFC).\n", "The label begins with a standard serving measurement, calories are listed second, and then following is a breakdown of the constituent elements. Always listed are total fat, sodium, carbohydrates and protein; the other nutrients usually shown may be suppressed, if they are zero. Usually all 15 nutrients are shown: calories, calories from fat, fat, saturated fat, trans fat, cholesterol, sodium, carbohydrates, dietary fiber, sugars, protein, vitamin A, vitamin C, calcium, and iron.\n", "Nutrition facts label\n\nThe nutrition facts label (also known as the nutrition information panel, and other slight variations) is a label required on most packaged food in many countries, showing what nutrients (to limit and get enough of) are in the food. Labels are usually based on official nutritional rating systems. Most countries also release overall nutrition guides for general educational purposes. 
In some cases, the guides are based on different dietary targets for various nutrients than the labels on specific foods.\n", "The Ministry of Health and Family Welfare had, on September 19, 2008, notified the Prevention of Food Adulteration (5th Amendment) Rules, 2008, mandating packaged food manufacturers to declare on their product labels nutritional information and a mark from the F.P.O or Agmark (Companies that are responsible for checking food products) to enable consumers make informed choices while purchasing. Prior to this amendment, disclosure of nutritional information was largely voluntary though many large manufacturers tend to adopt the international practice.\n\nSection::::Mexico.\n", "In the United States, the Nutritional Facts label lists the percentage supplied that is recommended to be met, or to be limited, in one day of human nutrients based on a daily diet of 2,000 calories.\n", "The first UK tables, known as McCance and Widdowson’s The Composition of Foods, were published in 1940. The Food and Agriculture Organization (FAO) published tables for international use and initially intended these for the assessment of food availability at the global level.\n\nA list of International FCDBs can be found on the National Food Institute - Technical University of Denmark's (DTU) website.\n\nSection::::Documentation.\n", "In the European Union, along the \"old\" rules (Directive 90/496, amended), the information (usually in panel format) is most often labelled \"Nutrition Information\" (or equivalent in other EU languages). An example is shown on the right. The panel is optional, but if provided, the prescribed content and format must be followed. It will always give values for a set quantity — or of the product — and often also for a defined \"serving\", as an option. First will come the energy values, in both kilocalories and kilojoules.\n", "In addition to the nutrition label, products may display certain nutrition information or health claims on packaging. These health claims are only allowed by the FDA for \"eight diet and health relationships based on proven scientific evidence\", including: calcium and osteoporosis, fiber-containing grain products, fruits and vegetables and cancer, fruits, vegetables, and grain products that contain fiber—particularly soluble fiber—and the risk of coronary heart disease, fat and cancer, saturated fat and cholesterol and coronary heart disease, sodium and hypertension, and folate and neural tube defects. The Institute of Medicine recommended these labels contain the most useful nutritional information for consumers: saturated fats, trans fats, sodium, calories, and serving size. In January 2011, food manufacturers and grocery stores announced plans to display some of this nutrition information on processed food.\n", "In Canada, a standardized \"Nutrition Facts\" label was introduced as part of regulations passed in 2003, and became mandatory for most prepackaged food products on December 12, 2005. (Smaller businesses were given until December 12, 2007 to make the information available.). In accordance with food packaging laws in the country, all information, including the nutrition label, must be written in both English and French, the country's two official languages.\n", "Table of food nutrients\n\nThe tables below include tabular lists for selected basic foods, compiled from United States Dept. of Agriculture (USDA) sources. 
Included for each food is its weight in grams, its calories, and (also in grams,) the amount of protein, carbohydrates, dietary fiber, fat, and saturated fat. As foods vary by brands and stores, the figures should only be considered estimates, with more exact figures often included on product labels. For precise details about vitamins and mineral contents, the USDA source can be used.\n", "The evidence-based proprietary algorithm is based on the dietary guidelines and recommendations of regulatory and health organizations including the US Food and Drug Administration, the US Department of Agriculture, and the World Health Organization. It was developed by a scientific advisory panel composed of experts in nutrition and health from Dartmouth College, Harvard University, Tufts University, University of North Carolina and other colleges.\n\nSection::::Systems in use today.:Nutripoints.\n", "Then will come a breakdown of constituent elements: usually most or all of protein, carbohydrate, starch, sugar, fat, fibre and sodium. The \"fat\" figure is likely to be further broken down into saturated and unsaturated fat, while the \"carbohydrate\" figure is likely to give a subtotal for sugars. With the \"new\" rules, the mandatory information is: energy, fat, saturates, carbohydrates, sugars, protein and salt, in that particular order, with options to extend this list to: mono-unsaturates, polyunsaturates, polyols, starch, fibre, and vitamins and minerals.\n", "Specific measures for materials and articles such as ceramics, regenerated cellulose, plastics, gaskets and active and intelligent materials, and substances such as vinyl chloride, N-nitrosamines and N-nitrostable substances in rubber, and epoxy derivatives, exist.\n\nEU No 10/2011 is applicable regulation for all the Food contact material or Article made up of Plastics.\n\nSection::::Legislation.:United States.\n\nThe U.S. Food and Drug Administration (FDA) considers three different types of food additives:\n\nBULLET::::- Direct food additives - components added directly to the food\n", "BULLET::::- 21 CFR 171 Food additive petitions\n\nBULLET::::- 21 CFR 172 Food additives permitted for direct addition to food for human consumption\n\nBULLET::::- 21 CFR 173 Secondary direct food additives permitted in food for human consumption\n\nBULLET::::- 21 CFR 178 Indirect food additives: Adjuvants, production aids, and sanitizers\n\nBULLET::::- 21 CFR 180 Food additives permitted in food or in contact with food on an interim basis pending additional study\n\nPolymers or additives can also be regulated in other ways with exemptions; for example:\n\nBULLET::::- Threshold of regulation\n\nBULLET::::- Food contact notification\n\nBULLET::::- Private letters\n\nBULLET::::- Prior sanctioned food ingredient\n", "Food Balance Sheet\n\nFood balance sheet shows a brief picture of the pattern of the food supply of a country. For each food item, it sketches the primary commodity availability for human consumption i.e. the sources of supply and its utilization in terms of nutrient value.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-02552
How exactly does the hardware of a computer work?
To explain it fully would take, like, a college degree's worth of explaining. At the core of how all the parts work are the flow of electricity and logic gates. These are little physical bits in those small chips. They take in voltage through two little wires and have a third leading out. Depending on which kind of gate it is and which inputs are active, the third wire will or won't pass on the current according to rules. These rules correspond with Boolean logic (binary logic) rules, and clever combinations of those gates basically make a computer. An example is an OR gate: if either or both of the incoming wires has voltage, the gate passes on the voltage. An AND gate would only do so if *both* incoming wires had voltage. What we think of as addition can be represented with combinations of these gates, same with every other math operation. Memory and disk storage just record data; printers, speakers, and monitors just report data to the user; but under all that, it's lots and lots of little logic gates, and clever programming.
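As a rough illustration of "clever combinations of those gates basically make a computer", here is a minimal Python sketch that models each gate as a tiny function and wires them into a 4-bit adder. This simulates the logic only, not the electronics; the function names and the least-significant-bit-first ordering are just choices made for this example.

```python
# Gates as tiny functions; an adder as a wiring of gates.
def AND(a: int, b: int) -> int: return a & b
def OR(a: int, b: int) -> int:  return a | b
def XOR(a: int, b: int) -> int: return a ^ b

def half_adder(a, b):
    """Adds two 1-bit inputs; returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

def full_adder(a, b, carry_in):
    """Adds two 1-bit inputs plus a carry, built only from the gates above."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

def add_4bit(a_bits, b_bits):
    """Ripple-carry addition of two 4-bit numbers (least significant bit first)."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 3 (binary 0011) + 6 (binary 0110) = 9 (binary 1001); bits listed LSB first.
bits, carry = add_4bit([1, 1, 0, 0], [0, 1, 1, 0])
print(bits, carry)  # [1, 0, 0, 1] 0  -> binary 1001 = 9
```

Real chips do exactly this wiring in silicon, with transistors implementing each gate and billions of them implementing the rest of the machine.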
[ "BULLET::::- Supercomputer\n\nBULLET::::- Tablet computer\n\nSection::::Hardware.\n\nThe term \"hardware\" covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and \"mice\" input devices are all hardware.\n\nSection::::Hardware.:Other hardware topics.\n\nA general purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.\n", "BULLET::::- Computer monitor\n\nBULLET::::- Printer\n\nBULLET::::- PC speaker\n\nBULLET::::- Projector\n\nBULLET::::- Sound card\n\nBULLET::::- Video card\n\nSection::::Hardware.:Control unit.\n\nThe control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer. Control systems in advanced computers may change the order of execution of some instructions to improve performance.\n", "Components directly attached to or to part of the motherboard include:\n", "Hardware is typically directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system, although other systems exist with only hardware components.\n\nSection::::Von Neumann architecture.\n", "Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a \"1\", and when off it represents a \"0\" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.\n\nSection::::Hardware.:Input devices.\n", "I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry.\n\nSection::::Hardware.:Multitasking.\n", "\"Hardware\" is also an expression used within the computer engineering industry to explicitly distinguish the (electronic computer) hardware from the \"software\" that runs on it. But \"hardware,\" within the automation and software engineering disciplines, need not simply be a computer of some sort. A modern automobile runs vastly more \"software\" than the Apollo spacecraft. Also, modern aircraft cannot function without running tens of millions of computer instructions embedded and distributed throughout the aircraft and resident in both standard computer hardware and in specialized hardward components such as IC wired logic gates, analog and hybrid devices, and other digital components. 
The need to effectively model how separate physical components combine to form complex systems is important over a wide range of applications, including computers, personal digital assistants (PDAs), cell phones, surgical instrumentation, satellites, and submarines.\n", "Section::::History.:Analog computers.\n", "Section::::The main classes of video hardware.\n\nThere are two main categories of solutions for a home computer to generate a video signal:\n\nBULLET::::- A custom design, either built from discrete logic chips or based around some kind of custom logic chips (an ASIC or PLD).\n\nBULLET::::- A system using some form of video display controller (VDC), a VLSI chip that contained most of the logic circuitry needed to generate the video signal\n", "Section::::Scientific background.:Biomechanical computers.\n", "A mainframe computer is a much larger computer that typically fills a room and may cost many hundreds or thousands of times as much as a personal computer. They are designed to perform large numbers of calculations for governments and large enterprises.\n\nSection::::Types of computer systems.:Departmental computing.\n\nIn the 1960s and 1970s, more and more departments started to use cheaper and dedicated systems for specific purposes like process control and laboratory automation.\n\nSection::::Types of computer systems.:Supercomputer.\n", "When unprocessed data is sent to the computer with the help of input devices, the data is processed and sent to output devices. The input devices may be hand-operated or automated. The act of processing is mainly regulated by the CPU. Some examples of input devices are:\n\nBULLET::::- Computer keyboard\n\nBULLET::::- Digital camera\n\nBULLET::::- Digital video\n\nBULLET::::- Graphics tablet\n\nBULLET::::- Image scanner\n\nBULLET::::- Joystick\n\nBULLET::::- Microphone\n\nBULLET::::- Mouse\n\nBULLET::::- Overlay keyboard\n\nBULLET::::- Real-time clock\n\nBULLET::::- Trackball\n\nBULLET::::- Touchscreen\n\nSection::::Hardware.:Output devices.\n\nThe means through which computer gives output are known as output devices. Some examples of output devices are:\n", "The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer. They trigger sequences of simple actions on the executing machine. Those actions produce effects according to the semantics of the instructions.\n\nSection::::Computer.:Computer software and hardware.\n", "BULLET::::- Computer – is a device that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Modern computers have the ability to follow generalized sets of operations, called \"programs.\" These programs enable computers to perform an extremely wide range of tasks. A \"complete\" computer including the hardware, the operating system (main software), and peripheral equipment required and used for \"full\" operation can be referred to as a computer system. This term may as well be used for a group of computers that are connected and work together, in particular a computer network or computer cluster.\n", "Section::::Transistor research.\n", "Hardware is also an expression used within the computer engineering industry to explicitly distinguish the (electronic computer) hardware from the software that runs on it. But hardware, within the automation and software engineering disciplines, need not simply be a computer of some sort. 
A modern automobile runs vastly more software than the Apollo spacecraft. Also, modern aircraft cannot function without running tens of millions of computer instructions embedded and distributed throughout the aircraft and resident in both standard computer hardware and in specialized hardware components such as IC wired logic gates, analog and hybrid devices, and other digital components. The need to effectively model how separate physical components combine to form complex systems is important over a wide range of applications, including computers, personal digital assistants (PDAs), cell phones, surgical instrumentation, satellites, and submarines.\n", "Section::::Unconventional computers.\n\nA computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word \"computer\" is synonymous with a personal electronic computer, the modern definition of a computer is literally: \"\"A device that computes\", especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information.\" Any device which \"processes information\" qualifies as a computer, especially if the processing is purposeful.\n\nSection::::Future.\n", "Section::::Design.:Register transfer systems.\n\nMany digital systems are data flow machines. These are usually designed using synchronous register transfer logic, using hardware description languages such as VHDL or Verilog.\n", "Section::::Recording.\n", "BULLET::::- Physical Implementation draws physical circuits. The different circuit components are placed in a chip floorplan or on a board and the wires connecting them are created.\n", "Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joystick, etc.), output devices (monitor screens, printers, etc.), and input/output devices that perform both functions (e.g., the 2000s-era touchscreen). Peripheral devices allow information to be retrieved from an external source and they enable the result of operations to be saved and retrieved.\n\nSection::::Etymology.\n", "A programmable hardware artifact, or machine, that lacks its computer program is impotent; even as a software artifact, or program, is equally impotent unless it can be used to alter the sequential states of a suitable (hardware) machine. However, a hardware machine and its programming can be designed to perform an almost illimitable number of abstract and physical tasks. Within the computer and software engineering disciplines (and, often, other engineering disciplines, such as communications), then, the terms hardware, software, and system came to distinguish between the hardware that runs a computer program, the software, and the hardware device complete with its program.\n", "Section::::Hardware.\n\nComputer hardware is a comprehensive term for all physical parts of a computer, as distinguished from the data it contains or operates on, and the software that provides instructions for the hardware to accomplish tasks. \n\nSome sub-systems of a personal computer may contain processors that run a fixed program, or firmware, such as a keyboard controller. 
Firmware usually is not changed by the end user of the personal computer. \n", "The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.\n", "Following Babbage, although unaware of his earlier work, was Percy Ludgate, an accountant from Dublin, Ireland. He independently designed a programmable mechanical computer, which he described in a work that was published in 1909.\n\nSection::::Analog computers.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-09776
How do consultancies like McKinsey & Company determine the value of certain markets?
This is very tricky to do--and usually the numbers are way off from what ends up happening. The basic process is to look at what goods or services comprise the market (in this case it is all sorts of connected devices, and hubs like Alexa). Then, look at how many units people are willing to buy at a given price (the demand side). They then determine how much the goods and services would have to cost for suppliers to make x amount of them (the supply side). Once you know that, you can do some basic algebra to determine about how many units will be produced and what the price is. If you know how many units will be produced and what the price is, you just multiply price * quantity and this gives you the size of the market. Obviously all this is imprecise.
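Here is a minimal sketch of the "basic algebra" step, assuming simple linear demand and supply curves; every number in it is hypothetical and chosen only for illustration, not taken from any actual consultancy model.

```python
# Solve for the equilibrium of linear demand and supply, then compute
# market size as price * quantity at that point.

def market_size(demand_intercept, demand_slope, supply_intercept, supply_slope):
    """Equilibrium (price, quantity, revenue) for linear curves:
       Qd = demand_intercept - demand_slope * P
       Qs = supply_intercept + supply_slope * P
    Setting Qd = Qs and solving for P gives the equilibrium price."""
    price = (demand_intercept - supply_intercept) / (demand_slope + supply_slope)
    quantity = demand_intercept - demand_slope * price
    return price, quantity, price * quantity

# Hypothetical smart-speaker market: Qd = 120M - 0.5M*P, Qs = -30M + 1.0M*P
p, q, revenue = market_size(120e6, 0.5e6, -30e6, 1.0e6)
print(f"price ${p:.0f}, {q / 1e6:.0f}M units, market ~${revenue / 1e9:.1f}B")
# price $100, 70M units, market ~$7.0B
```

In practice the hard part is not the algebra but estimating those curves, which is exactly why the published market-size figures are usually way off.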
[ "In the United States, there exist a set of merger guidelines—written by the Antitrust Division of the Department of Justice (DOJ) and the Federal Trade Commission (FTC)—which specify methods for analyzing and defining markets. Since 1980, the DOJ and the FTC have used these guidelines to convince courts to adopt a more explicitly economic approach to antitrust policy.\n\nA relevant market comprises a product or group of products and the geographic area in which these products are produced and/or traded. Therefore, the relevant market has two components: the product market and the geographic market.\n\nSection::::Definition and use.:Product market.\n", "BULLET::::- Level Two: This is \"valuation\" based on \"market observables\". FASB acknowledged that active markets for identical assets and liabilities are relatively uncommon and, even when they do exist, they may be too thin to provide reliable information. To deal with this shortage of direct data, the board provided a second level of inputs that can be applied in three situations: The first involves less-active markets for identical assets and liabilities; this category is ranked lower because the market consensus about value may not be strong. The second arises when the owned assets and owed liabilities are similar to, but not the same as, those traded in a market. In this case, the reporting company has to make some assumptions about what the fair value of the reported items might be in a market. The third situation exists when no active or less-active markets exist for similar assets and liabilities, but some observable market data is sufficiently applicable to the reported items to allow the fair values to be estimated.\n", "Market-based valuation\n\nA Market-based valuation is a form of stock valuation that refers to market indicators, also called extrinsic criteria (i.e. not related to economic fundamentals and account data, which are intrinsic criteria).\n\nSection::::Examples of market valuation methods.\n\nSection::::Examples of market valuation methods.:Technical analysis.\n\nTechnical analysis is the most characteristic market-based method, although it focuses more on timing than pricing.\n", "Recently, Mocciaro Li Destri, Picone & Minà (2012) have underscored the subtle but important difference between the firms’ capacity to create value through correct operational choices and valid strategies, on the one hand, and the epiphenomenal manifestation of variations in stockholder value on the financial markets (notably on stock markets). In this perspective, they suggest to implement new methodologies able to bring strategy back into financial performance measures.\n", "BULLET::::- The top-down investor starts their analysis with global economics, including both international and national economic indicators. These may include GDP growth rates, inflation, interest rates, exchange rates, productivity, and energy prices. They subsequently narrow their search to regional/ industry analysis of total sales, price levels, the effects of competing products, foreign competition, and entry or exit from the industry. 
Only then do they refine their search to the best business in the area being studied.\n\nBULLET::::- The bottom-up investor starts with specific businesses, regardless of their industry/region, and proceeds in reverse of the top-down approach.\n\nSection::::Procedures.\n", "In order to measure market orientation, the two most widely used scales are MARKOR and\n\nMKTOR \n\nThe mktor scale is a 15-item, 7-point Likert-type scale, with all points specified. In this measure, market orientation is conceptualised as a one-dimensional construct, with three components, namely: customer orientation, competitor orientation, and interfunctional coordination. The simple average of the scores of the three components is the market orientation score.\n", "The geographic market is an area in which the conditions of competition applying to the product concerned are the same for all traders. The same factors used in delineating relevant product markets should be used to define the relevant geographic market.\n\nThe elements to be taken into consideration when defining the relevant geographic market include the nature and characteristics of the concerned products, the existence of entry barriers, consumer preferences, differences among the market shares of undertakings in the neighboring geographic areas, as well as significant differences between suppliers’ prices and transport costs level.\n", "Also, rough market comparison tools such as the PE ratio and the PEG ratio are used. More sophisticated forms of analysis (fundamental analysis, quantitative analysis, and behavioral analysis) use also some market criteria, such as the risk premium or beta coefficient.\n\nThose criteria might be \"tilted\" in some valuation models in anticipation of their possible variation in the next future, or to adapt them to their historical statistical range or mean.\n\nSection::::See also.\n\nBULLET::::- List of valuation topics\n\nBULLET::::- Price discovery\n\nBULLET::::- Valuation using multiples\n\nBULLET::::- Valuation using discounted cash flows\n\nBULLET::::- Valuation using the Market Penetration Model\n\nSection::::External links.\n", "Section::::Environmental scanning.:Macro environment.\n\nThere are a number of common approaches how the external factors, which are mentioned in the definition of Kroon and which describe the macro environment, can be identified and examined. These factors indirectly affect the organization but cannot be controlled by it. One approach could be the PEST analysis. PEST stands for political, economic, social and technological. Two more factors, the environmental and legal factor, are defined within the PESTEL analysis (or PESTLE analysis).\n", "BULLET::::2. A relevant geographic market comprises the area in which the firms concerned are involved in the supply of products or services and in which the conditions of competition are sufficiently homogeneous.\n\nSection::::Definition and use.\n", "When looking at the weaknesses of the organization's placing in the current business environment a formal environmental scanning is used. A common formal environmental scanning process has five steps. The five steps are fundamental in the achievement of each step and may develop each other in some form:\n", "BULLET::::- political stability\n\nSection::::Environmental scanning.:PESTEL analysis.:Economic factors.\n", "Market analysis strives to determine the attractiveness of a market, currently and in the future. 
Organizations evaluate future attractiveness of a market by understanding evolving opportunities, and threats as they relate to that organization's own strengths and weaknesses.\n\nOrganizations use these findings to guide the investment decisions they make to advance their success. The findings of a market analysis may motivate an organization to change various aspects of its investment strategy. Affected areas may include inventory levels, a work force expansion/contraction, facility expansion, purchases of capital equipment, and promotional activities.\n\nSection::::Elements.\n\nSection::::Elements.:Market size.\n", "Market access is supported by qualitative and quantitative evidence of value, with value determined or assured through health economics and outcomes research (HEOR), comparative-effectiveness research (CER), patient-reported outcomes (PROs), evidence-based medicine (EBM), real-world data (RWD), real-world evidence (RWE), and longitudinal real-world results (RWR). Health technology assessment (HTA) organizations like the Institute for Clinical and Economic Review (ICER) contribute to the evidence base. Product, service, and solution developers may compile global value dossiers (GVDs) to support market access. The intellectual, political, and economic interplay of academics, health and human services (HHS) professionals, connected professional communities (e.g., health economics professionals), and trade associations (e.g., the [https://www.phrma.org Pharmaceutical Research and Manufacturers of America [PhRMA]]) may help set the standards for value assessments.\n", "Typically, any study that claims to test the relationship between price and the level of market concentration is also (jointly, that is, simultaneously) testing whether the market definition (according to which market concentration is being calculated) is \"relevant\"; that is, whether the boundaries of each market is not being determined either too narrowly or too broadly so as to make the defined \"market\" meaningless from the point of the competitive interactions of the firms that it includes (or is made of).\n\nSection::::Alternative definition.\n", "In 1982 the U.S. Department of Justice Merger Guidelines introduced the SSNIP test as a new method for defining markets and for measuring market power directly. In the EU it was used for the first time in the \"Nestlé/Perrier\" case in 1992 and has been officially recognized by the European Commission in its \"Commission's Notice for the Definition of the Relevant Market\" in 1997.\n", "BULLET::::- Level One: The preferred inputs to valuation efforts are “quoted prices in active markets for identical assets or liabilities,” with the caveat that the reporting entity must have access to that market. An example would be a stock trade on the New York Stock Exchange. Information at this level is based on direct observations of transactions involving the identical assets or liabilities being valued, not assumptions, and thus offers superior reliability. However, relatively few items, especially physical assets, actually trade in active markets. If available, a quoted market price in an active market for identical assets or liabilities should be used. To use this level, the entity must have access to an active market for the item being valued. In many circumstances, quoted market prices are unavailable. If a quoted market price is not available, preparers should make an estimate of fair value using the best information available in the circumstances. 
The resulting fair value estimate would then be classified in Level Two or Level Three.\n", "All IGM poll questions are phrased in the form of a statement, to which participants can choose options from a Likert scale: Strongly Agree, Agree, Uncertain, Disagree, Strongly Disagree. Participants must also indicate a confidence level in their response (on a scale of 10). They may additionally providing a free-form comment to explain their selection. \n", "The concept is still not widely spread in the real-estate business, as it has only really been used from 2010 and onwards. From 2010 to 2014 there are some Lead User companies who have used this for their corporate real estate divestments. One example is A.P. Moller Maersk Group, who successfully have done this in North America, Latin America, and Europe.\n\nSection::::The Concept.\n\nBuyers of real-estate are often presented with a situation where multiple buyers are interested in the same property. \n\nThis creates uncertainties that can be split into:\n\nBULLET::::- Are there really other buyers?\n", "Section::::Real estate.\n\nThe term is commonly used in real estate appraisal, since real estate markets are generally considered both informationally and transactionally inefficient. Also, real estate markets are subject to prolonged periods of disequilibrium, such as in contamination situations or other market disruptions.\n\nAppraisals are usually performed under some set of assumptions about transactional markets, and those assumptions are captured in the definition of value used for the appraisal. Commonly, the definition set forth for U.S. federally regulated lending institutions is used, although other definitions may also be used under some circumstances:\n", "A market opportunity for a product or a service, based on either one technology or several, fulfills the need(s) of a (preferably increasing) market better than the competition and better than substitution-technologies within the given environmental frame (e.g. society, politics, legislation, etc.).\n\nSection::::Elements.:Market profitability.\n\nWhile different organizations in a market will have different levels of profitability, they are all similar to different market conditions. Michael Porter devised a useful framework for evaluating the attractiveness of an industry or market. This framework, known as Porter five forces analysis, identifies five factors that influence the market profitability:\n\nBULLET::::- Buyer power\n", "BULLET::::- Using the CBOT Market Profile to Improve Performance\n\nBULLET::::- The Profile: The link Between CBOT Data and the Market\n\nBULLET::::- Part I What the Market is Doing: The Market Profile Graphic\n\nBULLET::::- Part II The Condition of the Market: Liquidity Data Bank\n\nBULLET::::- Appendix\n\nIn 1987, Professor Thomas P. Drinka of Western Illinois University launched the first Market Profile® course in academia. As of 2010, Western remains as the premiere and only academic institution to offer such a course as part of curriculum.\n", "Except for David A. Aaker's 7 main dimension of a market analysis including market size, market growth rate, market profitability, industry cost structure, distribution channel, market trends, and key success factor, there is another analysis of dimension market analysis. Based on Christina Callaway, dimension of market analysis can be divided into four parts which is environmental analysis, competitive analysis, target audience analysis, and SWOT analysis. 
Market analysis helps a company to illustrate current trends in the market that may affect profitability (Christina, n.d.). At the same time, market analysis also serves to determine the attractiveness of the market. A good market analysis can improve the accuracy of an organization's investment decisions; based on the market's attractiveness, the organization can change its investment tactics.\n", "Another measure of concentration is the Herfindahl-Hirschman Index (HHI) which is calculated by \"summing the squares of the percentage market shares of all participants in the market\". The HHI index for perfect competition is zero; for monopoly, 10,000.\n\nU.S. courts almost never consider a firm to possess market power if it has a market share of less than 50 percent.\n\nSection::::Elasticity of demand.\n", "Section::::Cycles within Cycles.\n\nWithin long-term cycles, millions of cycles of smaller degrees (fractals/recursions) are constantly unfolding. Capitulation within capitulation marks ever-smaller data components of markets such that tick data and monthly data exhibit similar capitulation characteristics.\n\nSection::::Pain Thresholds and Their Dynamic Nature.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-06484
What happens when corporations buy back their own stocks? Why is this done?
The stock becomes Treasury stock, which means it isn't in circulation. It has the result of un-diluting other shareholders, because that stock basically doesn't exist anymore. It is a way to use cash on hand to increase shareholder value (especially if a company thinks its stock is undervalued).
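A minimal sketch of the un-dilution arithmetic, with made-up numbers: the company's earnings don't change, but they are now spread over fewer shares, so earnings per share (EPS) and each remaining holder's ownership percentage both rise.

```python
# Illustrative buyback arithmetic; all figures are hypothetical.

earnings = 500_000_000.0   # annual earnings, unchanged by the buyback
shares_out = 100_000_000   # shares outstanding before the buyback
bought_back = 10_000_000   # shares repurchased into treasury
my_shares = 1_000_000      # one shareholder's unchanged position

for label, shares in [("before", shares_out), ("after", shares_out - bought_back)]:
    eps = earnings / shares
    ownership = my_shares / shares * 100
    print(f"{label}: EPS ${eps:.2f}, my stake {ownership:.2f}%")

# before: EPS $5.00, my stake 1.00%
# after:  EPS $5.56, my stake 1.11%
```

This is the mechanical effect only; whether the buyback actually creates value depends on whether the shares were bought below their fair value, as the passages below discuss.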
[ "A company may also buy back shares held by or for employees or salaried directors of the company or a related company. This type of buyback, referred to as an \"employee share scheme buyback\", requires an ordinary resolution. A listed company may also buy back its shares in on-market trading on the stock exchange, following the passing of an ordinary resolution if over the 10/12 limit. The stock exchange's rules apply to \"on-market buybacks\". A listed company may also buy unmarketable parcels of shares from shareholders (called a \"minimum holding buyback\"). This does not require a resolution but the purchased shares must still be canceled.\n", "One other reason for a company to buy back its own stock is to reward holders of stock options. Call option holders are hurt by dividend payments, since, typically, they are not eligible to receive them. A share buyback program \"may\" increase the value of remaining shares (if the buyback is executed when shares are under-priced); if so, call option holders benefit. A dividend payment short term \"always\" decreases the value of shares after the payment, so, for stocks with regularly scheduled dividends, on the day shares go ex-dividend, call option holders \"always\" lose whereas put option holders benefit. This does not apply to unscheduled (special) dividends since the strike prices of options are typically adjusted to reflect the amount of the special dividend. Finally, if the sellers into a corporate buyback are actually the call option holders themselves, they may directly benefit from temporary unrealistically favorable pricing.\n", "Section::::Buying back shares.:After buyback.\n\nThe company can either retire (cancel) the shares (however, retired shares are not listed as treasury stock on the company's financial statements) or hold the shares for later resale. Buying back stock reduces the number of outstanding shares. Accompanying the decrease in the number of shares outstanding is a reduction in company assets, in particular, cash assets, which are used to buy back shares.\n\nSection::::Accounting for treasury stock.\n\nOn the balance sheet, treasury stock is listed under shareholders' equity as a negative number. The accounts may be called \"Treasury stock\" or \"equity reduction\".\n", "If the market is not efficient, the company's shares may be underpriced. In that case a company can benefit its other shareholders by buying back shares. If a company's shares are overpriced, then a company is actually hurting its remaining shareholders by buying back stock.\n\nSection::::Buying back shares.:Incentives.\n", "A reverse stock split may be used to reduce the number of shareholders. If a company completes a reverse split in which 1 new share is issued for every 100 old shares, any investor holding fewer than 100 shares would simply receive a cash payment. If the number of shareholders drops, the company may be placed into a different regulatory category and may be governed by different law—for example, in the U.S., whether a company is regulated by the SEC depends in part on the number of shareholders.\n", "In an efficient market, a company buying back its stock should have no effect on its price per share valuation. If the market fairly prices a company's shares at $50/share, and the company buys back 100 shares for $5,000, it now has $5,000 less cash but there are 100 fewer shares outstanding; the net effect should be that the underlying value of each share is unchanged. 
Additionally, buying back shares will improve price/earnings ratios due to the reduced number of shares (and unchanged earnings) and improve earnings per share ratios due to fewer shares outstanding (and unchanged earnings).\n", "Section::::History.:Share Buy-back.\n\nIn September 2008, the company's Board of Directors authorized a plan for the repurchase of up to 2.1 million of the company’s shares, in an amount of up to US$2.8 million. As of December 31, 2008, the Company had repurchased 2.1 million shares at a total purchase price of approximately US$1.6 million.\n", "Common stock grants are similar in function but the mechanism is different. An employee, typically a company founder, purchases stock in the company at nominal price shortly after the company is formed. The company retains a repurchase right to buy the stock back at the same price should the employee leave. The repurchase right diminishes over time so that the company eventually has no right to repurchase the stock (in other words, the stock becomes fully vested).\n", "After a sale is identified as a wash sale and if the replacement stock is bought within 30 days before or after the sale then the wash sale loss is added to the basis of the replacement stock. The basis adjustment is important as it preserves the benefit of the disallowed loss; the holder receives that benefit on a future sale of the replacement stock. However, if the replacement shares are in a tax-advantaged account, such as an IRA, the disallowed loss cannot be added to the basis and there is no benefit for the loss.\n", "If the new shares are issued for proceeds at least equal to the pre-existing price of a share, then there is no negative dilution in the amount recoverable. The old owners just own a smaller piece of a bigger company. However, voting rights at stockholder meetings are decreased.\n\nBut, if new shares are issued for proceeds below or equal to the pre-existing price of a share, then stock holders have the opportunity to maintain their current voting power without losing net worth.\n\nSection::::Value dilution.:Market value of the business.\n", "From time to time, companies will issue a reverse split concurrently with a forward split, making a reverse/forward split. Note that in reverse/forward splits, the shareholder's old shares are erased, as they receive a number of new shares in proportion to their original holdings. By contrast, in a simple stock split, the original shares remain on the exchange as shareholders receive additional shares based on their existing holdings. In both stock splits and reverse splits, the share price is adjusted in proportion to the increase in shares to maintain equal value.\n", "There is a stigma attached to doing a reverse stock split, as it underscores the fact that shares have declined in value, so it is not common and may take a shareholder or board meeting for consent. Many institutional investors and mutual funds, for example, have rules against purchasing a stock whose price is below some minimum, perhaps US$5. 
A common reason for a reverse stock split is to satisfy a stock exchange's minimum share price.\n", "In August 2012, the company authorized the re-activation of the 2009 plan allowing for the repurchase of the company's ordinary shares in the open market in an amount in cash of up to $1.8 million.\n\nSection::::History.:Dividend.\n", "In February 2009, the Board of Directors authorized additional repurchases of the company’s shares in the total amount of US$1.2 million pursuant to the 2008 repurchase plan. As of December 31, 2009, the Company had purchased an aggregate amount of 3,165,092 shares at a total purchase price of US$2.8 million.\n\nIn November 2009, the Board of Directors authorized a new plan for the repurchase of the company’s shares, in an amount in cash of up to US$1.8 million. As of December 31, 2011, no repurchases had been made under this new plan and it became non-active.\n", "The Securities and Exchange Board of India mandates that only private companies can choose this method of issuing securities.\n\nSection::::Features.\n\nBULLET::::- Parties – There are three parties involved in a bought out deal; the promoters of the company, sponsors & co-sponsors who are generally merchant bankers and investors\n\nBULLET::::- Outright sale – There is an outright sale of a chunk of equity shares to a single sponsor or a lead sponsor\n\nBULLET::::- Syndicate – The sponsor forms a syndicate for management of resources required & distribution of risk\n", "Delisting refers to the practice of removing the stock of a company from a stock exchange so that investors can no longer trade shares of the stock on that exchange. This typically occurs when a company goes out of business, declares bankruptcy, no longer satisfies the listing rules of the stock exchange, or has become a private company after a merger or acquisition, or wants to reduce regulatory reporting complexities and overhead, or if the stock volumes on the exchange from which it wishes to delist are not significant. Delisting does not necessarily mean a change in company's core strategy. \n", "Companies typically have two uses for profits. Firstly, some part of profits can be distributed to shareholders in the form of dividends or stock repurchases. The remainder, termed \"retained earnings\", are kept inside the company and used for investing in the future of the company, if profitable ventures for reinvestment of retained earnings can be identified. However, sometimes companies may find that some or all of their retained earnings cannot be reinvested to produce acceptable returns.\n", "Section::::Limitations of treasury stock.\n\nBULLET::::- Treasury stock is not entitled to receive a dividend\n\nBULLET::::- Treasury stock has no voting rights\n\nBULLET::::- Total treasury stock can not exceed the maximum proportion of total capitalization specified by law in the relevant country\n\nWhen shares are repurchased, they may either be canceled or held for reissue. If not canceled, such shares are referred to as treasury shares. Technically, a repurchased share is a company's own share that has been bought back after having been issued and fully paid.\n", "Share repurchase\n\nShare repurchase (or stock buyback) is the re-acquisition by a company of its own stock. 
It represents a more flexible way (relative to dividends) of returning money to shareholders.\n\nIn most countries, a corporation can repurchase its own stock by distributing cash to existing shareholders in exchange for a fraction of the company's outstanding equity; that is, cash is exchanged for a reduction in the number of shares outstanding. The company either retires the repurchased shares or keeps them as treasury stock, available for re-issuance.\n", "In Virginia, ademption occurs with respect to most forms of property, but if the property at issue is stock certificates, then the buyout of the issuer of the stock by another company, and the swapping of the stocks for a new issue by that company, will not adeem the gift of stock. Similarly, if the shares of stock that existed at the time the gift was made have split (for example, where the holder of 500 shares receives a reissue of 1,000 shares each having half the value of the original), then the beneficiary of that gift will be entitled to the number of shares that exist \"after\" the split.\n", "Sometimes, shares are allocated in exchange for non-cash consideration, most commonly when corporation A acquires corporation B for shares (new shares issued by corporation A). Here the share capital is increased to the par value of the new shares, and the merger reserve is increased to the balance of the price of corporation B.\n", "On August 20, 2002, KBF Pollution Management, INC., a recycling services provider, reported that it would repurchase stock from its current shareholders. KBF planned to fund its Share Repurchase Program though initiation of service for many new generators, some of which are Fortune 500 companies, and expectations of third quarter revenues exceeding second quarter revenues by 30%. Under KBF’s Share Repurchase Plan, KBF stock can be purchased by block purchase from time to time as long as it is in compliance with SEC’s Rule 10b-18, subject to market conditions, meets legal requirements, and other factors. The repurchased shares are held in KBF’s treasury where they are either inactive or applied to corporate use.\n", "Another common way for accounting for treasury stock is the par value method. In the par value method, when the stock is purchased back from the market, the books will reflect the action as a retirement of the shares. Therefore, common stock is debited and treasury stock is credited. However, when the treasury stock is resold back to the market the entry in the books will be the same as the cost method.\n", "Stocks whose market value and/or turnover fall below critical levels may be delisted by the exchange. Delisting often arises from a merger or takeover, or the company going private.\n\nSection::::Requirements.\n", "The Company in early 2011 approved the voluntary suspension of its duty to file reports with the SEC and the voluntary deregistration of its common stock. The Company was eligible to suspend its reporting obligations and deregister its common stock because there were fewer than 300 holders of record of the Company’s common stock. This decision resulted in expense savings and reduced reporting burdens.\n" ]
[]
[]
[ "normal" ]
[ "When corporations buy back their own stock, it belongs to the corporation." ]
[ "false presupposition", "normal" ]
[ "The stock becomes Treasury stock which means it isn't in circulation." ]
2018-22786
How can a baseball travel faster than the directional component of the bat that just hit it, but a toy car (for example) does not travel faster than my hand that just rolled it?
The weight of the bat combined with the speed. The ball *rebounds* off a much heavier object that is itself moving, so in the collision it can leave faster than the bat was swinging; a toy car starts at rest in your hand and is only pushed along, so it can at most match your hand's speed. That's why following through on your swing is critical.
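One way to see this is the textbook one-dimensional elastic-collision formula, sketched below in Python. The masses and speeds are illustrative, and a real bat-ball collision is partly inelastic (its coefficient of restitution is well under 1), so the number is an upper bound -- but it shows the ball leaving faster than the bat ever moved, while a toy car rolled from rest has nothing to rebound off.

```python
# Upper-bound estimate of ball exit speed from a perfectly elastic 1-D collision.
# Masses and velocities are illustrative, not measured values.

def ball_exit_speed(m_ball, v_ball, m_bat, v_bat):
    """Post-collision ball velocity for a perfectly elastic 1-D collision."""
    return ((m_ball - m_bat) * v_ball + 2 * m_bat * v_bat) / (m_ball + m_bat)

m_ball, m_bat = 0.145, 0.90   # kg: baseball vs. a typical bat
v_ball, v_bat = -40.0, 30.0   # m/s: pitch moving toward the bat, bat swinging forward

print(f"{ball_exit_speed(m_ball, v_ball, m_bat, v_bat):.1f} m/s")
# ~80.6 m/s, far above the 30 m/s bat speed. Real exit speeds are lower because
# the collision is not perfectly elastic, but they still exceed the bat speed.
```

The key terms are the incoming pitch speed (reversed by the rebound) and the bat's own motion, both of which add to the exit speed; a hand pushing a toy car from rest contributes neither.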
[ "The distance covered by a vehicle (for example as recorded by an odometer), person, animal, or object along a curved path from a point \"A\" to a point \"B\" should be distinguished from the straight-line distance from \"A\" to \"B\". For example, whatever the distance covered during a round trip from \"A\" to \"B\" and back to \"A\", the displacement is zero as start and end points coincide. In general the straight-line distance does not equal distance travelled, except for journeys in a straight line.\n\nSection::::Distance versus directed distance and displacement.:Directed distance.\n", "Section::::Types of bats.\n\nIn addition to the Louisville Slugger, there are many other types of bats that have been used throughout the history of baseball.\n\nThe introduction of aluminum baseball bats in the 1970s forever changed the game of baseball at every level but the professional. Aluminum bats are lighter and stronger than wooden bats. Because of the trampoline effect that occurs when a baseball hit an aluminum bat, aluminum bats can hit a ball significantly farther than wooden bats can.\n", "Premack further argued that young children divide the world into two kinds of objects, those that move only when acted upon by other objects, and those that are self-propelled and move on their own.\n", "\"It's become a cliché to say everything has a story, but in baseball, you could make the argument that everything really does. Even the baseball itself is a story -- one of geography and symbolism -- an almost holy relic of American culture. Sportswriter Steve Rushin tells the story of these objects in his latest book, The 34-Ton Bat.\"\n", "BULLET::::- Mrs. Cotta: Terra Cotta's mom who wears a planter for a hat.\n\nBULLET::::- Mr Postman: A Octopus/Polyp who is the Mail Carrier for Circus Town\n\nBULLET::::- Mrs. Boa: Bal Boa's Mom.\n\nBULLET::::- Hogan the Hamster: One of the class pets who lives at the Little Big Top Clown School\n\nBULLET::::- Cha Cha the Clown Crab: One of the class pets from the Little Big Top Clown School who JoJo takes care for the night.\n", "The big difference can be discerned when considering movement around a circle. When something moves in a circular path and returns to its starting point, its average \"velocity\" is zero, but its average \"speed\" is found by dividing the circumference of the circle by the time taken to move around the circle. This is because the average \"velocity\" is calculated by considering only the displacement between the starting and end points, whereas the average \"speed\" considers only the total distance traveled.\n\nSection::::Definition.:Tangential speed.\n", "Section::::MLB career.\n", "The Mechana-Koala cousins want to get together to eat cake, but the Crab Catamaran which can bring them together is not working! Unicorn creates a frozen bridge so the team can walk over to the Crab catamaran to see what the problem is. They learn the seahorses which pull the Catamaran are loose, because their harnesses have become unbolted! Sasquatch transforms and stretches to catch the seahorses, and Komodo produces a wrench from his tail to bolt them back into the harnesses. Soon the Koala cousins are together and ready to eat their cake, but not before inviting the Animal Mechanicals to join them!\n", "BULLET::::- Psychic forces are sufficient in most bodies for a shock to propel them directly away from the surface. 
A spooky noise or an adversary's signature sound will introduce motion upward, usually to the cradle of a chandelier, a treetop or the crest of a flagpole. The feet of a running character or the wheels of a speeding auto need never touch the ground, ergo fleeing turns to flight.\n\nBULLET::::- As speed increases, objects can be in several places at once.\n", "When a golf ball is hit, the impact, which lasts less than a millisecond, determines the ball’s velocity, launch angle and spin rate, all of which influence its trajectory and its behavior when it hits the ground.\n\nA ball moving through air experiences two major aerodynamic forces, lift and drag. Dimpled balls fly farther than non-dimpled balls due to the combination of these two effects.\n", "BULLET::::- May  8 – Slick Coffman, 92, pitcher who spent 18 years in baseball, including four seasons with the Detroit Tigers and St. Louis Browns from 1937 to 1940, whose career highlight came in his major league debut, defeating the Boston Red Sox in an 11-inning, 4–2 victory, and winning a pitching duel with Lefty Grove.\n\nBULLET::::- May  8 – Dorothy Ferguson, 80, Canadian infielder and outfielder in the AAGPBL from 1945 to 1954.\n", "One main difference, however, is that the ball in cricket is harder and heavier in weight. The legal weight for the ball in baseball is from ; whereas the ball in cricket must weigh between .\n", "In high school, young Gyro was a baseball pitcher with his \"madball\" pitch — actually only a straight-ball pitch. When Gyro is forced to pitch for the Northside, pitting an \"unhittable\" baseball against another of his inventions, an \"unmissable\" baseball bat made for the Southside team, the result is total chaos.\n", "Citing a 2013 \"Wall Street Journal\" article that found the total actual gameplay in MLB games averaged about 18 minutes out of the three hours it usually took to play, Freeman argued that mascots and other in-stadium entertainment were now inextricably part of the game, contrary to the state supreme court's holding:\n\nThe court could also have distinguished \"Coomer\" from \"Lowe\", the California precedent it relied on, Freeman wrote, by noting that Tremor's behavior in the latter case was not just negligent but recklessly so.\n", "BULLET::::- BB guns\n\nBULLET::::- Big Wheel\n\nBULLET::::- Bilibo\n\nBULLET::::- Bop It\n\nBULLET::::- Bungee balls\n\nBULLET::::- Contact juggling (acrylic ball)\n\nBULLET::::- Devil Sticks (juggling sticks)\n\nBULLET::::- Footbag (dirt bag / hacky sack)\n\nBULLET::::- Gee-haw whammy diddle\n\nBULLET::::- Get in Shape Girl\n\nBULLET::::- Jacks\n\nBULLET::::- Juggling clubs\n\nBULLET::::- Jump rope\n\nBULLET::::- Laser tag\n\nBULLET::::- Leapfrog\n\nBULLET::::- Marbles\n\nBULLET::::- Moon shoes\n\nBULLET::::- Nerf\n\nBULLET::::- Pogo stick\n\nBULLET::::- Radio Flyer\n\nBULLET::::- Roller Skates\n\nBULLET::::- Skip It\n\nBULLET::::- Slinky\n\nBULLET::::- Slip 'n Slide\n\nBULLET::::- Soap-box cart\n\nBULLET::::- Space Pets\n\nBULLET::::- Toss across\n\nBULLET::::- Toy gun\n\nBULLET::::- Water gun\n\nBULLET::::- Wiffle bat and ball\n\nSection::::Puzzle/assembly.\n", "In order to work, car analogies translate agents of action as the car driver, the seller, or police officers; likewise, elements of a system are referred as car pieces, such as wheels, motor, or ignition keys. 
Resources tend to appear as gas, speed, or as the money that can be spent on better accessories/vehicles.\n\nFor example, in the paragraph:\n", "BULLET::::- High-performance catamarans, including the Extreme 40 catamaran and International C-class catamaran can sail at speeds up to twice the speed of the wind.\n\nBULLET::::- Sailing hydrofoils achieve boat speeds up to twice the speed of the wind, as did the AC72 catamarans used for the 2013 America's Cup.\n\nBULLET::::- Ice boats can sail up to five times the speed of the wind.\n", "[274] Air resistance shows itself in two ways: by affecting less dense bodies more and by offering greater resistance to faster bodies. A lead ball will fall slightly faster than an oak ball, but the difference with a stone ball is negligible. However the speed does not go on increasing indefinitely but reaches a maximum. Though at small speeds the effect of air resistance is small, it is greater when considering, say, a ball fired from a cannon.\n", "When a person throws a ball, the person does work on it to give it speed as it leaves the hand. The moving ball can then hit something and push it, doing work on what it hits. The kinetic energy of a moving object is equal to the work required to bring it from rest to that speed, or the work the object can do while being brought to rest: net force × displacement = kinetic energy, i.e.,\n", "In 2009, researchers from the National Chung Hsing University in Taiwan introduced new concepts of “kidnapped airfoils” and “circulating horsepower” to explain the swimming capabilities of the swordfish. Swordfish swim at even higher speeds and accelerations than dolphins. The researchers claim their analysis also \"solves the perplexity of dolphin’s Gray paradox\".\n", "In classical mechanics, velocities are directly additive and subtractive. For example, if one car travels east at 60 km/h and passes another car traveling in the same direction at 50 km/h, the slower car perceives the faster car as traveling east at . However, from the perspective of the faster car, the slower car is moving 10 km/h to the west, often denoted as -10 km/h where the sign implies opposite direction. Velocities are directly additive as ; they must be dealt with using vector analysis.\n", "Section::::Career.:Professional career.:Detroit Tigers.\n", "BULLET::::- January  3 – Jim Westlake,72, pinch-hitter for the 1955 Philadelphia Phillies.\n\nBULLET::::- January  6 – Jarvis Tatum, 56, center fielder who played from 1968 to 1970 for the California Angels.\n\nBULLET::::- January  7 – Ed Albosta, 84, pitcher who played for the Brooklyn Dodgers in 1941 and the Pittsburgh Pirates in 1946.\n\nBULLET::::- January  9 – Don Landrum, 66, speedy center fielder who played for the Philadelphia Phillies, St. Louis Cardinals, Chicago Cubs and San Francisco Giants in part of eight seasons spanning 1957–1966.\n", "The term \"slipstreaming\" describes an object travelling inside the slipstream of another object (most often objects moving through the air though not necessarily flying). If an object is following another object, moving at the same speed, the rear object will require less power to maintain its speed than if it were moving independently. This technique, also called drafting can be used by bicyclists.\n", "The performance of composite baseball bats typically improves with age. 
The reason for this is the \"breaking-in process\"; in other words, the force of striking the ball over and over eventually breaks down the bat's composite fibers and the resinous glues that different manufacturers use. This improves the bat's Bat Performance Factor (BPF) or Ball Exit Speed Ratio (BESR). These factors are directly related to the speed at which a struck ball comes off the bat, with higher speeds representing a greater danger to players in the infield (closer to the batter). Bats used in Little League play are required to remain below league-specified values to be allowed in league play, both when new and after Accelerated Break-In (ABI).\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-01152
Why are there trees in the mountains, but as soon as the land flattens out the trees disappear and it turns into grass?
Here in Montana this happens because the mountains accumulate more snow through the winter and melt it off more slowly through the summer, providing a source of water through the hot dry season. Down in the flatland there's less snow and it melts faster, running off downriver, so there's less late-season water to support trees or broadleaf vegetation. The exception is right next to rivers and streams fed from the mountains, which is why you'll see narrow snakes of forest running along them through the grasslands.
[ "The herb and grass mixture is invaded by shrub species, such as \"Rhus\" and \"Physocarpus\". Early invasion of shrub is slow, but once a few bushes have become established, birds invade the area and help disperse scrub seeds. This results in dense scrub growth shading the soil and making conditions unfavorable for the growth of herbs, which then begin to migrate. The soil formation continues and its moisture content increases.\n\nSection::::Stages.:Tree stage.\n", "As land became exposed following the glaciation of the last ice age, a variety of geographic settings ranging from the tropics to the Arctic and Antarctic became available for the establishment of vegetation. Species that now exist on formerly glaciated terrain must have undergone a change in distribution of hundreds to thousands of kilometers, or have evolved from other taxa that have once done so in the past. In a newly developing environment, plant growth is often strongly influenced by the introduction of new organisms into that environment, where competitive or mutuallistic relationships may develop. Often, competitive balances are eventually reached and species abundances remain somewhat constant over a period of generations. \n", "Gymnosperm seeds from the Late Carboniferous have been found to contain embryos, suggesting a lengthy gap between fertilisation and germination. This period is associated with the entry into a greenhouse earth period, with an associated increase in aridity. This suggests that dormancy arose as a response to drier climatic conditions, where it became advantageous to wait for a moist period before germinating. This evolutionary breakthrough appears to have opened a floodgate: previously inhospitable areas, such as dry mountain slopes, could now be tolerated, and were soon covered by trees.\n", "By the end of the eighteenth century, European farmers occupied the lower-lying foothills and valley lands and used the mountains for grazing. These early European settlers in the area moved their sheep to low-lying areas during winter and burnt the mountain vegetation in late winter or early spring to provide summer grazing. This practice of burning vegetation to provide pasture was continued until the introduction of fire protection areas in the late nineteenth century.\n", "Studies done on the Norwegian Island of Svalbard, have been very useful in understanding the behavior of postglacial vegetation. Studies show that many vascular plants that are considered pioneers of vegetation development, eventually become less frequent. For example, the abundance of species such as Braya purpurascens has fallen nearly 30% due to the introduction of new species in the area.\n\nSection::::Postglacial Vegetation in North America.\n", "When the trees can no longer grow the vegetation changes into heathland and chaparral, at around . Heathland is found in the wetter areas, on the west side of Mount Kenya, and is dominated by giant heathers. Chaparral is found in the drier areas and grasses are more common. and bush fires still occur.\n", "As the plant succession develops further, trees start to appear. The first trees (or pioneer trees) that appear are typically fast growing trees such as birch, willow or rowan. In turn these will be replaced by slow growing, larger trees such as ash and oak. 
This is the climax community on a lithosere, defined as the point where a plant succession does not develop any further—it reaches a delicate equilibrium with the environment, in particular the climate.\n", "Herbaceous weeds, mostly annuals such as asters, evening primroses, and milk weeds, invade the rock. Their roots penetrate deep down, secrete acids and enhance the process of weathering. Leaf litter and death of herbs add humus to the soil. Shading of soil results in decrease in evaporation and there is a slight increase in temperature. As a result, the xeric conditions begin to change and biennial and perennial herbs and xeric grasses such as \"Aristida\", \"Festuca\", and \"Poa\", begin to inhabit. These climatic conditions favor growth of bacterial and fungal populations, resulting in increase in decomposition activities.\n\nSection::::Stages.:Shrub stage.\n", "Besides the superposition of different plants growing on the same soil, there is a lateral impact of the higher layers on adjacent plant communities, for example, at the edges of forests and bushes. This particular vegetation structure results in the growth of certain vegetation types such as forest mantle and margin communities.\n\nSection::::Vertical structure in terrestrial plant habitats.:Tree layer.\n", "The reoccupation of mountain areas started in the 12th century, intensifying in the 16th century with the introduction of maize, beans, and potatoes from the Americas. Agricultural fields occupied former pastures, and these were displaced to more elevated areas resulting in a mosaic of fields, pastures, and forests.\n", "Many different plant species live in the high-altitude environment. These include perennial grasses, sedges, forbs, cushion plants, mosses, and lichens. High-altitude plants must adapt to the harsh conditions of their environment, which include low temperatures, dryness, ultraviolet radiation, and a short growing season. Trees cannot grow at high altitude, because of cold temperature or lack of available moisture. The lack of trees causes an ecotone, or boundary, that is obvious to observers. This boundary is known as the tree line.\n", "On poorly drained impermeable areas of millstone grit, shale or clays the topsoil gets waterlogged in winter and spring. Here tree suppression combined with the heavier rainfall results in blanket bog up to thick. The erosion of peat still exposes stumps of ancient trees.\n\nSection::::Fauna.\n", "Section::::Global distribution and geography.\n", "Postglacial vegetation\n\nPostglacial vegetation refers to plants that colonize the newly exposed substrate after a glacial retreat. The term \"postglacial\" typically refers to processes and events that occur after the departure of glacial ice or glacial climates.\n\nSection::::Climate Influence.\n", "In suitable environments, such as the Daintree Rainforest in Queensland, or the mixed podocarp and broadleaf forest of Ulva Island, New Zealand, forest is the more-or-less stable climatic climax community at the end of a plant succession, where open areas such as grassland are colonised by taller plants, which in turn give way to trees that eventually form a forest canopy.\n", "Section::::Anthropogenic relationships.\n\nAnthropogenic interactions have been used over the years to help change and drive vegetation in the eastern US. Meaning that the actions of human-beings will play a role in what type of vegetation will grow in some locations. 
This is including things like fires and fire suppression, grazing, logging and agriculture clearing. Research has been done and anecdotal evidence has been shown to suggest vegetation structures and composition in the eastern serpentine barrens may have also been influenced by local disturbance regimes associated with these events as well as mining \n", "Often these plants are low and store energy in spreading roots, with relatively little vegetation above ground.\n\nThis vegetation may be cleared for cultivation or road building, or may be overgrazed, resulting in rapid soil loss through erosion.\n\nPeople have both adapted to mountain conditions and modified those conditions.\n\nFor example, farmers in many areas use terracing to retain soil and water.\n\nContour ploughing also helps stabilize the fragile soil.\n\nOften human activity has degraded the mountain environments.\n\nHumans have reduced biodiversity in many of the world's mountain regions.\n", "The effect of the climate on the ecology at an elevation can be largely captured through a combination of amount of precipitation, and the biotemperature, as described by Leslie Holdridge in 1947. Biotemperature is the mean temperature; all temperatures below are considered to be 0 °C. When the temperature is below 0 °C, plants are dormant, so the exact temperature is unimportant. The peaks of mountains with permanent snow can have a biotemperature below .\n\nSection::::Ecology.\n", "At tree line, the trees are reduced to krummholz, battered and twisted by wind and frost. The bristlecone pine grove on the east slope of Mount Goliath () contains at least one tree that sprouted in the year 403 AD. For many years, these were the oldest known trees in Colorado, but in 1992, trees dating to 442 BC were found in the southern Front Range and South Park. The Mount Goliath Natural Area, jointly managed by the United States Forest Service and the Denver Botanic Gardens protects this grove of old trees.\n", "Regions on the earth’s surface where soils are dominating the ecosystems with little to no plant cover are often referred to as “Barren”. These places are areas like deserts, Polar Regions, areas of high elevation, and zones of glacier retreat. For barren zones that are situated in mountain ranges, they are often called the \"Subnival Zone\", and are found at elevations between the upper limit of the vegetation zone and the lower limit of the ice covered zone. Subnival zones in places like the Rockies, Andes and Himalayas have increased greatly in the past few years due to the retreat of high elevation glaciers and the ice caps.\n", "On poorly drained impermeable areas of millstone grit, shale or clays the topsoil gets waterlogged in Winter and Spring. Here tree suppression combined with the heavier rainfall results in blanket bog up to thick. The erosion of peat ca 2010 still exposes stumps of ancient trees.\n", "Most orthents are found in very steep, mountainous regions where erodible material is so rapidly removed by erosion that a permanent covering of deep soil cannot establish itself. Such conditions occur in almost all regions of the world where steep slopes are prevalent. 
In Australia and a few regions of Africa, orthents occur in flat terrain because the parent rock contains \"absolutely no weatherable minerals except short-lived additions from rainfall\", so that there is no breaking down of the minerals (chiefly iron oxides) in the rock.\n", "Section::::Geology.:Erosion.\n\nDuring and following uplift, mountains are subjected to the agents of erosion (water, wind, ice, and gravity) which gradually wear the uplifted area down. Erosion causes the surface of mountains to be younger than the rocks that form the mountains themselves. Glacial processes produce characteristic landforms, such as pyramidal peaks, knife-edge arêtes, and bowl-shaped cirques that can contain lakes. Plateau mountains, such as the Catskills, are formed from the erosion of an uplifted plateau.\n", "At this time, the best documented occurrences of unfossilized buried upright trees occur within the historic and late-Holocene volcanic deposits of Mount St. Helens (Skamania County, Washington) and of Mount Pinatubo in the Philippines. At Mount St. Helens, both unfossilized and partially fossilized trees have occurred in many outcrops of volcanic debris and mud flows (lahars) and pyroclastic flow deposits, which date from 1885 to over 30,000 BP., along the South Toutle and other rivers. Late Holocene forests of upright trees also occur within the volcanic deposits of other Cascade Range volcanoes. In the space of a few years after the eruption of Mount Pinatubo in 1991, the erosion of loose pyroclastic deposits covering the slopes of the mountain generated a series of volcanic lahars, which ultimately buried large parts of the countryside along major streams draining these slopes beneath several meters of volcanic sediments. The repeated deposition of sediments by volcanic lahars and by sediment-filled rivers not only created innumerable polystrate trees, but also \"polystrate\" telephone-poles, churches, and houses, over a period a few years. The volcanic deposits enclosing modern upright trees are often virtually identical in their sedimentary structures, external and internal layering, texture, buried soils, and other general character to the volcanic deposits containing the Yellowstone buried forests. As in case of modern forests buried by lahars, the individual buried forests of the Yellowstone Petrified Forest and the layers containing them are very limited in their areal extent.\n", "The growth form and physiology of subalpine plants is reflective of the stressful environment to which they are adapted. Leaves are very long-lived at this elevation because they are costly to produce and soils are usually nutrient-poor. Since plants ultimately take nutrients such as nitrogen from the soil to produce organs such as leaves, this adaptation provides them an advantage in subalpine soils because their nutrient retention is enhanced. Also, evergreen plants can carry out photosynthesis on periodic warm days during the winter, which is an advantage in a climate with a very short growing season.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-01044
What's the difference between bananas and plantains?
Bananas are cultivated to taste good raw. Plantains are starchy and cultivated to be eaten cooked. There's a further distinction between "cooking plantains" (any starchy banana cultivar that is usually cooked) and "true plantains" (a specific group of cultivars with genomic differences from your average store banana).
[ "The term \"plantain\" is loosely applied to any banana cultivar that is usually cooked before it is eaten. However, there is no botanical distinction between bananas and plantains. Cooking is also a matter of custom, rather than necessity, for many bananas. In fact, ripe plantains can be eaten raw since their starches are converted to sugars. In some countries, where only a few cultivars of banana are consumed, there may be a clear distinction between plantains and bananas. In other countries, where many cultivars are consumed, there is no distinction in the common names used.\n", "Worldwide, there is no sharp distinction between \"bananas\" and \"plantains\". Especially in the Americas and Europe, \"banana\" usually refers to soft, sweet, dessert bananas, particularly those of the Cavendish group, which are the main exports from banana-growing countries. By contrast, \"Musa\" cultivars with firmer, starchier fruit are called \"plantains\". In other regions, such as Southeast Asia, many more kinds of banana are grown and eaten, so the binary distinction is not useful and is not made in local languages.\n", "In summary, in commerce in Europe and the Americas (although not in small-scale cultivation), it is possible to distinguish between \"bananas\", which are eaten raw, and \"plantains\", which are cooked. In other regions of the world, particularly India, Southeast Asia and the islands of the Pacific, there are many more kinds of banana and the two-fold distinction is not useful and not made in local languages. Plantains are one of many kinds of cooking bananas, which are not always distinct from dessert bananas.\n\nSection::::Historical cultivation.\n\nSection::::Historical cultivation.:Early cultivation.\n", "In Southeast Asia – the center of diversity for bananas, both wild and cultivated – the distinction between \"bananas\" and \"plantains\" does not work, according to Valmayor et al. Many bananas are used both raw and cooked. There are starchy cooking bananas which are smaller than those eaten raw. The range of colors, sizes and shapes is far wider than in those grown or sold in Africa, Europe or the Americas. Southeast Asian languages do not make the distinction between \"bananas\" and \"plantains\" that is made in English (and Spanish). Thus both Cavendish cultivars, the classic yellow dessert bananas, and Saba cultivars, used mainly for cooking, are called \"pisang\" in Malaysia and Indonesia, \"kluai\" in Thailand and \"chuoi\" in Vietnam. Fe'i bananas, grown and eaten in the islands of the Pacific, are derived from entirely different wild species than traditional bananas and plantains. Most Fe'i bananas are cooked, but Karat bananas, which are short and squat with bright red skins, very different from the usual yellow dessert bananas, are eaten raw.\n", "An alternative approach divides bananas into dessert bananas and cooking bananas, with plantains being one of the subgroups of cooking bananas. Triploid cultivars derived solely from \"M. acuminata\" are examples of \"dessert bananas\", whereas triploid cultivars derived from the hybrid between \"M. acuminata\" and \"M. balbinosa\" (in particular the plantain subgroup of the AAB Group) are \"plantains\". Small farmers in Colombia grow a much wider range of cultivars than large commercial plantations. 
A study of these cultivars showed that they could be placed into at least three groups based on their characteristics: dessert bananas, non-plantain cooking bananas, and plantains, although there were overlaps between dessert and cooking bananas.\n", "Fe'i bananas (\"Musa\" × \"troglodytarum\") from the Pacific Islands are often eaten roasted or boiled, and thus informally referred to as \"mountain plantains.\" However, they do not belong to either of the two species that all modern banana cultivars are descended from.\n\nSection::::Description.\n\nPlantains contain more starch and less sugar than dessert bananas, therefore they are usually cooked or otherwise processed before being eaten. They are always cooked or fried when eaten green. At this stage, the pulp is hard and the peel often so stiff that it has to be cut with a knife to be removed.\n", "In regions such as North America and Europe, \"Musa\" fruits offered for sale can be divided into \"bananas\" and \"plantains\", based on their intended use as food. Thus the banana producer and distributor Chiquita produces publicity material for the American market which says that \"a plantain is not a banana\". The stated differences are that plantains are more starchy and less sweet; they are eaten cooked rather than raw; they have thicker skin, which may be green, yellow or black; and they can be used at any stage of ripeness. Linnaeus made the same distinction between plantains and bananas when first naming two \"species\" of \"Musa\". Members of the \"plantain subgroup\" of banana cultivars, most important as food in West Africa and Latin America, correspond to the Chiquita description, having long pointed fruit. They are described by Ploetz et al. as \"true\" plantains, distinct from other cooking bananas. The cooking bananas of East Africa belong to a different group, the East African Highland bananas, so would not qualify as \"true\" plantains on this definition.\n", "Plantains are a staple food in the tropical regions of the world, ranking as the tenth most important staple food in the world. As a staple, plantains are treated in much the same way as potatoes and with a similar neutral flavour and texture when the unripe fruit is cooked by steaming, boiling or frying.\n", "Plantains are used in the Ivory Coast dish \"aloco\" as the main ingredient. Fried plantains are covered in an onion-tomato sauce, often with a grilled fish between the plantains and sauce.\n", "In botanical usage, the term \"plantain\" is used only for true plantains, while other starchy cultivars used for cooking are called \"cooking bananas\". All modern true plantains have three sets of chromosomes (i.e. they are triploid). Many are hybrids derived from the cross of two wild species, \"Musa acuminata\" and \"Musa balbisiana\". The currently accepted scientific name for all such crosses is \"Musa\" × \"paradisiaca\". Using Simmonds and Shepherds' 1955 genome-based nomenclature system, cultivars which are cooked often belong to the AAB Group, although some (e.g. the East African Highland bananas) belong to the AAA Group, and others (e.g. Saba bananas) belong to the ABB Group.\n", "A traditional mangú from the Dominican Republic consists of peeled green, boiled plantains, mashed with enough hot water they were boiled in so the consistency is a little stiffer than mashed potatoes. 
It is traditionally eaten for breakfast, topped with sautéed onions and accompanied by fried eggs, fried cheese, fried salami, or avocado.\n\nSection::::Plantain dishes.:Central and South America.\n", "Guineos are used widely in Latin American cooking as they are versatile, inexpensive, and filling.\n\nSection::::Dominican Republic.\n\nPlantains are more widely used in the Dominican Republic than green bananas. There aren't many uses for green bananas and most dishes have been adapted. As in the Haitian \"labouyi Bannann,\" a green banana porridge, and the Puerto Rican dishes \"mofongo\", \"alcapurria\", and \"pasteles\" along with other dishes from the neighboring island. Green plantains are also commonly used in \"sancocho\", \"mondongo\" and other soups.\n\nGuineítos a dish where green bananas are boiled then sauteed with peppers and onions.\n", "Since they fruit all year round, plantains are a reliable all-season staple food, particularly in developing countries with inadequate food storage, preservation and transportation technologies. In Africa, plantains and bananas provide more than 25 percent of the carbohydrate requirements for over 70 million people. \"Musa\" spp. do not stand high winds well, however, so plantain plantations are liable to destruction by hurricanes.\n", "In Guatemala, ripe plantains are eaten boiled, fried, or in a special combination where they are boiled, mashed and then stuffed with sweetened black beans. Afterwards, they are deep fried in sunflower or corn oil. The dish is called and is served as a dessert. In Puerto Rico, the Dominican Republic, and Cuba, it can also be mashed after it has been fried and be made a , or fried and made into , or , or it can be boiled or stuffed. , also known as are a popular staple in many South American countries.\n\nSection::::Food preparations.:Fruit.\n", "\"Guanimes\" are a type of sweet and savory dumpling that can be made with a corn flour or ripe and unripe plantain base. This recipe uses plantains along with coconut milk and sugar for the dumplings, which are wrapped in plantain leaves and boiled in chicken broth. Serve these as a side dish or snack.\n", "Section::::Food preparations.:Curries and soup.\n\nCurries and soups using plantains are consumed throughout the world.\n\nSection::::Plantain dishes.\n\nSection::::Plantain dishes.:Puerto Rico.\n\nPuerto Rico has a close relationship with plantains. Many dish originating from Puerto Rico can be seen in other parts the Caribbean and Latin America. \n\n\"Alcapurria\" is a type of savory fritter. Although usually consisting mainly of grated green bananas and yautias, they can also contain plantains. The \"masa\" (dough) is used to encase a filling of ground meat (\"picadillo\"), and the \"alcapurrias\" are then deep-fried.\n", "Cooking banana\n\nCooking bananas are banana cultivars in the genus \"Musa\" whose fruits are generally used in cooking. They may be eaten ripe or unripe and are generally starchy. Many cooking bananas are referred to as plantains ( , ) or green bananas, although not all of them are true plantains. Bananas are treated as a starchy fruit with a relatively neutral flavour and soft texture when cooked. Bananas fruit all year round, making them a reliable all-season staple food.\n", "In Peru, plantains are boiled and blended with water, spices, and sugar to make chapo. 
In Kerala , ripe plantains are boiled with sago, coconut milk, sugar and spices to make a pudding.\n\nSection::::Food preparations.:Chips.\n", "This is a list of banana dishes and foods in which banana or plantain is used as a primary ingredient. Foods prepared with banana or plantain as a primary ingredient are also included in this list. A banana is an edible fruit produced by several kinds of large herbaceous flowering plants in the genus \"Musa\". (In some countries, bananas used for cooking may be called plantains.) The fruit is variable in size, color and firmness, but is usually elongated and curved, with soft flesh rich in starch covered with a rind which may be green, yellow, red, purple, or brown when ripe. The fruits grow in clusters hanging from the top of the plant.\n", "Banana plants were originally classified by Linnaeus into two species, which he called \"Musa paradisiaca\" for those used as cooking bananas (plantains), and \"M. sapientum\" for those used as dessert bananas. It was later discovered that both of his \"species\" were actually cultivated varieties of the hybrid between two wild species, \"M. acuminata\" and \"M. balbisiana\", which is now called \"M.\" × \"paradisiaca\" L. The circumscription of the modern taxon \"M.\" × \"paradisiaca\" thus includes both the original \"M. paradisiaca\" and \"M. sapientum\", the latter being reduced to a synonym of \"M.\" × \"paradisiaca\".\n", "Section::::Plantain dishes.:Philippines.\n", "The country's cuisine includes a variety of different banana types such as \"oritos\" (sweet baby bananas): yellow eating bananas which are short, fat and very sweet. A related fruit, plantains or \"plátanos\" (pronounced “PLAH-ta-nohs”), are also grown extensively in Ecuador, and are usually cooked for eating, both when green and at various stages of ripening. In the coastal areas, a popular side dish served is Patacones, or fried plantains. Plantains are eaten in deep fried form or baked or boiled and used in a wide variety of dishes. The green variety is unripe and is known as \"verde\", the Spanish word for green. When they are ripe they turn yellow and then black . Green plantains are commonly cut into thin slices and deep fried. These are known as \"chifles\" and are very popular, much like potato chips, used to accompany ceviches and many other dishes; the \"maqueño\" type is especially good for chifles. \"Viche\" is a soup containing banana, calamari, conch, crab, crawfish, fish, and peanuts.\n", "Philippine plantains (called Saba or Cardaba Bananas) are much smaller than the Latin American varieties, usually around 4–5 inches and somewhat boxy in shape. They are eaten mostly in their ripe stage as a dessert or sweet snack, often simply boiled, in syrup, or sliced lengthwise and fried, then sprinkled with sugar. They are also quite popular in this fried form (without the sugar) in the local version of the Spanish dish, \"Arroz a la Cubana\", consisting of minced picadillo-style seasoned beef, white rice, and fried eggs, with fried plantains on the side. In addition, there is the equally popular \"merienda\" snack, \"Turrón\", where ripe plantains, as well as jackfruit in some variants, are sliced and then wrapped in lumpia wrapper (a thin rice paper) and deep-fried. 
Turron is then finished off with a brown sugar glaze.\n", "Section::::Food preparations.\n\nSection::::Food preparations.:Steamed, boiled, grilled, baked, or fried.\n\nIn countries in Central America and the Caribbean, the plantain is either simply fried, boiled or made into plantain soup.\n", "The plantain, a larger member of the banana family, is another commonly used fruit and can be served in a variety of ways. Ripe plantains (\"maduro\") have a sweet flavor and can be fried in oil, baked in a honey or a sugar-based sauce, or put in soups. Green (unripe) plantains can be boiled in soups or can be sliced, fried, smashed and then refried to make \"patacones\". These are often served with a bean dip or guacamole.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-21152
How do [win X] competitions where the prize is a lifetime supply of the product manage to work?
First of all, there are limits to how much is given for free. Usually it's "one per visit", "one per day" or "one per week". That keeps the costs from becoming prohibitive. The winner could go into a McDonald's and get an ice cream, but would have to come back later; they couldn't just sit in the lobby from open to close and demand a thousand ice cream cones. As for why companies do this, it's great advertising. Even if the person takes advantage a little, the actual cost per cone is pretty cheap, and it would take years for the cost of the winner eating thousands upon thousands of ice creams to equal a single commercial spot. So, for roughly the cost of two commercials (one advertising the contest, plus the cost of the ice creams), they get thousands or millions of active participants in a contest, including names, addresses, phone numbers, etc., that they know, for sure, are interested in McDonald's.
[ "BULLET::::- Task: Pick two gadgets and sell them at an over-50s exhibition. The team that makes the most sales wins.\n", "Competitive advantages of hidden champions are rarely because of cost leadership, more because of quality, total cost of ownership, high performance, and consultation close to the customer. They \"earn\" their market leadership through performance and not through price aggression. Their high real net output ratio is often achieved by working with proprietary processes which make it hard for competitors to imitate their products. On the other hand, management tasks like finance are often outsourced.\n", "BULLET::::- Result: Platinum ended up with an umbrella that doubled up as a seat, and a cardboard toilet. Odyssey also wanted the toilet, but lost out due to Steven and Lucy spending their entire pitch haggling on the price and showing no real enthusiasm for the product. They therefore ended up with novelty onesies and a pedal-powered washing machine. The teams found all their products hard to sell, in particular Odyssey, who ended up failing to sell a single washing machine. Platinum ultimately edged Odyssey in the final figures, winning by around just £30.\n\nBULLET::::- Winner: Platinum\n\nBULLET::::- Fired:\n", "BULLET::::- Result: Spirit made a profit of €3,491,000 from their sales and so were declared this week's winner\n\nBULLET::::- Winner: Spirit\n\nBULLET::::- Sent to boardroom: Conor and Maurice\n", "BULLET::::- Reasons for win: Despite a slow start, their sales eventually picked up after a strong push by the promotional company that Dawna hired. The three remaining team members also proved very effective at selling the coffee makers. Primarius sold 36 coffee units for a total revenue of $6,621.\n\nBULLET::::- Primarius' reward: A plane ride to Bar Harbor, Maine and a tour of one of Martha Stewart's homes with her daughter Alexis.\n\nBULLET::::- Losing team: Matchstick\n", "BULLET::::- Reasons for win: The Mattel executives felt that Apex's product was more innovative, and would generate more money in the longer term due to the potential for selling add-on packs containing new parts.\n\nBULLET::::- Reward: Dinner with Trump and Melania Knauss.\n", "BULLET::::- Despite Kinetic's victory, their selling strategies were criticised by Lord Sugar, Nick and the public, including charging extra for cones and adding additional toppings to ice creams without asking customers. Sugar also felt that while Hayley had done a reasonably good job as project manager, Zara and Haya were most deserving of the credit for the team's victory, due to their pricing strategy and strong sales figures.\n", "Section::::Prize manufacturers.:Nosco Plastics.\n", "Section::::Prize manufacturers.:R&L plastics.\n", "A particularly difficult question of value arises where inventors/owners use their patents to extract other advantages without actually marketing the invention (e.g., cross-licensing of related patents to avoid litigation, or suppressing a technology that could compete with the owner's other products). How can one determine the value of a patented product (and the underlying patent) that has not actually been produced, let alone sold in any quantity? Furthermore, many products incorporate numerous patented inventions (owned or licensed), and may carry exclusive trademarks, making it difficult to attribute a specific value to an individual patent. 
Would the same invention be as valuable if owned and marketed under a weak brand?\n", "Section::::Technical advances.\n\nSection::::Technical advances.:Sticky business.\n", "BULLET::::- Reasons for win: Synergy increased sales by 997%, while Gold Rush only increased sales by 608%. Carolyn didn't like the hats Synergy had, but the price of $4 for one slice and $6 for two slices helped Synergy secure a win and their marketing flyers, as well.\n\nBULLET::::- Reward: A luxury flight to Washington, D.C. and a stay at a privileged hotel with New York Senator Chuck Schumer\n\nBULLET::::- Losing team: Gold Rush\n", "The same is likewise true of the long run equilibria of monopolistically competitive industries and, more generally, any market which is held to be contestable. Normally, a firm that introduces a differentiated product can initially secure a \"temporary\" market power for a \"short while\" (See \"Persistence\" in \"Monopoly Profit\"). At this stage, the initial price the consumer must pay for the product is high, and the demand for, as well as the availability of the product in the market, will be limited. In the long run, however, when the profitability of the product is well established, and because there are few barriers to entry, the number of firms that produce this product will increase until the available supply of the product eventually becomes relatively large, the price of the product shrinks down to the level of the average cost of producing the product. When this finally occurs, all monopoly profit associated with producing and selling the product disappears, and the initial monopoly turns into a competitive industry. In the case of contestable markets, the cycle is often ended with the departure of the former \"hit and run\" entrants to the market, returning the industry to its previous state, just with a lower price and no economic profit for the incumbent firms.\n", "Most awarding organizations charge an entry fee. Alternatively, for some awards, producers or retailers pay a fee only if their products are awarded and/or they decide to communicate about the award (payment for the right to show the award on packaging). \n", "Section::::Technical advances.:Photographic lenticular printing.\n", "Section::::Prize manufacturers.\n\nSection::::Prize manufacturers.:Cloudcrest.\n\nC. Carey Cloud, sometimes called \"year-round Santa Claus\", was best known as a designer and producer of hundreds of different prizes for Cracker Jack from the 1930s through the 1960s through his company Cloudcrest. It is estimated that he created, produced, and delivered to the Cracker Jack Company 700 million toys. At the same time he designed hundreds of premiums for companies such as Brach's Confections, Breck Candy Company, Bunny Bread, Carnival Candies, CoCo Wheats, Johnston Candies and Chocolates, New Orleans Confections Inc, Ovaltine, Pillsbury flour, Post Bran Flakes, Shotwell of Chicago, Thinshell Candies, and more.\n", "BULLET::::- \"Guest Advisor:\" Colin Cowie, designer and creator of the Colin Cowie Home Collection at HSN\n\nBULLET::::- \"Task One:\" Contestants were challenged to create a concept board for their product using a product image and text to highlight its top five most compelling benefits. The guest advisor decided that Sarah was the most successful in this task.\n", "Each team is given a storage container and fifty boxes. 
Each box contains two labels: one label on one side names a product (for example, Vasanti Soft Finish Matte Lipstick with Anti-Oxidants), and the other label on the opposite side names a \"related\" product (for example, Guerlain KissKiss Gold and Diamonds Lipstick). All the boxes must be placed in the storage container with the label naming the more expensive product facing the opening of the storage container. The prices of the products that the team deems to be more expensive are totalled, and the larger total wins the challenge.\n", "In the second stage, spread over two days, the inventors' products and sales skills are evaluated in a two task challenge. The particular tasks depend on the products featured in each episode, and the inventors are given additional feedback and guidance from a guest advisor from the field (generally someone with experience selling products from that field on HSN). The first task may range from staging a photo shoot for promotional materials for their product, to developing further variations of their product to build a marketable product line, and is judged by the guest advisor. The second task involves having the products evaluated for sales potential, which may range from pitching the product to potential customers in stores, staging a fashion show, or having their products tested and rated by Good Housekeeping.\n", "The cash awards for the matched products were revealed and the team split the total amount won. The cash awards hidden beside the seven products included one each of $10,000, $3,000, and $2,000, and two each of $1,000 and $200.\n\nSection::::Inactive games.:P.:Poker Game.\n", "Cooper and Mayers conducted household interviews to establish the age of 17 electrical and electronic items at discard and break. However it has been noted that user interviews are subject to the accuracy of memory, and that reviews of products which have failed in the past only provides information on \"a historical situation\" (: p. 10), not taking into account the features and lifetime of extant products.\n\nSection::::Measuring product lifetimes.:Actual product lifetimes.:Modelling.\n", "BULLET::::- Winning team: Kinetic\n\nBULLET::::- Reasons for win: Kinetic had a good marketing strategy, with Derek creating a buzz in a bee suit. They also used Angela and her slogan \"Olympic Gold honey\" as marketing tools (she was an Olympic winner) and Aimee's signage outside the supermarket labelled \"Today is HONEY DAY\" also worked in their favor. The price of each honey bottle is $4.99 and 2 for $5. The team sold a total of 345 bottles and earned $836.48 in sales.\n", "Section::::Series 2 (2011).:Episode summary.:Week Two: Parent and Baby.\n\nBULLET::::- Original Air date: 31 October 2011\n\nBULLET::::- Atomic: Lewis (Project Manager), Ben, Harry M., Harry H. and James.\n\nBULLET::::- Kinetic: Gbemi (Project Manager), Hannah, Haya, Hayley, Lizzie and Zara.\n\nBULLET::::- Task: To create a product for the baby and parents market. 
The team that receives the most orders wins.\n", "BULLET::::- Pegasus in crystal 2016 to Karl Kletzmaier, co-founder of KEBA, for entrepreneurial life's work\n\nBULLET::::- ineo Award from the WKO OÖ for its strong commitment to apprenticeship training\n\nBULLET::::- Robotic Award 2014 for the “directMove” handheld operating device T10\n\nBULLET::::- Upper Austrian Pegasus (business prize) 2014 in Bronze\n\nBULLET::::- Postal Technology Award 2013 as the Supplier of the Year (KePol parcel lockers)\n\nBULLET::::- IF Product Design Award 2010 for KeControl C3\n\nBULLET::::- Ringier Technology Innovation Awards for the KePlast EasyMold software tool\n\nBULLET::::- FFG (Forschungsförderungsgesellschaft) Award 2008\n\nBULLET::::- Best Family-Owned Company in Upper Austria 2008\n", "Section::::Technical advances.:Lenticular technology.\n" ]
[ "Win X competitions should not be able to give a lifetime supply of product." ]
[ "There are limits to how much actual product is given for free, usually things are given in small increments which keeps the costs from being too prohibitive." ]
[ "false presupposition" ]
[ "Win X competitions should not be able to give a lifetime supply of product.", "Win X competitions should not be able to give a lifetime supply of product." ]
[ "normal", "false presupposition" ]
[ "There are limits to how much actual product is given for free, usually things are given in small increments which keeps the costs from being too prohibitive.", "There are limits to how much actual product is given for free, usually things are given in small increments which keeps the costs from being too prohibitive." ]
2018-18451
How does mold react when it comes into contact with bleach?
It dies. Bleach is toxic to essentially all life. What it does is cause proteins (the small molecular machines that do all the work in your cells) to unfold, much as heat does. This means the cells stop functioning, break apart and die. Bleach does the same to bacteria, plants, and animals, and so is dangerous to all forms of life, the same way heat is.
[ "Section::::Antagonists.:Wandenreich.:V: Gremmy Thoumeaux.\n", "Section::::Antagonists.:Wandenreich.:W: Nianzol Weizol.\n", "Section::::Antagonists.:Wandenreich.:E: Bambietta Basterbine.\n", "Section::::Antagonists.:Wandenreich.:B: Jugram Haschwalth.\n", "Section::::Antagonists.:Wandenreich.:N: Robert Accutrone.\n", "Section::::Prokaryotes and fungi.\n", "Section::::Bleaching chemical pulps.:Chelant wash.\n", "Section::::Other characters.:Quincy.:Ryūken Ishida.\n", "The kids discover mold in the restaurant, so Bob calls Hugo, the health inspector, to get rid of it. Hugo needs to shut down the restaurant while it is fumigated over the weekend (claiming that bleach cannot get rid of mold). Bob plans to go to a hotel, but Mort offers for them to stay at his crematory home, which the family agrees to.\n", "Delignification of chemical pulps is frequently composed of four or more discrete steps, with each step designated by a letter in the Table:\n", "Section::::Antagonists.:Wandenreich.:X: Lille Barro.\n", "Section::::Antagonists.:Wandenreich.:H: Bazz-B.\n", "Section::::Antagonists.:Wandenreich.:F: Äs Nödt.\n", "Section::::Antagonists.:Bounts.\n", "Section::::Antagonists.:Wandenreich.:C: Pernida Parnkgjas.\n", "Section::::Antagonists.:Wandenreich.:D: Askin Nakk Le Vaar.\n\n is a male Sternritter who has the epithet \"D\" for , which allows Askin to calculate the \"absolute lethal dose\" of substance by consuming a lethal dosage of it. This allows him to not only increase or decrease the lethal dosage needed to kill his opponent, he can render himself immune to all consumed substances and projects it through attacks like . \n", "Section::::Antagonists.:Wandenreich.:G: Liltotto Lamperd.\n", "Section::::Soul Reaper characters.:Squad Eleven.:Kenpachi Kiganjō.\n", "Section::::Antagonists.:Wandenreich.:I: Cang Du.\n", "Section::::Antagonists.:Wandenreich.:J: Quilge Opie.\n", "Section::::Soul Reaper characters.:Squad Thirteen.:Kaien Shiba.\n", "Section::::Bioleaching.\n", "Section::::Soul Reaper characters.:Squad Eleven.:Kenpachi Zaraki.\n", "Section::::Antagonists.:Wandenreich.:L: PePe Waccabrada.\n", "Section::::Soul Reaper characters.:Squad Ten.\n\nSection::::Soul Reaper characters.:Squad Ten.:Isshin Kurosaki.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-03532
Why can’t we only eat monosaccharides as a source for energy if that’s what our body breaks down carbs into anyways?
We certainly can eat monosaccharides as an energy source. But they don't last long, and they don't contain the other nutrients we need -- not just for energy but for building and repairing the body, such as protein and calcium.
[ "Section::::Storage polysaccharides.:Glycogen.\n\nGlycogen serves as the secondary long-term energy storage in animal and fungal cells, with the primary energy stores being held in adipose tissue. Glycogen is made primarily by the liver and the muscles, but can also be made by glycogenesis within the brain and stomach.\n", "Carbohydrates may be classified as monosaccharides, disaccharides or polysaccharides depending on the number of monomer (sugar) units they contain. They are a diverse group of substances, with a range of chemical, physical and physiological properties. They make up a large part of foods such as rice, noodles, bread, and other grain-based products, but they are not an essential nutrient, meaning a human does not need to eat carbohydrates. The brain is the largest consumer of sugars in the human body, and uses particularly large amounts of glucose, accounting for 20% of total body glucose consumption. The brain uses mostly glucose for energy unless it is insufficient, in which case it switches to using fats.\n", "Section::::Relevance for nuclear waste disposal.\n", "Monosaccharides are the major source of fuel for metabolism, being used both as an energy source (glucose being the most important in nature) and in biosynthesis. When monosaccharides are not immediately needed by many cells they are often converted to more space-efficient forms, often polysaccharides. In many animals, including humans, this storage form is glycogen, especially in liver and muscle cells. In plants, starch is used for the same purpose. The most abundant carbohydrate, cellulose, is a structural component of the cell wall of plants and many forms of algae. Ribose is a component of RNA. Deoxyribose is a component of DNA. Lyxose is a component of lyxoflavin found in the human heart. Ribulose and xylulose occur in the pentose phosphate pathway. Galactose, a component of milk sugar lactose, is found in galactolipids in plant cell membranes and in glycoproteins in many tissues. Mannose occurs in human metabolism, especially in the glycosylation of certain proteins. Fructose, or fruit sugar, is found in many plants and in humans, it is metabolized in the liver, absorbed directly into the intestines during digestion, and found in semen. Trehalose, a major sugar of insects, is rapidly hydrolyzed into two glucose molecules to support continuous flight.\n", "Cell-wall containing organisms, such as plants, fungi, and bacteria, require very large amounts of carbohydrates during growth for the biosynthesis of complex structural polysaccharides, such as cellulose, glucans, and chitin. In these organisms, in the absence of available carbohydrates (for example, in certain microbial environments or during seed germination in plants), the glyoxylate cycle permits the synthesis of glucose from lipids via acetate generated in fatty acid β-oxidation.\n", "BULLET::::- Reducing disaccharides, in which one monosaccharide, the reducing sugar of the pair, still has a free hemiacetal unit that can perform as a reducing aldehyde group; cellobiose and maltose are examples of reducing disaccharides, each with one hemiacetal unit, the other occupied by the glycosidic bond, which prevents it from acting as a reducing agent.\n", "In experiments designed to test the suitability of HMOs as a prebiotic source of carbon for intestinal bacteria it was discovered that they are highly selective for a commensal bacteria known as \"Bifidobacteria longum biovar infantis\". The presence of genes unique to \"B. 
infantis\", including co-regulated glycosidases, and its efficiency at using HMOs as a carbon source may imply a co-evolution of HMOs and the genetic capability of select bacteria to utilize them.\n", "The majority of nutrients taken in by such organisms must be able to provide carbon, proteins, vitamins and in some cases, ions. Due to the carbon composition of the majority of organisms, dead and organic matter provide rich sources of disaccharides and polysaccharides such as maltose and starch, and of the monosaccharide glucose.\n", "Section::::Metabolism.:Catabolism.\n\nCatabolism is the metabolic reaction which cells undergo to break down larger molecules, extracting energy. There are two major metabolic pathways of monosaccharide catabolism: glycolysis and the citric acid cycle.\n", "Section::::Structural polysaccharides.:Chitin.\n\nChitin is one of many naturally occurring polymers. It forms a structural component of many animals, such as exoskeletons. Over time it is bio-degradable in the natural environment. Its breakdown may be catalyzed by enzymes called chitinases, secreted by microorganisms such as bacteria and fungi, and produced by some plants. Some of these microorganisms have receptors to simple sugars from the decomposition of chitin. If chitin is detected, they then produce enzymes to digest it by cleaving the glycosidic bonds in order to convert it to simple sugars and ammonia.\n", "Isomalto-oligosaccharides are a normal part of the human diet and occur naturally in fermented foods, such as fermented sourdough breads and kimchi. The disaccharide isomaltose is also present in rice miso, soy sauce, and sake. Isomaltose, one of the α(1,6)-linked disaccharide components of IMO, has been identified as a natural constituent of honey and although chemically related, it is not and IMO . IMO is a sweet-tasting, high-density syrup which could be spray-dried into powder form.\n\nSection::::Manufacturing.\n", "Natural saccharides are generally of simple carbohydrates called monosaccharides with general formula (CHO) where \"n\" is three or more. Examples of monosaccharides are glucose, fructose, and glyceraldehyde. Polysaccharides, meanwhile, have a general formula of C(HO) where \"x\" is usually a large number between 200 and 2500. When the repeating units in the polymer backbone are six-carbon monosaccharides, as is often the case, the general formula simplifies to (CHO), where typically 40≤n≤3000.\n", "The capsular polysaccharide of GBS is not only an important GBS virulence factor but it is also an excellent candidate for the development of an effective vaccine. Protein-based vaccines are also in development.\n\nSection::::GBS infection in adults.\n", "Section::::Health effects.:Stimulating bacteria.\n\nGalacto-oligosaccharides are a substrate for bacteria, such as bifidobacteria and lactobacilli. Studies with infants and adults have shown that foods or drinks enriched with galacto-oligosaccharides result in a significant increase in bifidobacteria.\n\nSection::::Health effects.:Immune response.\n", "BULLET::::- Non-reducing disaccharides, in which the component monosaccharides bond through an acetal linkage between their anomeric centers. This results in neither monosaccharide being left with a hemiacetal unit that is free to act as a reducing agent. Sucrose and trehalose are examples of non-reducing disaccharides because their glycosidic bond is between their respective hemiacetal carbon atoms. 
The reduced chemical reactivity of the non-reducing sugars in comparison to reducing sugars, may be an advantage where stability in storage is important.\n\nSection::::Formation.\n", "Glucose is an energy source in most life forms. For instance, polysaccharides are broken down into their monomers by enzymes (glycogen phosphorylase removes glucose residues from glycogen, a polysaccharide). Disaccharides like lactose or sucrose are cleaved into their two component monosaccharides.\n\nSection::::Metabolism.:Carbohydrates as energy source.:Glycolysis (anaerobic).\n", "Additionally, upregulation of the glyoxylate cycle has been seen for pathogens that attack humans. This is the case for fungi such as \"Candida albicans\", which inhabits the skin, mouth, GI tract, gut and vagina of mammals and can lead to systemic infections of immunocompromised patients; as well as for the bacterium \"Mycobacterium tuberculosis\", the major causative agent of tuberculosis. In this latter case, ICL has been found to be essential for survival in the host. Thus, ICL is a current inhibition target for therapeutic treatments of tuberculosis.\n", "Glycosaminoglycans have high degrees of heterogeneity with regards to molecular mass, disaccharide construction, and sulfation due to the fact that GAG synthesis, unlike proteins or nucleic acids, is not template driven, and dynamically modulated by processing enzymes.\n", "Not yet formally proposed as an essential macronutrient (as of 2005), dietary fiber is nevertheless regarded as important for the diet, with regulatory authorities in many developed countries recommending increases in fiber intake.\n\nSection::::Storage polysaccharides.\n\nSection::::Storage polysaccharides.:Starch.\n", "In the liver hepatocytes, glycogen can compose up to eight percent (100–120 g in an adult) of the fresh weight soon after a meal. Only the glycogen stored in the liver can be made accessible to other organs. In the muscles, glycogen is found in a low concentration of one to two percent of the muscle mass. The amount of glycogen stored in the body—especially within the muscles, liver, and red blood cells—varies with physical activity, basal metabolic rate, and eating habits such as intermittent fasting. Small amounts of glycogen are found in the kidneys, and even smaller amounts in certain glial cells in the brain and white blood cells. The uterus also stores glycogen during pregnancy, to nourish the embryo.\n", "Even though these complex polysaccharides are not very digestible, they provide important dietary elements for humans. Called dietary fiber, these carbohydrates enhance digestion among other benefits. The main action of dietary fiber is to change the nature of the contents of the gastrointestinal tract, and to change how other nutrients and chemicals are absorbed. Soluble fiber binds to bile acids in the small intestine, making them less likely to enter the body; this in turn lowers cholesterol levels in the blood. Soluble fiber also attenuates the absorption of sugar, reduces sugar response after eating, normalizes blood lipid levels and, once fermented in the colon, produces short-chain fatty acids as byproducts with wide-ranging physiological activities (discussion below). 
Although insoluble fiber is associated with reduced diabetes risk, the mechanism by which this occurs is unknown.\n", "Generally the carbohydrate part(s) play an integral role in the function of a glycoconjugate; prominent examples of this are NCAM and blood proteins where fine details in the carbohydrate structure determine cell binding (or not) and lifetime in circulation.\n\nAlthough the important molecular species DNA, RNA, ATP, cAMP, cGMP, NADH, NADPH, and coenzyme A all contain a carbohydrate part, generally they are not considered as glycoconjugates.\n", "Monosaccharides are the simplest form of carbohydrates with only one simple sugar. They essentially contain an aldehyde or ketone group in their structure. The presence of an aldehyde group in a monosaccharide is indicated by the prefix \"aldo-\". Similarly, a ketone group is denoted by the prefix \"keto-\". Examples of monosaccharides are the hexoses, glucose, fructose, trioses, tetroses, heptoses, galactose, pentoses, ribose, and deoxyribose. Consumed fructose and glucose have different rates of gastric emptying, are differentially absorbed and have different metabolic fates, providing multiple opportunities for two different saccharides to differentially affect food intake. Most saccharides eventually provide fuel for cellular respiration.\n", "In addition to the adjusted glycolysis, \"Monocercomonoides\" contain enzymes needed in the arginine deiminase (degradation) pathway. The arginine deiminase pathway may be used for ATP production, as in \"Giardia intestinalis\" and \"Trichomonas vaginalis\". In \"G. intestinalis\" (an anaerobic unicellular eukaryote) this pathway produces eight times more ATP than sugar metabolism; a similar output is expected in \"Monocercomonoides\", but this has yet to be confirmed.\n\nSection::::Iron-sulfur cluster.\n", "O-linked glycopeptides recently have been shown to exhibit excellent CNS permeability and efficacy in multiple animal models with disease states. In addition, one of the most intriguing aspects is the capability of O-glycosylation to extend half life, decrease clearance, and improve the PK/PD of the active peptide beyond increasing CNS penetration. The innate utilization of sugars as solubilizing moieties in Phase II and III metabolism (glucuronic acids) has remarkably allowed an evolutionary advantage in that mammalian enzymes are not directly evolved to degrade O-glycosylated products on larger moieties.\n" ]
[ "We cannot only eat monosaccharides as a source for energy." ]
[ "We can only eat monosaccharides as a source for energy, but monosaccharides don't last long and don't contain other nutrients." ]
[ "false presupposition" ]
[ "We cannot only eat monosaccharides as a source for energy." ]
[ "false presupposition" ]
[ "We can only eat monosaccharides as a source for energy, but monosaccharides don't last long and don't contain other nutrients." ]
2018-03496
How do deep-sea creatures who live on the ocean floor survive the pressure of the ocean while submarines need airtight seals?
They aren't normal creatures. They live in the pressure, and that pressure is what actually holds them together. It makes learning about them incredibly difficult, as pulling them up from the ocean floor results in them basically exploding.
[ "Earless seals sleep bihemispherically like most mammals, under water, hanging at the water surface or on land. They hold their breath while sleeping under water, and wake up regularly to surface and breathe. They can also hang with their nostrils above water and in that position have REM sleep, but they do not have REM sleep underwater.\n", "Breath-hold diving depth is limited in animals when the volume of rigid walled internal air spaces is occupied by all of the compressed gas of the breath and the soft spaces have collapsed under external pressure. Animals that can dive deeply have internal air spaces that can extensively collapse without harm, and may actively exhale before diving to avoid absorption of inert gas during the dive.\n", "In ambient pressure diving, the diver is directly exposed to the pressure of the surrounding water. The ambient pressure diver may dive on breath-hold, or use breathing apparatus for scuba diving or surface-supplied diving, and the saturation diving technique reduces the risk of decompression sickness (DCS) after long-duration deep dives. Atmospheric diving suits (ADS) may be used to isolate the diver from high ambient pressure. Crewed submersibles can extend depth range, and remotely controlled or robotic machines can reduce risk to humans.\n", "Elephant seals have a very large volume of blood, allowing them to hold a large amount of oxygen for use when diving. They have large sinuses in their abdomens to hold blood and can also store oxygen in their muscles with increased myoglobin concentrations in muscle. In addition, they have a larger proportion of oxygen-carrying red blood cells. These adaptations allow elephant seals to dive to such depths and remain underwater for up to two hours.\n", "BULLET::::1. Open to ambient pressure via a moon pool, meaning the air pressure inside the habitat equals underwater pressure at the same level, such as SEALAB, and which makes entry and exit easy as there is no physical barrier other than the moon pool water surface. Living in ambient pressure habitats is a form of saturation diving, and return to the surface will require appropriate decompression.\n", "BULLET::::2. Closed to the sea by hatches, with internal air pressure less than ambient pressure and at or closer to atmospheric pressure; entry or exit to the sea requires passing through hatches and an airlock. Decompression may be necessary when entering the habitat after a dive. This would be done in the airlock.\n\nA third or composite type has compartments of both types within the same habitat structure and connected via airlocks, such as Aquarius.\n\nSection::::Technical classification and description.:Excursions.\n", "The Mark 8 Mod 1 SDV has an endurance of about eight to 12 hours, giving it a range of with a diving team or without. The main limiting factor on endurance is not batteries or air for the SEALs, but water temperature: humans can only spend so much time in cold water, even with wetsuits, before their blood pressure drops and they become dehydrated from losing blood volume and body fluids, respectively.\n\nSection::::Design.:Mark 9 SDV.\n", "An underwater habitat has to meet the needs of human physiology and provide suitable environmental conditions, and the one which is most critical is breathing air of suitable quality. 
Others concern the physical environment (pressure, temperature, light, humidity), the chemical environment (drinking water, food, waste products, toxins) and the biological environment (hazardous sea creatures, microorganisms, marine fungi). Much of the science covering underwater habitats and their technology designed to meet human requirements is shared with diving, diving bells, submersible vehicles and submarines, and spacecraft.\n", "BULLET::::- 2007 – Saul Goldman proposed an Interconnected Compartment Model (3 compartment series/parallel model) using a single risk bearing active tissue compartment and two non-risk bearing peripheral compartments which indirectly affect the risk of the central compartment. This model predicts initially fast gas washout which slows with time.\n\nBULLET::::- 2008 – US Navy Diving Manual Revision 6 published, which includes a version of the 2007 tables by Gerth & Doolette.\n\nSection::::Haldanean (perfusion limited, dissolved phase) models.\n", "Seals, whales and porpoises have slower respiratory rates and a larger tidal volume to total lung capacity ratio than land animals, which gives them a large exchange of gas during each breath and compensates for a low respiratory rate. This allows greater utilisation of available oxygen and reduced energy expenditure.\n\nIn seals, bradycardia of the diving reflex reduces heart rate to about 10% of resting level at the start of a dive.\n", "Seals in high-pressure vessels are also susceptible to explosive decompression; the O-rings or rubber gaskets used to seal pressurised pipelines tend to become saturated with high-pressure gases. If the pressure inside the vessel is suddenly released, then the gases within the rubber gasket may expand violently, causing blistering or explosion of the material. For this reason, it is common for military and industrial equipment to be subjected to an explosive decompression test before it is certified as safe for use.\n\nSection::::Myths.\n\nSection::::Myths.:Exposure to a vacuum causes the body to explode.\n", "polar bear. All are air-breathing, and while some such as the sperm whale can dive for prolonged periods, all must return to the surface to breathe.\n\nSection::::Marine habitats.\n", "Section::::Interactions with humans.:In captivity.:Military.\n", "Similarly, much higher than normal relative vacuum readings are possible deep in the Earth's ocean. 
A submarine maintaining an internal pressure of 1 atmosphere submerged to a depth of 10 atmospheres (98 metres; a 9.8 metre column of seawater has the equivalent weight of 1 atm) is effectively a vacuum chamber keeping out the crushing exterior water pressures, though the 1 atm inside the submarine would not normally be considered a vacuum.\n", "In captivity, they often develop pathologies, such as the dorsal fin collapse seen in 60–90% of captive males. Captives have reduced life expectancy, on average only living into their 20s, although some live longer, including several over 30 years old and two, Corky II and Lolita, in their mid-40s. In the wild, females who survive infancy live 46 years on average and up to 70–80 years. Wild males who survive infancy live 31 years on average and can reach 50–60 years.\n", "Weddell seals' metabolism is relatively constant during deep-water dives, so another way to compensate for functioning with a lack of oxygen over an extended period of time must exist. Seals, unlike other terrestrial mammals such as humans, can undergo anaerobic metabolism for these extended dives, which causes a build-up of lactic acid in the muscles. The seals can also release oxygenated blood from their spleens into the rest of their bodies, acting as an oxygen reserve.\n\nSection::::Behavior.:Vocalizing.\n", "BULLET::::- Pressure – they can withstand the extremely low pressure of a vacuum and also very high pressures, more than 1,200 times atmospheric pressure. Tardigrades can survive the vacuum of open space and solar radiation combined for at least 10 days. Some species can also withstand pressure of 6,000 atmospheres, which is nearly six times the pressure of water in the deepest ocean trench, the Mariana Trench.\n", "Climatic and osmotic pressure places physiological constraints on organisms, especially those that fly and respire at high altitudes, or dive to deep ocean depths. These constraints influence vertical limits of ecosystems in the biosphere, as organisms are physiologically sensitive and adapted to atmospheric and osmotic water pressure differences. For example, oxygen levels decrease with decreasing pressure and are a limiting factor for life at higher altitudes. Water transportation by plants is another important ecophysiological process affected by osmotic pressure gradients. Water pressure in the depths of oceans requires that organisms adapt to these conditions. For example, diving animals such as whales, dolphins, and seals are specially adapted to deal with changes in sound due to water pressure differences. Differences between hagfish species provide another example of adaptation to deep-sea pressure through specialized protein adaptations.\n", "Umbilicals or airline hoses are safer, as the breathing gas supply is unlimited, and the hose is a guideline back to the habitat, but they restrict freedom of movement and can become tangled.\n\nThe horizontal extent of excursions is limited to the scuba air supply or the length of the umbilical. The distance above and below the level of the habitat are also limited and depend on the depth of the habitat and the associated saturation of the divers. The open space available for exits thus describes the shape of a vertical axis cylinder centred on the habitat.\n", "Researchers find that due to a pup's differing needs in regards to sustaining work and foraging while under water compared to adults, the skeletal and cardiac muscles develop differently. 
Studies show that cardiac blood flow provides sufficient O2 to sustain lipolytic pathways during dives, remedying their hypoxic challenge. Cardiac tissue is found to be more developed than skeletal muscles at birth and during the weaning period, although neither tissue is fully developed by the end of the weaning period. Pups are born with fully developed hemoglobin stores (found in blood), but their myoglobin levels (found in skeletal tissue) are only 25–30% of adult levels. These observations conclude that pup muscles are less able to sustain both aerobic ATP and anaerobic ATP production during dives than adults are. This is due to the large stores of oxygen, either bound to hemoglobin or myoglobin, which the seals rely on to dive for extended periods of time. This could be a potential explanation for pups’ short weaning period as diving is essential to their living and survival.\n", "Each country has its own tank requirements; in the US, the minimum enclosure size is set by the Code of Federal Regulations, 9 CFR E § 3.104, under the \"Specifications for the Humane Handling, Care, Treatment and Transportation of Marine Mammals\".\n", "In the 1972 Edalhab II Florida Aquanaut Research Expedition experiments, the University of New Hampshire and NOAA used nitrox as a breathing gas. In the three FLARE missions, the habitat was positioned off Miami at a depth of 13.7 m. The conversion to this experiment increased the weight of the habitat to 23 tonnes.\n\nSection::::History.:Historical underwater habitats.:BAH I.\n", "Immersion in water and exposure to cold water and high pressure have physiological effects on the diver which limit the depths and duration possible in ambient pressure diving. Breath-hold endurance is a severe limitation, and breathing at high ambient pressure adds further complications, both directly and indirectly. Technological solutions have been developed which can greatly extend depth and duration of human ambient pressure dives, and allow useful work to be done underwater.\n\nSection::::Physiological constraints on diving.:Immersion.\n", "The strength of a submarine hull is not merely that of absolute strength, but rather yield strength. As well as the obvious need for a hull strong enough not to be crushed at depth, the hundreds of dives over a submarine's lifetime mean that the fatigue life is also an important issue. To provide resistance to fatigue, the hull must be designed so that the steel always operates below its elastic limit; that is the stress applied by pressure is less than its yield stress. The HY- series steels were developed to provide a high yield stress, so allowing this.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-00474
How do Artists and Music Producers make sure that their music does not sound like any other song or composition on the planet, and so there are no copyright infringements, since it is not practically possible to listen to every song?
The short answer is: They don't. Musicians generally don't cross-reference an idea or song they've made with others, because of the nature of music being an art form, and a form of self-expression. There's a certain amount of leniency and flexibility when it comes to one song sounding like another. This, of course, raises a lot of issues, especially when it comes to copyright. Ultimately, a copyright infringement is claimed by an artist when they feel another artist has used "too much" of their intellectual property as inspiration, to the point where they've "stolen" it. But there are also cases where the original artist is more or less fine with the excessive similarities in works. A great example is that of Breakbot's "Baby I'm Yours" vs. Bruno Mars' "Treasure." I'd highly recommend reading into it, as it's quite interesting IMO. [There are even whole websites dedicated to songs that sound very similar.]( URL_0 ) , not to mention the whole debate about whether music will get to the point where no truly original work can be made. Hope that was more or less what you were looking for. It's a very tough question to properly answer haha :)
[ "The use of copyrighted material to create new content is a hotly debated topic. The emergence of the musical \"mashup\" genre has compounded the issue of creative licensing. A moral conflict is created between those who believe that copyright protects any unauthorized use of content, and those who maintain that sampling and mash-ups are acceptable musical styles and, though they use portions of copyrighted material, the end result is a new creative piece which is the property of the creator, and not of the original copyright holder. Whether or not the mashup genre should be allowed to use portions of copyrighted material to create new content is one which is currently under debate.\n", "Section::::Legal issues.:Universal archive fire (2008).\n", "The United States treatment of mechanical royalties differs markedly from international practice. In the United States, while the right to use copyrighted music for making records for public distribution (for private use) is an exclusive right of the composer, the Copyright Act provides that \"once\" the music is so recorded, anyone else can record the composition/song without a negotiated license but on the payment of the statutory compulsory royalty. Thus, its use by different artists could lead to several separately owned copyrighted \"sound recordings\".\n", "Ken Kragen chaired a production meeting at a bungalow off Sunset Boulevard on January 25, 1985. There, Kragen and his team discussed where the recording sessions with the supergroup of musicians should take place. He stated, \"The single most damaging piece of information is where we're doing this. If that shows up anywhere, we've got a chaotic situation that could totally destroy the project. The moment a Prince, a Michael Jackson, a Bob Dylan—I guarantee you!—drives up and sees a mob around that studio, he will never come in.\" On the same night, Quincy Jones' associate producer and vocal arranger, Tom Bahler, was given the task of matching each solo line with the right voice. Bahler stated, \"It's like vocal arranging in a perfect world.\" Jones disagreed, stating that the task was like \"putting a watermelon in a Coke bottle\". The following evening, Lionel Richie held a \"choreography\" session at his home, where it was decided who would stand where.\n", "The collective management of copyright and related rights is undertaken by various types of collective management organisations, most commonly collecting societies. Collecting societies act on behalf of their members, which may be authors or performers, and issue copyright licenses to users authorising the use of the works of their members. Collecting societies negotiate the royalty rates and other licence terms on behalf of their members and collect royalty payment on behalf of their members. Royalties are distributed by the collecting society to relevant members, who as individual right owners are not directly involved in the negotiation of the licence.\n", "Private copying of sound recordings is dealt with by a regime established in 1998 specifically for that purpose. This regime makes an exception to copyright infringement for copies of music for private use. The regime does not involve licensing. 
Instead it remunerates copyright holders by collecting funds through the Canadian Private Copying Collective and sets out a share of these funds to which all eligible authors, performers, and makers or record producers are entitled.\n\nSection::::The Copyright Board.\n", "Section::::Criticism.:2009 inaugural conference criticism and legal action.:Settlement criticism.\n", "Confusion sometimes occurs when the copyright status of the elements is conflated with the copyright status of the compilation. For instance, copyright on a filmed musical may lapse, but public display of the film without license may remain a copyright infringement if the songs performed therein are still protected by copyright.\n\nSection::::Examples.\n\nUnder the U.S. law, which protects the human creativity expressed in the selection, coordination, or arrangement of the material, the copyright office gives the following examples of compilations in which copyright might exist, as each represents compilations that reflect human creativity in preparation:\n", "Collecting societies can sell blanket licences, which grant the right to perform their catalogue for a period of time. Such a licence might for example provide a broadcaster with a single annual authorisation encompassing thousands of songs owned by thousands of composers, lyricists and publishers. The societies also sell individual licenses for users who reproduce and distribute music. For example, Apple must submit the download reports for the iTunes Store, which are used to determine their royalty payments.\n", "BULLET::::- Reports to the copyright owners of each song used are transmitted on the service, pursuant to negotiated reporting requirements or requirements adopted by the Copyright Royalty Board. As a result, royalties may be fairly allocated and distributed to individual copyright owners and recording artists.\n", "Howard King filed a lawsuit in Los Angeles on June 21, 2019, on behalf of Soundgarden, Hole, Steve Earle, the estate of Tupac Shakur and a former wife of Tom Petty that seeks class action status for artists whose master recordings were believed to have been destroyed in the Universal Studios fire.\n\nSection::::Legal issues.:Megaupload.\n", "So far the Music of Life catalog of recordings and copyrights under Music of Life Productions and released on many labels worldwide including Music of Life has exceeded 2000 titles and includes music videos and documentary TV shows ('Kings of Rap' - MTV).\n", "For most of the 20th century, music was recorded in studios, produced by record company executives, producers, arrangers and engineers who were hired to deliver the artist's finished recordings for mastering and duplication - for sale on the varied media: 78, 45 and 33 RPM vinyl, reel-to-reel, 8-track tape, cassette and compact disc. To protect their investment, the record companies obtain a mechanical copyright in order to protect and control the recording with which they derive usage and sales income. (For those still uncertain of the difference between \"song title\" and \"mechanical\" copyrights, consider the Capitol Records lawsuit for copyright infringement against Nike some 20 years ago. Nike legally obtained permission to use the Beatles song title \"Revolution\" from the title's owner, Michael Jackson. They used the Capitol Records owned recording of the Beatles' performance, but failed to obtain and pay for permission and use. 
Capitol Records sued and prevailed because Nike ONLY had a license to use the title and did not have a license to use the mechanical recording.)\n", "In 2012, The Opilec Music label from Italy released an EP with three songs written by The Units and remixed by Todd Terje from Norway and I-Robots from Italy. The same year, The Opilec Music label from Italy released two songs by The Units on the \"We Are Opilec\" compilation. Also the Tsugi Sampler label from France released \"Ivan Smagghe – A Walk In The Woods With Ivan Smagghe\", which included a remix of \"High Pressure Days\" by Todd Terje.\n", "A musical composition obtains copyright protection as soon as it is written out or recorded. But it is not protected from infringing use unless it is registered with the copyright authority, for instance, the Copyright Office in the United States, which is administered by the Library of Congress. No person or entity, other than the copyright owner, can use or employ the music for gain without obtaining a license from the composer/songwriter.\n\nInherently, as copyright, it confers on its owner a distinctive \"bundle\" of five exclusive rights:\n", "The term phonogram is used to refer to any sound recording: under the Rome Convention, it must be composed exclusively of a sound recording, although some national laws protect film soundtracks with the same measures to the extent that they are not also protected by other rights. The producers of phonograms, that is the person who makes the recording rather than the person who performs, has the right to prevent the direct or indirect reproduction of the recording (Art. 10 Rome Convention, Art. 2 Geneva Phonograms Convention). The WPPT adds the rights to license:\n", "For a motion picture, the following are considered authors: main director, scriptwriter, dialogue author, and composer of original musical score.\n\nSection::::European countries.:Spain.\n\nUnder the 1987 copyright law of Spain, collections of other works, such as anthologies, and other elements or data that by the selection or arrangement of materials constitute intellectual creations, are considered protected works without prejudice to the rights of the authors of the original works.\n", "Established in 1965, Sonoton is the largest independent production music library in the world. There are numerous independent libraries that include Vanacore Music and West One Music Group.\n\nSection::::Hybrid license method.\n", "John West \"Earth Maker\", Frontiers Records & Yamaha Records- Songwriter, co-Producer, Mix & Mastering Engineer, Backing Vocals, Keyboards\n\nJohn West \"Long Time No sing\" Frontiers Records – Songwriter, co-Producer, Mix & Mastering Engineer, Backing Vocals, Keyboards, Guitars, Bass\n\nArtension \"Machine\", Shrapnel Records- Backing Vocals and Vocal Arrangements, Engineering\n\nArtension \"Sacred Pathways\", Frontiers Records – Co-Production, Backing Vocals and Vocal Arrangements, Tracking, Mixing, and Mastering Engineering\n\nArtension \"Future World\"- Backing Vocals and Vocal Arrangements, Engineering\n\nArtension \"New Discovery\"- Backing Vocals and Vocal Arrangements, Engineering\n\nFeinstein \"Third Wish\", SPV Records – Vocal Production and Engineering\n", "A second copyright exists when it comes to licensing use for telephonic MOH. A piece of music, as mentioned above, is copyrighted and may be licensed for use as a music title and is understood to be a combination of melody, harmony and, where applicable, lyrics. 
However, neither radio listeners nor MOH listeners could hear this music unless it was recorded - providing a delivery medium whereby the \"music\" becomes a \"performance.\" As mentioned elsewhere, in countries such as the US, where said copyright laws are enforced, nearly every recording of a song title holds its own \"mechanical copyright.\"\n", "BULLET::::- engineering – Kevin \"She'kspere\" Briggs, Andre Debourg, Tyrice Jones, Brian Springer\n\nBULLET::::- executive production – Billy Moss, L.A. Reid\n\nBULLET::::- guitar – Charles Fearing\n\nBULLET::::- instrumentation – Kevin \"She'kspere\" Briggs, Eric Johnson, Tyrice Jones\n\nBULLET::::- mastering – Herb Powers\n\nBULLET::::- midi – Kevin \"She'kspere\" Briggs\n\nBULLET::::- mixing – Steve Baughman, Kevin \"KD\" Davis\n\nBULLET::::- photography – Mike Ruiz\n\nBULLET::::- production – Kevin \"She'kspere\" Briggs, Chad Elliot, Christopher \"Deep\" Henderson, Chris Jennings, Eric Johnson, Tyrice Jones, Billy Moss, Doug Rasheed, Anthony Dent\n\nBULLET::::- programming – Chris Jennings, Tyrice Jones, Billy Moss, Victor White\n\nBULLET::::- pro-tools – Chauncey Mahan\n", "BULLET::::- 1978 \"Heavy Metal Be-Bop\" - The Brecker Brothers - Assistant Engineer\n\nBULLET::::- 1978 \"Jorge Santana\" - Jorge Santana - Assistant Engineer\n\nBULLET::::- 1978 \"Stone Blue\" - Foghat - Remixing\n\nBULLET::::- 1978 \"The Captain's Journey\" - Lee Ritenour - Assistant Engineer\n\nBULLET::::- 1977 \"Alive II\"- Kiss - Assistant Engineer\n\nBULLET::::- 1977 \"Love Eyes\" - Art Webb - Assistant Engineer\n\nSoundtracks\n\nBULLET::::- 2000 \"Dolphins\" [Original Soundtrack] - Producer\n\nBULLET::::- 1995 \"\" [Soundtrack from the IMAX film] - Producer\n\nBULLET::::- 1995 \"The \" [Original Soundtrack] - Mixing, Producer\n\nBULLET::::- 1994 \"Four Weddings and a Funeral\" - Original Soundtrack - Producer\n", "On a similar note, they have worked with a number of musicians from other African countries to create \"an African sound\" in a slew of languages, claiming they \"don't care about the languages they sing in, because the music will still be captured\".\n\nOwning both their own recording studio and company (Four Sounds Productions), they have complete artistic control over their output, producing all their songs and managing themselves, and also owning their own video production equipment and company for their music videos.\n", "Copyright protection is not available for databases that aim to be \"complete\"—that is, where the entries are selected by objective criteria: these are covered by \"sui generis\" database rights. While copyright protects the creativity of an author, database rights specifically protect the \"qualitatively and/or quantitatively [a] substantial investment in either the obtaining, verification or presentation of the contents\": if there has not been substantial investment (which need not be financial), the database will not be protected.\n\nDatabase rights are independent of any copyright in the database, and the two could, in principle, be held by different people.\n", "Other classics mastered by Ray Staff include \"Physical Graffiti\" and \"Presence\" by Led Zeppelin, \"Crime of the Century\" by Supertramp, \"It's Only Rock 'n Roll\" by The Rolling Stones and \"Hemispheres\" by Rush.\n\nSection::::Recent years.\n" ]
[ "Artists make sure their music is unique.", "Music producers and artists make sure their music doesn't sound like any other song composition." ]
[ "They don't make sure of this. They just make what they want. Sometimes people make similar stuff.", "Music artists and producers don't actually check their music for identical signs." ]
[ "false presupposition" ]
[ "Artists make sure their music is unique.", "Music producers and artists make sure their music doesn't sound like any other song composition." ]
[ "false presupposition", "false presupposition" ]
[ "They don't make sure of this. They just make what they want. Sometimes people make similar stuff.", "Music artists and producers don't actually check their music for identical signs." ]
2018-03686
How does an icicle form?
It starts with one water droplet hanging off of an awning. The droplet is held on to the surface with surface tension, then it gets cold enough for it to freeze. Then a second water droplet flows down the frozen droplet, hangs on to the tip via surface tension, then freezes. Repeat over and over again, and eventually it gets bigger and bigger. Ta-dah! Icicle.
[ "Icicle\n\nAn icicle is a spike of ice formed when water dripping or falling from an object freezes.\n\nSection::::Formation and dynamics.\n", "Icicles can form during bright, sunny, but subfreezing weather, when ice or snow melted by sunlight or some other heat source (such as a poorly insulated building), refreezes as it drips off under exposed conditions. Over time continued water runoff will cause the icicle to grow. Another set of conditions is during ice storms, when rain falling in air slightly below freezing slowly accumulates as numerous small icicles hanging from twigs, leaves, wires, etc. Thirdly, icicles can form wherever water seeps out of or drips off vertical surfaces such as road cuts or cliffs. Under some conditions these can slowly form the \"frozen waterfalls\" favored by ice climbers\n", "Icicle (comics)\n\nIcicle is the name of two fictional supervillains appearing in comic books published by DC Comics: Joar Mahkent and Cameron Mahkent (father and son; to differentiate between the two, the suffixes, Senior and Junior, are used).\n\nA version of the character appeared in the fifth season of \"The Flash\", played by actor Kyle Secor. This version is Thomas Snow who is the father of Caitlin Snow.\n\nSection::::Publication history.\n\nThe Joar Mahkent version of Icicle first appeared in \"All-American Comics\" #90 and was created by Robert Kanigher and Irwin Hasen.\n", "Given the right conditions, icicles may also form in caves (in which case they are also known as \"ice stalactites\"). They can also form within salty water (brine) sinking from sea ice. These so-called \"brinicles\" can actually kill sea urchins and starfish, which was observed by BBC film crews near Mount Erebus, Antarctica.\n\nSection::::Damage and injuries caused by icicles.\n", "Icicles form on surfaces which might have a smooth and straight, or irregular shape, which in turn influences the shape of an icicle. Another influence is melting water, which might flow toward the icicle in a straight line or which might flow from several directions. Impurities in the water can lead to ripples on the surface of the icicles.\n", "The Cameron Mahkent version of Icicle first appeared in \"Infinity, Inc.\" #34 and was created by Roy Thomas, Dann Thomas, and Todd McFarlane.\n\nSection::::Fictional character biographies.\n\nSection::::Fictional character biographies.:Dr. Joar Mahkent.\n\nWhen noted European physicist Dr. Joar Mahkent arrived in America with his latest scientific discovery, spectators at dockside were astonished to witness the luxury liner upon which Mahkent was traveling suddenly frozen solid in Gotham Harbor.\n", "BULLET::::- Icicle appears in \"Robot Chicken DC Comics Special\", voiced by Tom Root.\n", "BULLET::::- Though he does not appear in \"The Icicle Cometh,\" Cameron Mahkent's name was used on Thomas Snow's death certificate as the ME who signed off on his death.\n\nSection::::In other media.:Film.\n\nBULLET::::- The Cameron Mahkent version of Icicle was also reportedly featured in David S. Goyer's script for the Green Arrow film project entitled \"Super Max\".\n", "Based in the DC Super Friends universe, Icicle is part of a group of ice-themed villains called the \"Ice Pack\" that encased a city in ice and snow. The Ice Pack appear in \"DC Super Friends\" #16 (August 2009).\n\nSection::::Other versions.:Flashpoint.\n", "The Icicles\n\nThe Icicles are an American indie pop band from Grand Rapids, Michigan. 
The band was founded in 2000 by singer/songwriter/guitarist Gretchen DeVault, keyboardist Joleen Rumsey and drummer Korrie Sue, who met when they were students at Grand Valley State University. The group released a debut EP titled \"Pure Sugar\" in 2000.\n", "During the \"Infinite Crisis\" storyline, Cameron popped up as a member of Alexander Luthor, Jr.'s Secret Society of Super Villains.\n\nOne Year Later, he is approached by Mirror Master to join the Suicide Squad for a mission.\n\nOn the cover of \"Justice League of America\" #13 (Vol. 2), it shows Icicle as a member of the new Injustice League, though this was not corroborated by the story.\n\nHe can be seen as a member of Libra's Secret Society of Super Villains.\n", "Alex de Campi\n\nAlex de Campi is a British-American music video director, comics writer and columnist.\n\nSection::::Career.\n\nSection::::Career.:Comics.\n", "BULLET::::- The Super Friends version of Icicle appeared in the \"Post Super Heroes Create a Villain Contest\" cereal commercial from the 1980s. Robin says \"Holy Icicles!\" and then Icicle shoots his freeze ray, but Superman deflects it right back at him with his hands.\n\nBULLET::::- A character based on Icicle appears in the \"Justice League\" animated series: Dr. Blizzard (voiced by Corey Burton). In the episode \"Legends\", he is a member of the Injustice Guild.\n", "In July 2018, Mehran was found dead in his Hollywood home due to suicide, just after completing work on a planned solo album. Hynes paid tribute to him on Instagram, saying, “Every time I was with you we were 17 again. You were such a gift to this world. The floor has gone and I don’t know where to stand. RIP.”\n\nSection::::Members.\n\nBULLET::::- Rory Attwell (aka Raary Deci-hells, Raary Rambert RAT ATT AGG, Rory Brattwell) English (born in London, England) — guitar and vocals\n", "Led by singer/songwriter Ian McNabb, the band released five albums from 1984 to 1990 before breaking up in 1991. McNabb later convened a revised line-up of the band in 2006 to play live shows; this revised Icicle Works line-up still plays sporadic live dates.\n\nSection::::History.\n\nSection::::History.:1980–1983: Formation and early years.\n", "Section::::Life cycle.\n", "BULLET::::- Both versions of Icicle appear in the \"Young Justice\" animated series with Cameron Mahkent voiced by Yuri Lowenthal and Joar Mahkent voiced by James Remar. Cameron first appeared in \"Independence Day\", where he is shown causing havoc on a bridge in Star City until he is taken down by Green Arrow and Speedy. In \"Terrors\", Joar appears where he, his son, and the ice-based villains (Killer Frost, Captain Cold and Mr. Freeze) specifically plan to start a breakout in Belle Reve. Icicle Jr. mentions to the disguised Superboy that his father acts as a jerk to him. Once the breakout and takeover has been done, Icicle Sr. has his son and Superboy break through the walls to the Women's side. Superboy convinces Icicle Jr. to \"show some initiative\". During this time, Icicle Jr. unwittingly aids Superboy and even helps fight off several villains, including starting a fight with Mr. Freeze. Shortly after, he discovers who Superboy really is and states \"dad's gonna kill me\". It turns out that Icicle Sr. was in cahoots with Hugo Strange as both are shown to be associated with the Light (Project Cadmus' Board of Directors). In \"Beneath\", Icicle Jr. appears in Bialya with Psimon, Mammoth, Shimmer, and Devastation where they are assigned to guard a shipment of children to The Light's partner. 
When Devastation brings down Wonder Girl, Icicle Jr. tells her that she isn't alone. While Psimon and Miss Martian have a psychic battle, he tries to kill Miss Martian, saying his heart got broken after Belle Reve. Before Icicle can kill her, Psimon is defeated by Bumblebee, allowing Miss Martian to defend herself by hurling Icicle Jr. into a wall. He is later seen at the end of the episode relatively unharmed. In \"Darkest\", Icicle Jr. is sent with Aqualad, Tigress, and the Terror Twins on a mission to attack Impulse and Blue Beetle. He manages to freeze Blue Beetle until he breaks free. When the Blue Beetle scarab takes control, he manages to knock down Icicle until Aqualad and Tigress subdue Blue Beetle. Icicle Jr. joins Aqualad, Tigress, and the Terror Twins in retreating back to Black Manta.\n", "One study found that the majority of capillaries in cherry hemangiomas are fenestrated and stain for carbonic anhydrase activity.\n\nSection::::Cause.\n\nCherry angiomas appear spontaneously in many people in middle age but can also, less commonly, occur in young people. They can also occur in an aggressive eruptive manner at any age. The underlying cause for the development of cherry angiomas is not understood.\n\nCherry angioma may occur through two different mechanisms: angiogenesis (the formation of new blood vessels from pre-existing vessels), and vasculogenesis (the formation of totally new vessels, which usually occurs during embryonic and fetal development).\n", "BULLET::::2. Cyst development stage: Epithelial cells form strands and are attracted to the area which contains exposed connective tissue and foreign substances. Several strands from each rest converge and surround the abscess or foreign body.\n\nBULLET::::3. Cyst growth stage: Fluid flows into the cavity where the forming cyst is growing due to the increased osmolality of the cavity in relation to surrounding serum in capillaries. Pressure and size increase.\n\nThe definitive mechanism by which cysts grow is under debate; several theories exist.\n\nSection::::Mechanisms.:Biomechanical theory.\n", "Section::::Cause.\n\nThe preferred formation site for the cysts is the renal tubule segments; after they grow to a few millimeters, they detach from the parent tubule, induced by excessive proliferation of tubular epithelium or excessive fluid secretions.\n\nSection::::Diagnosis.\n", "Section::::Taxonomy.\n\nSection::::Taxonomy.:History.\n", "BULLET::::- Chronic inflammation\n\nBULLET::::- Diabetes\n\nBULLET::::- Lupus erythematosus\n\nBULLET::::- Pseudopelade of Brocq\n\nBULLET::::- Telogen effluvium\n\nBULLET::::- Tufted folliculitis\n\nSection::::Causes.:Genetics.\n\nGenetic forms of localized autosomal recessive hypotrichosis include:\n\nSection::::Pathophysiology.\n\nHair follicle growth occurs in cycles. Each cycle consists of a long growing phase (anagen), a short transitional phase (catagen) and a short resting phase (telogen). At the end of the resting phase, the hair falls out (exogen) and a new hair starts growing in the follicle beginning the cycle again.\n", "ACD is a rare disease. About 100 cases have been reported. The first case was reported in 1981.\n\nSection::::Signs and symptoms.\n", "After returning to the Twin Cities, he started working for Niceman Merchandising. 
Arens' album design/layout career continues to this day, and has featured many projects, including: Dolly Parton's \"Backwoods Barbie\" and her latest, \"Blue Smoke\", Eagles of Death Metal's \"Heart On\", Ziggy Marley's \"Wild and Free\", Glen Campbell's \"Ghost on the Canvas\", Pete Yorn's \"Back and Forth\", and, most recently, LA musician/producer Jonathan Wilson's \"Fanfare\".\n\nSection::::Career.:Flipp.\n", "Section::::Pathogenesis.\n\nTrichilemmal cysts are derived from the outer root sheath of the hair follicle. Their origin is currently unknown, but it has been suggested that they are produced by budding from the external root sheath as a genetically determined structural aberration. They arise preferentially in areas of high hair follicle concentrations; therefore, 90% of cases occur on the scalp. They are solitary in 30% of cases and multiple in 70% of cases.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-16189
After a major stressor/psychological trauma, why are there lingering anxiety symptoms that are hard to shake?
The effects of psychological wounds are similar to physical wounds but are much harder to identify and treat. Imagine the wound on your mind as a large physical wound like an infected boil. Using that area of the body only results in mind-numbing pain, so you avoid it. However, that just creates a sense of anxiety that you'll disturb the wound while going through your daily life. And depending on the location, the wound is hard to ignore. If it's on your foot, every step reminds you of pain. The wound may also get worse the more times you disturb it, so healing is difficult. Fixing the wound is very hard. Unlike a physical wound where you can see how large it is, a psychological one requires poking and prodding it to figure out its size. It takes a great deal of willpower to relive those memories, and doing so could lead to more trauma. By figuring out the triggers and effects, the trauma can be slowly and gradually treated. Healing is hard if the wound has become part of who you are. If you cover it and ignore it, it can fester and get worse over time and affect other parts of your mind. Even if it heals and becomes a scar, the wound may have been such a constant part of your life that you feel emptier without it. Some may even seek similar wounds in order to "feel" something. These things often happen to people with childhood trauma, whose brains were not developed enough to handle these traumas.
[ "Patients suffering from an extreme case of anxiety may seek treatment when all support systems have been exhausted and they are unable to bear the anxiety. Feelings of anxiety may present in different ways from an underlying medical illness or psychiatric disorder, a secondary functional disturbance from another psychiatric disorder, from a primary psychiatric disorder such as panic disorder or generalized anxiety disorder, or as a result of stress from such conditions as adjustment disorder or post-traumatic stress disorder. Clinicians usually attempt to first provide a \"safe harbor\" for the patient so that assessment processes and treatments can be adequately facilitated. The initiation of treatments for mood and anxiety disorders are important as patients suffering from anxiety disorders have a higher risk of premature death.\n", "Section::::Effects.:Post-operation.\n\nAnxiety has also been proven to cause higher analgesic and anaesthetic requirement, postoperative pain, and prolonged hospital stay.\n\nIrving L. Janis separates the effects of preoperative anxiety on postoperative reactions into three levels:\n\nBULLET::::- Low anxiety: The defenses of denial and other reassurances that were created to ward off the worry and apprehension preoperatively are not effective long-term. When all the pain and stress is experienced post-operatively, the emotional tension is unrelieved because there aren’t any real reassurances available from the pre-operational stage.\n", "In one study in 1988–90, illness in approximately half of patients attending mental health services at British hospital psychiatric clinic, for conditions such as panic disorder or social phobia, was determined to be the result of alcohol or benzodiazepine dependence. In these patients, anxiety symptoms, while worsening initially during the withdrawal phase, disappeared with abstinence from benzodiazepines or alcohol. Sometimes anxiety pre-existed alcohol or benzodiazepine dependence, but the dependence was acting to keep the anxiety disorders going and could progressively make them worse. Recovery from benzodiazepines tends to take a lot longer than recovery from alcohol, but people can regain their previous good health.\n", "A study looked at dissociation responses and PTSD in students who survived the 2005 Zarand earthquake in Iran. The earthquake measured 6.4 on the Richter scale, killed more than 1,500 people and displaced more than 6,700 for two months or more. It looked at dissociation one month later then two years later to see if level of dissociation predicted PTSD.\n\nFour weeks after the earthquake, researchers solicited for volunteers at local universities. All of the participants met the DSM-IV criterion for a Class A1 trauma. Many were wearing black mourning clothes or had injuries from the earthquake.\n", "In the control condition, where subjects were asked to talk about their anxiety related to their worst exam, death thought accessibility was lower for those with higher levels of PTSD. This suggests that people suffering from strong PTSD repress thoughts of death. But when mortality was made salient, it provoked a marked increase in death thought accessibility for those with high PTSD. 
The results indicate that the anxiety buffer of death thought suppression under normal circumstances failed when subjects were reminded of the traumatic event.\n", "Anxiety buffer disruption theory not only focuses on the thoughts and emotions of an individual, but it also studies the behavior that results when terror management theory and shattered assumptions theory are examined together. Excessive anxiety experienced by post-traumatic stress disorder sufferers occurs because the events causing the post-traumatic stress disorder have demonstrated to these individuals that anxiety-buffering mechanisms are not capable of protecting them from death. Individuals who have high levels of peritraumatic dissociation and low levels of self-efficacy coping, two indicators of post-traumatic stress disorder, have abnormal responses to reminders of death. These individuals in turn do not utilize the coping mechanisms that are typically used to remove the fear of death: culture, self-esteem, and interpersonal relationships. In fact, in individuals with post traumatic stress disorder, mortality salience coping mechanisms are viewed as worthless and perhaps are even seen to be detestable.\n", "Anxiety disorders, including general anxiety disorder, acute stress disorder, social anxiety disorder, and other related diagnoses are also frequently found in the military and first response community. While PTSD falls under the larger category of anxiety disorders, it is often considered distinctly due to its greater prevalence than other anxiety disorders. Anxiety disorders frequently manifest in the form of debilitating stress and anxiety experienced by a victim in the presence or anticipation of triggering stimuli. Anxiety may be disabling in that it may render someone incapable of coping well or at all with a situation that would normally be within their capabilities absent the clinical anxiety. Military research has found anxiety disorders to be more prevalent in those who had deployed to active conflicts. When PTSD is totalled with other anxiety disorders, this category of mental health diagnosis is the most prevalent among Canadian military personnel with deployments \n", "This disorder may resolve itself with time or may develop into a more severe disorder such as PTSD. However, results of Creamer, O'Donnell, and Pattison's (2004) study of 363 patients suggest that a diagnosis of acute stress disorder had only limited predictive validity for PTSD. Creamer et al. did find that re-experiences of the traumatic event and arousal were better predictors of PTSD. Early pharmacotherapy may prevent the development of posttraumatic symptoms. Additionally, early trauma-focused cognitive-behavioral therapy (TFCBT) for those with a diagnosis of ASD can protect an individual from chronic PTSD.\n", "How the patient copes with the injury has been found to influence the level at which they experience the emotional complications correlated with ABI. Three coping strategies for emotions related to ABI have presented themselves in the research: approach-oriented coping, passive coping and avoidant coping. Approach-oriented coping has been found to be the most effective strategy, as it has been negatively correlated with rates of apathy and depression in ABI patients; this coping style is present in individuals who consciously work to minimize the emotional challenges of ABI. 
Passive coping has been characterized by the person choosing not to express emotions and a lack of motivation which can lead to poor outcomes for the individual. Increased levels of depression have been correlated to avoidance coping methods in patients with ABI; this strategy is represented in people who actively evade coping with emotions. These challenges and coping strategies should be kept in consideration when seeking to understand individuals suffering from ABI.\n", "Two years after the quake, the researchers returned and 172 of the original respondents participated. They predicted that subjects with high PTSD symptoms would have a disrupted worldview on both foreign aid and the Islamic dress code. They found a strong relationship between dissociation and subsequent PTSD symptom severity. Even after the passage of two years, subjects with high dissociative tendencies were still not defending against existential threats in a way typical for the population who has not experienced trauma.\n\nSection::::Anxiety buffer disruption theory studies.:Extent of trauma exposure and PTSD symptom severity.\n", "Secondary symptoms are symptoms that surface during rehabilitation from the injury including social competence issues, depression, personality changes, cognitive disabilities, anxiety, and changes in sensory perception. More than 50% of patients who suffer from a traumatic brain injury will develop psychiatric disturbances. Although precise rates of anxiety after brain injury are unknown, a 30-year follow-up study of 60 patients found 8.3% of patients developed a panic disorder, 1.7% developed an anxiety disorder, and 8.3% developed a specific phobia. Patients recovering from a closed-head or traumatic brain injury often suffer from decreased self-esteem and depression. This effect is often attributed to difficulties re-entering society and frustration with the rehabilitation process. Patients who have suffered head injuries also show higher levels of unemployment, which can lead to the development of secondary symptoms.\n", "Patients with psychotic symptoms are common in psychiatric emergency service settings. The determination of the source of the psychosis can be difficult. Sometimes patients brought into the setting in a psychotic state have been disconnected from their previous treatment plan. While the psychiatric emergency service setting will not be able to provide long term care for these types of patients, it can exist to provide a brief respite and reconnect the patient to their case manager and/or reintroduce necessary psychiatric medication. A visit to a crisis unit by a patient suffering from a chronic mental disorder may also indicate the existence of an undiscovered precipitant, such as change in the lifestyle of the individual, or a shifting medical condition. These considerations can play a part in an improvement to an existing treatment plan.\n", "There must be a clear temporal connection between the impact of an exceptional stressor and the onset of symptoms; onset is usually within a few minutes or days but may occur up to one month after the stressor. 
In addition, the symptoms show a mixed and usually changing picture; in addition to the initial state of \"daze,\" depression, anxiety, anger, despair, overactivity, and withdrawal may all be seen, but no one type of symptom predominates for long; the symptoms usually resolve rapidly in those cases where removal from the stressful environment is possible; in cases where the stress continues or cannot by its nature be reversed, the symptoms usually begin to diminish after 24–48 hours and are usually minimal after about 3 days.\n", "Patients in this category may only experience minor emotional tension. The occasional worry or fear that is experienced by a patient with moderate anxiety can usually be suppressed.\n\nSome may suffer from insomnia, but they also usually respond well to mild sedatives. Their outward manner may seem relatively calm and well controlled, except for small moments where it is apparent to others that the patient is suffering from an inner conflict. They can usually perform daily tasks, only becoming restless from time to time.\n", "Counselors are encouraged to be aware of the typical responses of those who have experienced a crisis or are currently struggling with a trauma. On the cognitive level, they may blame themselves or others for the trauma. Often, the person appears disoriented, becomes hypersensitive or confused, has poor concentration, uncertain, and poor troubleshooting capabilities. Physical responses to trauma include increased heart rate, tremors, dizziness, weakness, chills, headaches, vomiting, shock, fainting, sweating, and fatigue. Among the common emotional responses of people who experience crisis in their lives include apathy, depression, irritability, anxiety, panic, helplessness, hopelessness, anger, fear, guilt, and denial. When assessing behavior, some typical responses to crisis are difficulty eating and/or sleeping, conflicts with others, withdrawal and lack of interest in social activities.\n", "Section::::Signs and symptoms.:Associated medical conditions.\n\nTrauma survivors often develop depression, anxiety disorders, and mood disorders in addition to PTSD.\n\nDrug abuse and alcohol abuse commonly co-occur with PTSD. Recovery from posttraumatic stress disorder or other anxiety disorders may be hindered, or the condition worsened, when substance use disorders are comorbid with PTSD. Resolving these problems can bring about improvement in an individual's mental health status and anxiety levels.\n\nIn children and adolescents, there is a strong association between emotional regulation difficulties (e.g. mood swings, anger outbursts, temper tantrums) and post-traumatic stress symptoms, independent of age, gender, or type of trauma.\n", "Paramedics, amongst other first responders, can suffer from posttraumatic stress symptoms and depressive symptoms as a result of repeated exposure to human pain and suffering on a daily basis. A study of paramedics reported more than 80% of paramedics in a large urban area experienced: the death of a patient while in their care, the death of a child, and violence. In addition to this, the same study reported that 70% had been assaulted on the job and 56% reported experiencing events which could have resulted in their own death. 
Often, small-scale events (in combination with larger ones), such as the lonely death of an elderly person or a death by suicide, can trigger emotional responses.\n", "Section::::Scope.:Disasters.\n\nNatural disasters and man-made hazards can cause severe psychological stress in victims surrounding the event. Emergency management often includes psychiatric emergency services designed to help victims cope with the situation. The impact of disasters can cause people to feel shocked, overwhelmed, immobilized, panic-stricken, or confused. Hours, days, months and even years after a disaster, individuals can experience tormenting memories, vivid nightmares, develop apathy, withdrawal, memory lapses, fatigue, loss of appetite, insomnia, depression, irritability, panic attacks, or dysphoria.\n", "Disinhibited social engagement disorder is a stress-related disorder stemming from neglect during an individual's childhood. According to Erikson's work on the stages of psychosocial development, the psychosocial crisis of trust versus mistrust during infancy causes neglect during that period to have permanent effects because a neglected infant does not learn to trust his parent due to his parent's failure to fulfill his basic needs. Feelings of mistrust and anxiety may eventually lead to traumatic stress, especially through disinhibited social engagement disorder, among other disorders. Symptom persistence is necessary for a diagnosis of disinhibited social engagement disorder: specific symptoms must be present for at least twelve months.\n", "Due to the typically disorganized and hazardous environment following a disaster, mental health professionals typically assess and treat patients as rapidly as possible. Unless a condition is threatening the life of the patient, or others around the patient, other medical and basic survival considerations are managed first. Soon after a disaster, clinicians may make themselves available to allow individuals to ventilate to relieve feelings of isolation, helplessness and vulnerability. Dependent upon the scale of the disaster, many victims may suffer from chronic or acute post-traumatic stress disorder. Patients suffering severely from this disorder often are admitted to psychiatric hospitals to stabilize the individual.\n", "These patients are usually very motivated to develop reliable information from medical authority in order to reach a point of comfortable relief.\n\nHigh anxiety\n\nPatients in this category will usually try to reassure themselves by seeking information, but these attempts, in the long run, are unsuccessful at helping the patient reach a comfortable point because the fear is so dominant.\n", "Anxiety is a state of distress or uneasiness of mind caused by fear, and it is consistently associated with witnessing crimes. In a study by Clifford and Scott (1978), participants were shown either a film of a violent crime or a film of a non-violent crime. The participants who viewed the stressful film had difficulty remembering details about the event compared to the participants that watched the non-violent film. However, in a study done by Yuille and Cutshall (1986), they discovered that witnesses of real-life violent crimes were able to remember the event quite vividly even five months after it originally occurred. 
Therefore, depending on the situation, stress can either cause a lapse in memory or cause a memory to become more vivid.\n", "Anxiety and depression can be caused by alcohol abuse, which in most cases improves with prolonged abstinence. Even moderate, sustained alcohol use may increase anxiety levels in some individuals. Caffeine, alcohol, and benzodiazepine dependence can worsen or cause anxiety and panic attacks. Anxiety commonly occurs during the acute withdrawal phase of alcohol and can persist for up to 2 years, as part of a post-acute withdrawal syndrome, in about a quarter of people recovering from alcoholism. In one study in 1988–1990, illness in approximately half of patients attending mental health services at one British hospital psychiatric clinic, for conditions including anxiety disorders such as panic disorder or social phobia, was determined to be the result of alcohol or benzodiazepine dependence. In these patients, an initial increase in anxiety occurred during the withdrawal period, followed by a cessation of their anxiety symptoms.\n", "Hyperresponsiveness in the norepinephrine system can also be caused by continued exposure to high stress. Overactivation of norepinephrine receptors in the prefrontal cortex can be connected to the flashbacks and nightmares frequently experienced by those with PTSD. A decrease in other norepinephrine functions (awareness of the current environment) prevents the memory mechanisms in the brain from registering that the experience and emotions the person is having during a flashback are not associated with the current environment.\n", "In 1992, Janoff-Bulman delineated a theory of trauma response (Shattered Assumptions Theory). Janoff-Bulman posits that humans have basic assumptions about the world in which they live, based on the belief that the world is a benevolent and meaningful place and that the individual has self-worth. These assumptions give the individual the illusion that they have a measure of control over their own lives as well as a feeling of invulnerability. When an individual faces a traumatic event, their deeply held beliefs that the world is a benevolent and meaningful place and that they have a worthy role in that world are shattered. The world is no longer benevolent or predictable.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-12823
Why do banana peels seem to get dark faster after they've been peeled?
Because they oxidize. Once the peel is removed or damaged, enzymes in the exposed tissue (polyphenol oxidases) react with oxygen in the air and turn phenolic compounds into brown pigments. It's the same enzymatic browning you see in apples and some other fruits once you peel or cut them.
[ "The nutritional value of banana peel depends on the stage of maturity and the cultivar; for example plantain peels contain less fibre than dessert banana peels, and lignin content increases with ripening (from 7 to 15% dry matter). On average, banana peels contain 6-9% dry matter of protein and 20-30% fibre (measured as NDF). Green plantain peels contain 40% starch that is transformed into sugars after ripening. Green banana peels contain much less starch (about 15%) when green than plantain peels, while ripe banana peels contain up to 30% free sugars.\n", "A 2008 study reported that ripe bananas fluoresce when exposed to ultraviolet light. This property is attributed to the degradation of chlorophyll leading to the accumulation of a fluorescent product in the skin of the fruit. The chlorophyll breakdown product is stabilized by a propionate ester group. Banana-plant leaves also fluoresce in the same way. Green bananas do not fluoresce. The study suggested that this allows animals which can see light in the ultraviolet spectrum (tetrachromats and pentachromats) to more easily detect ripened bananas.\n\nSection::::Modern cultivation.:Storage and transport.\n", "Some foods, such as bananas, are picked when unripe, are cooled to prevent ripening while they are shipped to market, and then are induced to ripen quickly by exposing them to propylene or ethylene, chemicals produced by plants to induce their own ripening; as flavor and texture changes during ripening, this process may affect those qualities of the treated fruit.\n\nSection::::Chemical composition.\n", "Most people peel a banana by cutting or snapping the stem and divide the peel into sections while pulling them away from the bared fruit. Another way of peeling a banana is done in the opposite direction, from the end with the brownish floral residue—a way usually perceived as \"upside down\". This way is also known as the \"monkey method\", since it is how monkeys are said to peel bananas.\n", "Depending on the thickness and taste, fruit peel is sometimes eaten as part of the fruit, such as with apples. In some cases the peel is unpleasant or inedible, in which case it is removed and discarded, such as with bananas or grapefruits.\n\nThe peel of some fruits — for example, pomegranates — is high in tannins and other polyphenols, and is employed in the production of dyes.\n", "When the fruit have grown to harvesting size and begin to turn yellow they are picked and not clipped. To achieve produce of the highest market value, it is important not to pick the fruit too early in the morning; the turgor is high then, and handling turgid fruit releases the peel oils and may cause spoilage.\n\nSection::::Agronomy.:Postharvest process.\n", "Though the fruit comes into bearing early, its ripening is late. Picking is at the end of October or beginning of November, while the skin is a light green. It matures in December, and when fully mature the colour of the fruit is green.\n\nSection::::Processing.\n", "Using a \"rastrello\", a special spoon-shaped knife, the fresh peel is de-pulped. It is then thoroughly washed with limewater and drip-dried on woven mats or special baskets for 3 to 24 hours, depending on the ripeness of the fruit, the temperature, and the humidity. These steps harden the peel, causing the oil to spurt from the oil glands more easily, and the lime helps neutralize the acidity of the peel.\n", "During the ripening process, bananas produce the gas ethylene, which acts as a plant hormone and indirectly affects the flavor. 
Among other things, ethylene stimulates the formation of amylase, an enzyme that breaks down starch into sugar, influencing the taste of bananas. The greener, less ripe bananas contain higher levels of starch and, consequently, have a \"starchier\" taste. On the other hand, yellow bananas taste sweeter due to higher sugar concentrations. Furthermore, ethylene signals the production of pectinase, an enzyme which breaks down the pectin between the cells of the banana, causing the banana to soften as it ripens.\n", "Export bananas are picked green, and ripen in special rooms upon arrival in the destination country. These rooms are air-tight and filled with ethylene gas to induce ripening. The vivid yellow color consumers normally associate with supermarket bananas is, in fact, caused by the artificial ripening process. Flavor and texture are also affected by ripening temperature. Bananas are refrigerated to between during transport. At lower temperatures, ripening permanently stalls, and the bananas turn gray as cell walls break down. The skin of ripe bananas quickly blackens in the environment of a domestic refrigerator, although the fruit inside remains unaffected.\n", "The fruit hangs in drooping clusters that are circular and about wide. The peel is tan, thin, and leathery with tiny hairs. The flesh is translucent, and the seed is large and black with a circular white spot at the base. This gives the illusion of an eye. The flesh has a musky, sweet taste, which can be compared to the flavor of lychee fruit.\n", "Bananas are a popular fruit consumed worldwide with a yearly production of over 165 million tonnes in 2011. Once the peel is removed, the fruit can be eaten raw or cooked and the peel is generally discarded. Because of this removal of the banana peel, a significant amount of organic waste is generated.\n\nBanana peels are sometimes used as feedstock for cattle, goats, pigs, monkeys, poultry, fish, zebras and several other species, typically on small farms in regions where bananas are grown. There are some concerns over the impact of tannins contained in the peels on animals that consume them.\n", "Like yellow sigatoka, black sigatoka was first documented in the Sigatoka valley of Fiji. It was first recorded in 1964 and, being more virulent, tended to displace yellow sigatoka in banana crops. Therefore, yellow sigatoka is rarely found in locations where black sigatoka occurs. Black sigatoka infection appears on the leaves of crops during the unfurling. Sigatoka spores will incubate on the leaves for up to six days before penetrating the leaf. After this the infection will continue to colonize for a week before the plant exhibits symptoms. The initial symptoms are small spots on the undersides of the leaves. These appear 10–15 days after infection and grow until they appear as black streaks on the leaves. This is what gives black sigatoka its alternate name of black leaf streak. These streaks can dry out and collapse in less than a day. This affects growth and yield of bananas by reducing the total photosynthetic area of the leaf. However, the largest effect on yields is through the toxins produced by black sigatoka that cause a premature ripening of the bananas. These prematurely ripened fruit cannot be sold and must be discarded.\n", "Bananas are green when they are picked because of the chlorophyll their skin contains. Once picked, they begin to ripen; hormones in the bananas convert amino acids into ethylene gas, which stimulates the production of several enzymes. 
These enzymes start to change the color, texture and flavor of the banana. The green chlorophyll supply is stopped and the yellow color of the carotenoids replaces it; eventually, as the enzymes continue their work, the cell walls break down and the bananas turn brown.\n\nSection::::Science and nature.:Biology.:Fish.\n", "Covered fruit ripening bowls are commercially available. The manufacturers claim the bowls increase the amount of ethylene and carbon dioxide gases around the fruit, which promotes ripening.\n\nClimacteric fruits are able to continue ripening after being picked, a process accelerated by ethylene gas. Non-climacteric fruits can ripen only on the plant and thus have a short shelf life if harvested when they are ripe.\n\nSome fruits can be ripened by placing them in a plastic bag with a ripe banana, as the banana will release ethylene.\n\nSection::::Ripening indicators.\n", "When the tip of a banana is pinched with two fingers, it will split and the peel comes off in two clean sections. The inner fibres, or \"strings\", between the fruit and the peel will remain attached to the peel and the stem of the banana can be used as a handle when eating the banana.\n\nSection::::Psychoactive effects of banana peels.\n", "During harvest, pickers must climb ladders to carefully remove branches of fruit from longan trees. Longan fruit remain fresher when still attached to the branch, so efforts are made to prevent the fruit from detaching too early. Mechanical picking would damage the delicate skin of the fruit, so the preferred method is to harvest by hand. Knives and scissors are the most commonly used tools.\n", "Section::::Uses.\n\nCavendish bananas accounted for 47% of global banana production between 1998 and 2000, and the vast majority of bananas entering international trade.\n\nThe fruits of the Cavendish bananas are eaten raw, used in baking, fruit salads, fruit compotes, and to complement foods. The outer skin is partially green when bananas are sold in food markets, and turns yellow when the fruit ripens. As it ripens the starch is converted to sugars turning the fruit sweet. When it reaches its final stage (stage 7), brown/black \"sugar spots\" develop. When overripe, the skin turns black and the flesh becomes mushy.\n", "Climacteric fruits undergo a number of changes during fruit ripening. The major changes include fruit softening, sweetening, decreased bitterness, and colour change. These changes begin in an inner part of the fruit, the locule, which is the gel-like tissue surrounding the seeds. Ripening-related changes initiate in this region once seeds are viable enough for the process to continue, at which point ripening-related changes occur in the next successive tissue of the fruit called the pericarp. As this ripening process occurs, working its way from the inside towards outer most tissue of the fruit, the observable changes of softening tissue, and changes in color and carotenoid content occur. Specifically, this process activates ethylene production and the expression of ethylene-response genes affiliated with the phenotypic changes seen during ripening. \n", "Plantains can be used for cooking at any stage of ripeness, but ripe ones can be eaten raw. As the plantain ripens, it becomes sweeter and its colour changes from green to yellow to black, just like bananas. Green plantains are firm and starchy, and resemble potatoes in flavour. Yellow plantains are softer and starchy yet sweet. 
Extremely ripe plantains have softer, deep yellow pulp that is much sweeter.\n", "BULLET::::- Scientists discovered in 2008 that when a banana becomes ripe and ready to eat, it glows bright indigo under a black light. Some insects, as well as bats and birds, may see into the ultraviolet, because they are tetrachromats and can use this information to tell when a banana is ripe and ready to eat. The glow is the result of a chemical created as the green chlorophyll in the peel breaks down.\n\nSection::::In culture.:Military.\n", "Black sigatoka is a fungal leaf spot disease that is considered one of the most devastating blights affecting contemporary banana cultivation. Present across much of the world and with the potential to reduce yields by 50%, Black Sigatoka poses a very real and acute threat to subsistence banana farmers, as most of the world’s export-grade bananas are highly susceptible to the disease.\n", "Mature, yellow plantains can be peeled like typical dessert bananas; the pulp is softer than in immature, green fruit and some of the starch has been converted to sugar. They can be eaten raw, but are not as flavourful as dessert bananas, so are usually cooked. When mature, yellow plantains are fried, they tend to caramelize, turning a golden-brown color. They can also be boiled, baked, microwaved or grilled over charcoal, either peeled or unpeeled.\n", "Banana peels are also used for water purification, to produce ethanol, cellulase, laccase, as fertilizer and in composting.\n\nSection::::In comical context.\n", "Section::::Ultimate Causation Hypotheses.\n\nSection::::Ultimate Causation Hypotheses.:Fruit theory.\n\nThis theory encompasses the idea that this trait became favorable in the increased ability to find ripe fruit against a mature leaf background. Research has found that the spectral separation between the L and the M cones is closely proportional to the optimal detection of fruit against foliage. The reflectance spectra of fruits and leaves naturally eaten by the \"Alouatta seniculus\" were analyzed and found that the sensitivity in the L and M cone pigments is optimal for detecting fruit among leaves.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-19173
How is skin cancer caused (or assisted) by sun burns if skin cells die and replace themselves relatively quickly? Wouldn’t the damaged skin cells die before they can become cancerous?
The [skin has several layers]( URL_0 ), and only the layers of the epidermis "die and replace themselves" rapidly. And the way that happens is that cells in the deepest layer divide and push "older" cells up and out, where they eventually harden and flake off. But cells "replace themselves" and "multiply" by division; one cell copies all of its DNA and splits into two copies. Ultraviolet light, x-rays, and gamma radiation are "ionizing" radiation: the photons have enough energy to knock electrons off atoms, and those atoms will immediately go and chemically react with the nearest thing, disrupting the delicate chemistry in a cell. Most of the time, cells die from this disruption. However, sometimes what's disrupted is the DNA strands, and the cell doesn't die from that. It just copies the damaged DNA strands when it "divides" and "multiplies". Cancer is actually cells from your own body whose DNA is damaged in a way that doesn't kill the cell, but does stop it from performing its intended purpose (functioning as a skin cell or a liver cell or a heart muscle cell, for example). So cancer is live cells that multiply out of control and don't have a function; they just live, consume, and multiply. So, anyway, sunburn is damaged cells. Most of them are dead or dying. But some of them are OK, except for damaged DNA that can cause them to *become* cancer, and then multiply as such.
[ "The greatest risk factor is high total exposure to ultraviolet radiation from the Sun. Other risks include prior scars, chronic wounds, actinic keratosis, lighter skin, Bowen's disease, arsenic exposure, radiation therapy, poor immune system function, previous basal cell carcinoma, and HPV infection. Risk from UV radiation is related to total exposure, rather than early exposure. Tanning beds are becoming another common source of ultraviolet radiation. It begins from squamous cells found within the skin. Diagnosis is often based on skin examination and confirmed by tissue biopsy.\n", "There are certain genetic conditions, for example xeroderma pigmentosum, that increase a person's susceptibility to sunburn and subsequent skin cancers. These conditions involve defects in DNA repair mechanisms which in turn decreases the ability to repair DNA that has been damaged by UV radiation.\n\nSection::::Causes.:Medications.\n\nThe risk of a sunburn can be increased by pharmaceutical products that sensitize users to UV radiation. Certain antibiotics, oral contraceptives, and tranquillizers have this effect.\n\nSection::::Causes.:UV intensity.\n\nThe UV Index indicates the risk of getting a sunburn at a given time and location. Contributing factors include:\n", "At least 20% of MCC tumors are not infected with MCV, suggesting that MCC may have other causes, especially sunlight or ultraviolet light as in a tanning beds. MCC can also occur together with other sun exposure-related skin cancers that are not infected with MCV (i.e. basal cell carcinoma, squamous cell carcinoma, melanoma). Ultraviolet radiation such as in sun exposure increases the risk in MCC development, consistent with the fact that MCCs occur more commonly in sun-exposed areas.\n\nSection::::Pathophysiology.:Immunosuppression.\n", "Free radical damage to DNA can occur as a result of exposure to ionizing radiation or to radiomimetic compounds. Damage to DNA as a result of free radical attack is called indirect DNA damage because the radicals formed can diffuse throughout the body and affect other organs. Malignant melanoma can be caused by indirect DNA damage because it is found in parts of the body not exposed to sunlight. DNA is vulnerable to radical attack because of the very labile hydrogens that can be abstracted and the prevalence of double bonds in the DNA bases that radicals can easily add to.\n", "BULLET::::- History of sunburn: Studies show that even a single episode of painful sunburn as a child can increase an individual's risk of developing AK as an adult. Six or more painful sunburns over the course of a lifetime was found to be significantly associated with the likelihood of developing AK.\n\nSection::::Risk factors.\n\nSection::::Risk factors.:Skin pigmentation.\n", "BCC and SCC often carry a UV-signature mutation indicating that these cancers are caused by UVB radiation via direct DNA damage. However malignant melanoma is predominantly caused by UVA radiation via indirect DNA damage. The indirect DNA damage is caused by free radicals and reactive oxygen species. Research indicates that the absorption of three sunscreen ingredients into the skin, combined with a 60-minute exposure to UV, leads to an increase of free radicals in the skin, if applied in too little quantities and too infrequently. However, the researchers add that newer creams often do not contain these specific compounds, and that the combination of other ingredients tends to retain the compounds on the surface of the skin. 
They also add that frequent re-application reduces the risk of radical formation.\n", "Greater than 90% of cases are caused by exposure to ultraviolet radiation from the Sun. This exposure increases the risk of all three main types of skin cancer. Exposure has increased partly due to a thinner ozone layer. Tanning beds are becoming another common source of ultraviolet radiation. For melanomas and basal-cell cancers, exposure during childhood is particularly harmful. For squamous-cell skin cancers, total exposure, irrespective of when it occurs, is more important. Between 20% and 30% of melanomas develop from moles. People with light skin are at higher risk, as are those with poor immune function, such as from medications or HIV/AIDS. Diagnosis is by biopsy.\n", "BULLET::::- Extent of sun exposure: Cumulative sun exposure leads to an increased risk for development of AKs. In one U.S. study, AKs were found in 55% of fair-skinned men with high cumulative sun exposure, and in only 19% of fair-skinned men with low cumulative sun exposure in an age-matched cohort (the percentages for women in this same study were 37% and 12% respectively). Furthermore, the use of sunscreen (SPF 17 or higher) has been found to significantly reduce the development of AK lesions, and also promotes the regression of existing lesions.\n", "Ultraviolet radiation causes sunburns and increases the risk of three types of skin cancer: melanoma, basal-cell carcinoma and squamous-cell carcinoma. Of greatest concern is that the melanoma risk increases in a dose-dependent manner with the number of a person's lifetime cumulative episodes of sunburn. It has been estimated that over 1/3 of melanomas in the United States and Australia could be prevented with regular sunscreen use.\n\nSection::::Causes.\n", "Section::::Prognosis.\n\nThe mortality rate of basal-cell and squamous-cell carcinoma is around 0.3%, causing 2000 deaths per year in the US. In comparison, the mortality rate of melanoma is 15–20% and it causes 6500 deaths per year. Even though it is much less common, malignant melanoma is responsible for 75% of all skin cancer-related deaths.\n", "In general, cancers are caused by damage to DNA. UVA light mainly causes thymine dimers. UVA also produces reactive oxygen species, and these inflict other DNA damage, primarily single-strand breaks, oxidized pyrimidines and the oxidized purine 8-oxoguanine (a mutagenic DNA change) at 1/10th, 1/10th and 1/3rd the frequencies of UVA-induced thymine dimers, respectively.\n", "People who have received solid organ transplants are at a significantly increased risk of developing squamous cell carcinoma due to the use of chronic immunosuppressive medication. While the risk of developing all skin cancers increases with these medications, this effect is particularly severe for SCC, with hazard ratios as high as 250 being reported, versus 40 for basal cell carcinoma. The incidence of SCC development increases with time posttransplant. Heart and lung transplant recipients are at the highest risk of developing SCC due to more intensive immunosuppressive medications used. Squamous cell cancers of the skin in individuals on immunotherapy or suffering from lymphoproliferative disorders (i.e. leukemia) tend to be much more aggressive, regardless of their location. The risk of SCC, and non-melanoma skin cancers generally, varies with the immunosuppressive drug regimen chosen. 
The risk is greatest with calcineurin inhibitors like cyclosporine and tacrolimus, and least with mTOR inhibitors, such as sirolimus and everolimus. The antimetabolites azathioprine and mycophenolic acid have an intermediate risk profile.\n", "Having multiple severe sunburns increases the likelihood that future sunburns develop into melanoma due to cumulative damage. The sun and tanning beds are the main sources of UV radiation that increase the risk for melanoma and living close to the equator increases exposure to UV radiation.\n\nSection::::Cause.:Genetics.\n", "Evidence also suggests that the human papillomavirus (HPV) plays a role in the development of AKs. The HPV virus has been detected in AKs, with measurable HPV viral loads (1 HPV-DNA copy per less than 50 cells) measured in 40% of AKs. Similar to UV radiation, higher levels of HPV found in AKs reflect enhanced viral DNA replication. This is suspected to be related to the abnormal keratinocyte proliferation and differentiation in AKs, which facilitate an environment for HPV replication. This in turn may further stimulate the abnormal proliferation that contributes to the development of AKs and carcinogenesis.\n\nSection::::Causes.:Ultraviolet radiation.\n", "A family history of melanoma greatly increases a person's risk because mutations in several genes have been found in melanoma-prone families. People with a history of one melanoma are at increased risk of developing a second primary tumor.\n\nFair skin is the result of having less melanin in the skin, which means there is less protection from UV radiation. A family history could indicate a genetic predisposition to melanoma.\n\nSection::::Pathophysiology.\n", "Section::::Coverage limitations.:Limited coverage of skin cancers.\n\nSkin cancer is the most commonly diagnosed form of cancer. The primary categories of skin cancer are basal-cell carcinoma (BCC), squamous-cell carcinoma (SCC), and melanoma. The first two, collectively known as non-melanoma skin cancers (NMSC), are highly unlikely to metastasize and comprise the majority of skin cancer diagnoses.\n", "AC may occur with skin lesions of actinic keratosis or skin cancer elsewhere, particularly on the head and neck since these are the most sun exposed areas. Rarely it may represent a genetic susceptibility to light damage (e.g. xeroderma pigmentosum or actinic prurigo).\n\nSection::::Causes.\n\nAC is caused by chronic and excessive exposure to ultraviolet radiation in sunlight.\n\nRisk factors include:\n", "Replication stress, along with the selection for inactivating mutations in DNA damage response genes in the evolution of the tumor, leads to downregulation and/or loss of some DNA damage response mechanisms, and hence loss of DNA repair and/or senescence/programmed cell death. In experimental mouse models, loss of DNA damage response-mediated cell senescence was observed after using a short hairpin RNA (shRNA) to inhibit the double-strand break response kinase ataxia telangiectasia (ATM), leading to increased tumor size and invasiveness. Humans born with inherited defects in DNA repair mechanisms (for example, Li-Fraumeni syndrome) have a higher cancer risk.\n", "Squamous cell carcinoma is the second-most common cancer of the skin (after basal-cell carcinoma but more common than melanoma). It usually occurs in areas exposed to the sun. Sunlight exposure and immunosuppression are risk factors for SCC of the skin, with chronic sun exposure being the strongest environmental risk factor. 
There is a risk of metastasis starting more than 10 years after diagnosable appearance of squamous cell carcinoma, but the risk is low, though much higher than with basal-cell carcinoma. Squamous cell cancers of the lip and ears have high rates of local recurrence and distant metastasis. In a recent study, it has also been shown that the deletion or severe down-regulation of a gene titled Tpl2 (tumor progression locus 2) may be involved in the progression of normal keratinocytes into becoming squamous cell carcinoma.\n", "Exposure to ultraviolet radiation (UVR), whether from the sun or tanning devices is known to be a major cause of the three main types of skin cancer: non-melanoma skin cancer (basal cell carcinoma and squamous cell carcinoma) and melanoma. Overexposure to UVR induces at least two types of DNA damage: cyclobutane–pyrimidine dimers (CPDs) and 6–4 photoproducts (6–4PPs). While DNA repair enzymes can fix some mutations, if they are not sufficiently effective, a cell will acquire genetic mutations which may cause the cell to die or become cancerous. These mutations can result in cancer, aging, persistent mutation and cell death. For example, squamous cell carcinoma can be caused by a UVB-induced mutation in the p53 gene.\n", "Section::::Cancer.\n\nHsp70 is overexpressed in malignant melanoma and underexpressed in renal cell cancer.\n\nSection::::Expression in skin tissue.\n\nBoth HSP70 and HSP47 were shown to be expressed in dermis and epidermis following laser irradiation, and the spatial and temporal changes in HSP expression patterns define the laser-induced thermal damage zone and the process of healing in tissues. HSP70 may define biochemically the thermal damage zone in which cells are targeted for destruction, and HSP47 may illustrate the process of recovery from thermally induced damage.\n\nSection::::Family members.\n\nProkaryotes express three Hsp70 proteins: DnaK, HscA (Hsc66), and HscC (Hsc62).\n", "Section::::Signs and symptoms.:Other.\n\nMerkel cell carcinomas are most often rapidly growing, non-tender red, purple or skin colored bumps that are not painful or itchy. They may be mistaken for a cyst or another type of cancer.\n\nSection::::Causes.\n\nUltraviolet radiation from sun exposure is the primary environmental cause of skin cancer. This can occur in professions such as farming. Other risk factors that play a role include:\n\nBULLET::::- Smoking tobacco\n\nBULLET::::- HPV infections increase the risk of squamous-cell skin cancer.\n", "A number of rare mutations, which often run in families, greatly increase melanoma susceptibility. Several genes increase risks. Some rare genes have a relatively high risk of causing melanoma; some more common genes, such as a gene called MC1R that causes red hair, have a relatively lower elevated risk. Genetic testing can be used to search for the mutations.\n", "Direct DNA damage\n\nDirect DNA damage can occur when DNA directly absorbs a UVB photon, or for numerous other reasons. UVB light causes thymine base pairs next to each other in genetic sequences to bond together into pyrimidine dimers, a disruption in the strand, which reproductive enzymes cannot copy. 
It causes sunburn and it triggers the production of melanin.\n\nOther names for the \"direct DNA damage\" are: \n\nBULLET::::- thymine dimers\n\nBULLET::::- pyrimidine dimers\n\nBULLET::::- Cyclobutane Pyrimidine Dimers (CPDs)\n\nBULLET::::- UV-endonuclease-sensitive-sites (ESS)\n", "The absorption spectrum of DNA shows a strong absorption for UVB radiation and a much lower absorption for UVA radiation. Since the action spectrum of sunburn is indistinguishable from the absorption spectrum of DNA, it is generally accepted that the direct DNA damages are the cause of sunburn. \n\nWhile the human body reacts to direct DNA damages with a painful warning signal, no such warning signal is generated from indirect DNA damage.\n\nSection::::Sunscreen and melanoma.\n\nA study by Hanson suggests sunscreen that penetrates into the skin and thereby amplifies the amount of free radicals and oxidative stress\n" ]
[ "All skin cells die and replace themselves." ]
[ "The skin cells on the surface of the epidermis are the only skin cells that die and regenerate." ]
[ "false presupposition" ]
[ "All skin cells die and replace themselves." ]
[ "false presupposition" ]
[ "The skin cells on the surface of the epidermis are the only skin cells that die and regenerate." ]
2018-01553
Why is the color that seems the brightest (yellow) towards the middle of the visible spectrum instead of a color at one of the ends?
Perceived brightness relates to how sensitive the human eye is to various colors of light, not to the physics of the light waves. In daylight, the eye's overall sensitivity peaks around 555 nm, in the yellow-green middle of the visible spectrum, and falls off toward both the red and violet ends, so mid-spectrum colors look brighter than end-of-spectrum colors of the same physical intensity.
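To put rough numbers on that sensitivity: the Rec. 709 luma coefficients used for sRGB video are a standard approximation of the eye's daylight (photopic) response, weighting green most heavily and blue least. A minimal sketch (the weights are the published standard; the color list is just for illustration):

```python
# Relative luminance of light mixes, using the Rec. 709 luma
# coefficients, which approximate photopic eye sensitivity:
# green counts most, blue least.

def relative_luminance(r, g, b):
    """r, g, b are linear-light values in [0, 1]."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

for name, rgb in [("red", (1, 0, 0)),
                  ("green", (0, 1, 0)),
                  ("blue", (0, 0, 1)),
                  ("yellow (red+green)", (1, 1, 0))]:
    print(f"{name:>20}: {relative_luminance(*rgb):.4f}")

# yellow comes out ~0.93, brighter than any single primary,
# because it drives both the L and M cones strongly at once.
```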
[ "Section::::Science and nature.:Optics, color printing, and computer screens.\n\nYellow is found between green and orange on the spectrum of visible light. It is the color the human eye sees when it looks at light with a dominant wavelength between 570 and 590 nanometers.\n", "Section::::Science and nature.:Astronomy.\n\nStars of spectral classes F have color temperatures that make them look \"yellowish\". The first astronomer to classify stars according to their color was F. G. W. Struve in 1827. One of his classifications was \"flavae\", or yellow, and this roughly corresponded to stars in the modern spectral range F5 to K0. The Strömgren photometric system for stellar classification includes a 'y' or yellow filter that is centered at a wavelength of 550 nm and has a bandwidth of 20–30 nm.\n\nSection::::Science and nature.:Biology.\n", "Newton's own color circle has yellow directly opposite the boundary between indigo and violet. These results, that the complement of yellow is a wavelength shorter than 450 nm, are derivable from the modern CIE 1931 system of colorimetry if it is assumed that the yellow is about 580 nm or shorter wavelength, and the specified white is the color of a blackbody radiator of temperature 2800 K or lower (that is, the white of an ordinary incandescent light bulb). More typically, with a daylight-colored or around 5000 to 6000 K white, the complement of yellow will be in the blue wavelength range, which is the standard modern answer for the complement of yellow.\n", "Because of the characteristics of paint pigments and use of different color wheels, painters traditionally regard the complement of yellow as the color indigo or blue-violet.\n\nSection::::Science and nature.:Optics, color printing, and computer screens.:Lasers.\n", "The color yellow, for example, is perceived when the L cones are stimulated slightly more than the M cones, and the color red is perceived when the L cones are stimulated significantly more than the M cones. Similarly, blue and violet hues are perceived when the S receptor is stimulated more. Cones are most sensitive to light at wavelengths around 420 nm. However, the lens and cornea of the human eye are increasingly absorptive to shorter wavelengths, and this sets the short wavelength limit of human-visible light to approximately 380 nm, which is therefore called 'ultraviolet' light. People with aphakia, a condition where the eye lacks a lens, sometimes report the ability to see into the ultraviolet range. At moderate to bright light levels where the cones function, the eye is more sensitive to yellowish-green light than other colors because this stimulates the two most common (M and L) of the three kinds of cones almost equally. At lower light levels, where only the rod cells function, the sensitivity is greatest at a blueish-green wavelength.\n", "Each of the three types of cones in the retina of the eye contains a different type of photosensitive pigment, which is composed of a transmembrane protein called opsin and a light-sensitive molecule called 11-cis retinal. Each different pigment is especially sensitive to a certain wavelength of light (that is, the pigment is most likely to produce a cellular response when it is hit by a photon with the specific wavelength to which that pigment is most sensitive). 
The three types of cones are L, M, and S, which have pigments that respond best to light of long (especially 560 nm), medium (530 nm), and short (420 nm) wavelengths respectively.\n", "The yellow on a color television or computer screen is created in a completely different way; by combining green and red light at the right level of intensity. (See RGB color model).\n\nSection::::Science and nature.:Optics, color printing, and computer screens.:Complementary colors.\n\nTraditionally, the complementary color of yellow is purple; the two colors are opposite each other on the color wheel long used by painters. Vincent Van Gogh, an avid student of color theory, used combinations of yellow and purple in several of his paintings for the maximum contrast and harmony.\n", "Section::::Opponent process.\n\nThe color opponent process is a color theory that states that the human visual system interprets information about color by processing signals from cone and rod cells in an antagonistic manner. The three types of cone cells have some overlap in the wavelengths of light to which they respond, so it is more efficient for the visual system to record differences between the responses of cones, rather than each type of cone's individual response. The opponent color theory suggests that there are three opponent channels:\n\nBULLET::::- Red versus green.\n\nBULLET::::- Blue versus yellow\n", "The normal three kinds of light-sensitive photoreceptor cells in the human eye (cone cells) respond most to yellow (long wavelength or L), green (medium or M), and violet (short or S) light (peak wavelengths near 570 nm, 540 nm and 440 nm, respectively). The difference in the signals received from the three kinds allows the brain to differentiate a wide gamut of different colors, while being most sensitive (overall) to yellowish-green light and to differences between hues in the green-to-orange region.\n", "Section::::Reception.\n", "The color defined as yellow in the Munsell color system (Munsell 5Y) is shown at apex of color wheel. The \"Munsell color system\" is a color space that specifies colors based on three color dimensions: hue, value (lightness), and chroma (color purity), spaced uniformly in three dimensions in the elongated oval at an angle shaped Munsell color solid according to the logarithmic scale which governs human perception. In order for all the colors to be spaced uniformly, it was found necessary to use a color wheel with five primary colors—red, yellow, green, blue, and purple.\n", "Shades of yellow\n\nVarieties of the color yellow may differ in hue, chroma (also called saturation, intensity, or colorfulness) or lightness (or value, tone, or brightness), or in two or three of these qualities. Variations in value are also called tints and shades, a tint being a yellow or other hue mixed with white, a shade being mixed with black. A large selection of these various colors is shown below.\n\nSection::::Tints of yellow.\n\nSection::::Tints of yellow.:Light yellow.\n\nDisplayed at right is the web color light yellow.\n\nSection::::Tints of yellow.:Cream.\n", "BULLET::::- Methyl yellow (p-Dimethylaminoazobenzene) is a pH indicator used to determine acidity. It changes from yellow at pH=4.0 to red at pH=2.9.\n\nBULLET::::- Yellow fireworks are produced by adding sodium compounds to the firework mixture. Sodium has a strong emission at 589.3 nm (D-line), a very slightly orange-tinted yellow.\n\nBULLET::::- Amongst the elements, sulfur and gold are most obviously yellow. 
Phosphorus, arsenic and antimony have allotropes which are yellow or whitish-yellow; fluorine and chlorine are pale yellowish gases.\n\nSection::::History, art, and fashion.:Minerals and chemistry.:Pigments.\n", "BULLET::::- Stygian colors: these are simultaneously dark and impossibly saturated. For example, to see \"stygian blue\": staring at bright yellow causes a dark blue afterimage, then on looking at black, the blue is seen as blue against the black, but due to lack of the usual brightness contrast it seems to be as dark as the black.\n", "BULLET::::- Orpiment, also called King's Yellow or Chinese Yellow is arsenic trisulfide () and was used as a paint pigment until the 19th century when, because of its high toxicity and reaction with lead-based pigments, it was generally replaced by Cadmium Yellow.\n\nBULLET::::- azo dye-based pigment (a brightly colored transparent or semitransparent dye with a white pigment) is used as the colorant in most modern paints requiring either a highly saturated yellow or simplicity of color mixing. The most common is the monoazo arylide yellow family, first marketed as Hansa Yellow.\n\nSection::::History, art, and fashion.:Minerals and chemistry.:Dyes.\n", "The characteristic colors are, from long to short wavelengths (and, correspondingly, from low to high frequency), red, orange, yellow, green, blue, and violet. Sufficient differences in wavelength cause a difference in the perceived hue; the just-noticeable difference in wavelength varies from about 1 nm in the blue-green and yellow wavelengths, to 10 nm and more in the longer red and shorter blue wavelengths. Although the human eye can distinguish up to a few hundred hues, when those pure spectral colors are mixed together or diluted with white light, the number of distinguishable chromaticities can be quite high.\n", "If, instead of white, we stare at yellow, then the afterimage, or physiological color spectrum, is violet. Yellow, unlike white, does not fully stimulate and exhaust the retina's activity. Yellow partially stimulates points on the retina and leaves those points partially unstimulated. The retina's activity has been qualitatively divided and separated into two parts. The unstimulated part results in a violet afterimage. Yellow and violet are the complement of each other because together they add up to full retinal activity. Yellow is closer to white, so it activates the retina more than violet, which is closer to black.\n", "Displayed at right is the web color cream, a pale tint of yellow.\n\nSection::::Tints of yellow.:Lemon chiffon.\n\nDisplayed at right is the web color lemon chiffon.\n\nLemon chiffon is a color that is reminiscent of the color of lemon chiffon cake.\n\nSection::::Computer web color yellow.\n\nSection::::Computer web color yellow.:Yellow (RGB) (X11 yellow) (color wheel yellow).\n\nThe color box at right shows the most intense yellow representable in 8-bit RGB color model; yellow is a \"secondary\" color in an additive RGB space.\n", "The flame is yellow because of its temperature. To produce enough soot to be luminous, the flame is operated at a lower temperature than its efficient heating flame (see Bunsen burner). The colour of simple incandescence is due to black-body radiation. By Planck's law, as the temperature decreases, the peak of the black-body radiation curve moves to longer wavelengths, i.e. from the blue to the yellow. 
However, the blue light from a gas burner's premixed flame is primarily a product of molecular emission (Swan bands) rather than black-body radiation.\n", "Lasers emitting in the yellow part of the spectrum are less common and more expensive than most other colors. In commercial products diode pumped solid state (DPSS) technology is employed to create the yellow light. An infrared laser diode at 808 nm is used to pump a crystal of neodymium-doped yttrium vanadium oxide (Nd:YVO4) or neodymium-doped yttrium aluminium garnet (Nd:YAG) and induces it to emit at two frequencies (281.76 THz and 223.39 THz: 1064 nm and 1342 nm wavelengths) simultaneously. This deeper infrared light is then passed through another crystal containing potassium, titanium and phosphorus (KTP), whose non-linear properties generate light at a frequency that is the sum of the two incident beams (505.15 THz); in this case corresponding to the wavelength of 593.5 nm (\"yellow\"). This wavelength is also available, though even more rarely, from a helium–neon laser. However, this is not a true yellow, as it exceeds 590 nm. A variant of this same DPSS technology using slightly different starting frequencies was made available in 2010, producing a wavelength of 589 nm, which is considered a true yellow color. The use of yellow lasers at 589 nm and 594 nm has recently become more widespread thanks to the field of optogenetics.\n", "The ability of the human eye to distinguish colors is based upon the varying sensitivity of different cells in the retina to light of different wavelengths. Humans are trichromatic—the retina contains three types of color receptor cells, or cones. One type, relatively distinct from the other two, is most responsive to light that is perceived as blue or blue-violet, with wavelengths around 450 nm; cones of this type are sometimes called \"short-wavelength cones\", \"S cones\", or \"blue cones\". The other two types are closely related genetically and chemically: \"middle-wavelength cones\", \"M cones\", or \"green cones\" are most sensitive to light perceived as green, with wavelengths around 540 nm, while the \"long-wavelength cones\", \"L cones\", or \"red cones\", are most sensitive to light that is perceived as greenish yellow, with wavelengths around 570 nm.\n", "Section::::Individual efforts.\n", "Section::::Formal definition.\n\nThe UNECE Regulations formally define selective yellow in terms of the CIE 1931 colour space as follows:\n\nFor front fog lamps, the limit towards white is extended:\n\nThe entirety of the basic selective yellow definition lies outside the gamut of the sRGB colour space—such a pure yellow cannot be represented using RGB primaries. The colour swatch above is a desaturated approximation, created by taking the centroid of the standard selective yellow definition at (0.502, 0.477) and moving it towards the D65 white point, until it meets the sRGB gamut triangle at (0.478, 0.458).\n", "Since each wavelength \"w\" stimulates each of the 3 types of cone cells to a known extent, these extents may be represented by 3 functions \"s\"(\"w\"), \"m\"(\"w\"), \"l\"(\"w\") corresponding to the response of the \"S\", \"M\", and \"L\" cone cells, respectively.\n", "The average color of starlight in the observable universe is a shade of yellowish-white that has been given the name Cosmic Latte. \n\nStarlight spectroscopy, examination of the stellar spectra, was pioneered by Joseph Fraunhofer in 1814. 
Starlight can be understood to be composed of three main spectra types, \"continuous spectrum\", \"emission spectrum\", and \"absorption spectrum\".\n\nSection::::Oldest starlight.\n" ]
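To make the cone-response functions \"s\"(\"w\"), \"m\"(\"w\"), \"l\"(\"w\") mentioned in the passages above concrete, here is a toy numerical sketch. Only the peak wavelengths come from the passages; the Gaussian shape and the 40 nm width are invented for illustration (real cone fundamentals are not Gaussian):

```python
# Crude Gaussian stand-ins for the cone responses s(w), m(w), l(w).
# Peaks (~420, ~530, ~560 nm) follow the passages; widths are made up.
import math

def response(w, peak, width=40.0):
    return math.exp(-((w - peak) / width) ** 2)

for w in (450, 530, 570, 650):  # blue, green, yellow, red light
    s, m, l = (response(w, p) for p in (420.0, 530.0, 560.0))
    print(f"{w} nm: S={s:.2f} M={m:.2f} L={l:.2f} (M+L={m+l:.2f})")

# At 570 nm both M and L still respond appreciably, while at 450 nm
# and 650 nm the combined M+L response collapses -- one toy view of
# why mid-spectrum yellow looks brighter than the spectrum's ends.
```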
[]
[]
[ "normal" ]
[ "Brightness has to due with where on the visible spectrum a color is." ]
[ "false presupposition", "normal" ]
[ "Brightness has to do with how sensitive our eyes are to a certain color. " ]
2018-09236
What kind of service or "package" does a Tier 3/Last Mile ISP get from a Tier 1/Infrastructure ISP?
Tier 3 providers buy transit per gigabyte from Tier 1 ISPs and use the Tier 1 ISPs' infrastructure. Tier 1s are forced by law to allow a minimum number of Tier 3s so there aren't as many monopolies (we saw how that turned out). The Tier 3s generally turn that capacity around, sell it at an upcharge, and pocket the difference to keep themselves running. It's cheaper because the Tier 3 is guaranteeing it'll buy so many lines from the Tier 1, even if they aren't being used. Bulk discount and all that. Tier 3 traffic was usually put on the "back burner" of traffic priorities and got slower speeds because of it. I don't know if they have caps or not, since the ISP I work for doesn't implement caps.
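For a concrete sense of how those bulk transit purchases are typically metered: rather than a flat per-gigabyte price, IP transit is commonly billed "burstable", at the 95th percentile of 5-minute usage samples, with a committed minimum acting as the bulk-discount floor the answer describes. A minimal sketch of that billing math; the rate and commit figures below are invented:

```python
# Toy 95th-percentile ("burstable") transit bill: sample usage every
# 5 minutes, drop the top 5% of samples, charge the highest remaining
# sample at a per-Mbps rate, with the commit as a billing floor.
import random

def burstable_bill(samples_mbps, rate_per_mbps, commit_mbps):
    ranked = sorted(samples_mbps)
    p95 = ranked[int(len(ranked) * 0.95) - 1]  # 95th-percentile sample
    billable = max(p95, commit_mbps)           # never below the commit
    return billable, billable * rate_per_mbps

# Roughly one month of 5-minute samples, simulated here.
month = [random.uniform(100, 900) for _ in range(8640)]
mbps, cost = burstable_bill(month, rate_per_mbps=0.50, commit_mbps=200)
print(f"billable: {mbps:.0f} Mbps -> ${cost:,.2f}")
```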
[ "Tier 2 network\n\nA Tier 2 network is an Internet service provider which engages in the practice of peering with other networks, but which also purchases IP transit to reach some portion of the Internet.\n\nTier 2 providers are the most common Internet service providers as it is much easier to purchase transit from a Tier 1 network than it is to peer with them and attempt becoming a Tier 1 carrier.\n\nThe term Tier 3 is sometimes also used to describe networks who solely purchase IP transit from other networks to reach the Internet.\n\nSection::::See also.\n\nBULLET::::- Peering point\n", "It can be difficult to determine whether a network is paying for peering or transit, as these business agreements are rarely public information, or are covered under a non-disclosure agreement. The Internet peering community is roughly the set of peering coordinators present at the Internet exchange points on more than one continent. The subset representing Tier 1 networks is collectively understood in a loose sense, but not published as such.\n\nCommon definitions of Tier 2 and Tier 3 networks:\n", "A typical scenario for this characteristic involves a network that was the incumbent telecommunications company in a specific country or region, usually tied to some level of government-supported monopoly. Within their specific countries or regions of origin, these networks maintain peering policies which mimic those of Tier 1 networks (such as lack of openness to new peering relationships and having existing peering with every other major network in that region). However, this network may then extend to another country, region, or continent outside of its core region of operations, where it may purchase transit or peer openly like a Tier 2 network.\n", "More appropriately then, peering means the exchange of an equitable and fair amount of data-miles between two networks, agreements of which do not preclude any pay-for-transit contracts to exist between the very same parties. On the subject of routing, settlement-free peering involves conditions disallowing the abuse of the other's network by sending it traffic not destined for that network (i.e. intended for transit). Transit agreements however would typically cater for just such outbound packets. Tier 1 providers are more central to the Internet backbone and would only purchase transit from other Tier 1 providers, while selling transit to providers of all tiers. Given their huge networks, Tier 1 providers do not participate in public Internet Exchanges but rather sell transit services to such participants. \n", "A bilateral private peering agreement typically involves a direct physical link between two partners. Traffic from one network to the other is then primarily routed through that direct link.\n", "ISPs may engage in peering, where multiple ISPs interconnect at peering points or Internet exchange points (IXs), allowing routing of data between each network, without charging one another for the data transmitted—data that would otherwise have passed through a third upstream ISP, incurring charges from the upstream ISP.\n\nISPs requiring no upstream and having only customers (end customers or peer ISPs) are called Tier 1 ISPs.\n", "There is no authority that defines tiers of networks participating in the Internet. The most common and well-accepted definition of a Tier 1 network is a network that can reach every other network on the Internet without purchasing IP transit or paying for peering. 
By this definition, a Tier 1 network must be a transit-free network (purchases no transit) that peers for free with every other Tier 1 network and can reach all major networks on the Internet. Not all transit-free networks are Tier 1 networks, as it is possible to become transit-free by paying for peering, and it is also possible to be transit-free without being able to reach all major networks on the Internet.\n", "Tier 1 network\n\nA Tier 1 network is an Internet Protocol (IP) network that can reach every other network on the Internet solely via settlement-free interconnection (also known as settlement-free peering). Tier 1 networks can exchange traffic with other Tier 1 networks without having to pay any fees for the exchange of traffic in either direction, while some Tier 2 networks and all Tier 3 networks must pay to transmit traffic on other networks.\n", "Section::::List of Tier 1 networks.\n\nThese networks are recognized by the Internet community as Tier 1 networks, even if some of them appear to have transit providers in CAIDA ranking. \n\nWhile most of these Tier 1 providers offer global coverage (based on the published network map on their respective public websites), there are some which are restricted geographically. However, these do offer global coverage for mobiles and IP-VPN type services, which are unrelated to being a Tier 1 provider.\n\nA 2008 report shows Internet traffic relying less on U.S. networks than previously.\n\nSection::::Regional Tier 1 networks.\n", "Tier 1\n\nTier 1 or Tier One may refer to:\n\nBULLET::::- Tier 1 capital, core measure of a bank's financial strength\n\nBULLET::::- Tier 1 network, or Tier 1 carrier, an ISP which can connect to the entire Internet without paying transit fees\n\nBULLET::::- Scaled Composites Tier One, Scaled Composites' suborbital human spaceflight program\n\nBULLET::::- WTA Tier I Events, a series of tennis tournaments on the women's tour\n\nBULLET::::- Tier 1 visas under the Points-based immigration system (United Kingdom)\n\nBULLET::::- Tier 1 - UK Nuclear Site Management & Licensing, nuclear Site management licensees\n", "The most widely quoted source for identifying Tier 1 networks is published by Renesys Corporation, but the base information to prove the claim is publicly accessible from many locations, such as the RIPE RIS database, the Oregon Route Views servers, Packet Clearing House, and others.\n", "BULLET::::- Tier 2 network: A network that peers for free with some networks, but still purchases IP transit or pays for peering to reach at least some portion of the Internet.\n\nBULLET::::- Tier 3 network: A network that solely purchases transit/peering from other networks to participate in the Internet.\n\nSection::::History.\n", "Diverse routing is where the carrier provides more than one route to bring the ISDN 30's from the exchange, or exchanges, (as in dual parenting), but they may share underground ducting and cabinets, (those green boxes by the side of the road.)\n\nSection::::Separacy.\n\nSeparacy is where the carrier can provide more than one route to bring the ISDN 30's from the exchange, or exchanges, (as in dual parenting), but they may not share underground ducting and cabinets, and therefore should be absolutely separate from the telephone exchange to the customer premises.\n\nSection::::Exchange based solutions.\n", "The largest providers, known as tier 1 providers, have such comprehensive networks that they never purchase transit agreements from other providers. 
As of 2016, there were six tier 1 providers in the telecommunications industry. Current Tier 1 carriers include CenturyLink (Level 3), Telia Carrier, NTT, GTT, Tata Communications, and Telecom Italia.\n\nSection::::Infrastructure.\n", "Any given ISP may use their own PoPs to deliver one service, and use a VISP model to deliver another service, or use a combination to deliver a service in different areas.\n", "The service provided by a wholesale ISP in a VISP model is distinct from that of an upstream ISP, even though, in some cases, they may both be one and the same company. The former provides connectivity from the end-user's premises to the Internet or to the end-user's ISP, and the latter provides connectivity from the end user's ISP to all or parts of the rest of the Internet.\n", "In the simplest case, a single connection is established to an upstream ISP and is used to transmit data to or from areas of the Internet beyond the home network; this mode of interconnection is often cascaded multiple times until reaching a tier 1 carrier. In reality, the situation is often more complex. ISPs with more than one point of presence (PoP) may have separate connections to an upstream ISP at multiple PoPs, or they may be customers of multiple upstream ISPs and may have connections to each one of them at one or more point of presence. Transit ISPs provide large amounts of bandwidth for connecting hosting ISPs and access ISPs.\n", "When the Internet was opened to the commercial markets, multiple for-profit Internet backbone and access providers emerged. The network routing architecture then became decentralized and created a need for exterior routing protocols; in particular, the Border Gateway Protocol emerged. New Tier 1 ISPs and their peering agreements supplanted the government-sponsored NSFNet, a program that was officially terminated on April 30, 1995. The NSFnet-supplied regional networks then sought to buy national-scale Internet connectivity from these now numerous, private, long-haul networks.\n\nSection::::Routing through peering.\n", "In the most logical definition, a Tier 1 provider will never pay for transit because the set of all Tier 1 providers sells transit to all of the lower tier providers everywhere, and because (a) all Tier 1 providers peer with every other Tier 1 provider globally and (b) the peering agreement allows \"access\" to all of the transit customers, this means that (c) the Tier 1 network contains all hosts everywhere that are connected to the global Internet. As such, by the peering agreement, all the customers of any Tier 1 provider already have access to all the customers of all the other Tier 1 providers without the Tier 1 provider itself having to pay transit costs to the other networks. Effectively, the actual transit costs incurred by provider A on behalf of provider B are logically identical to the transit costs incurred by provider B on behalf of provider A -- hence no payment is required.\n", "End users connect to the Internet through Internet service providers (ISPs). The Tier 1 ISPs own the infrastructure, which includes routers, switches and fiber optic footprints. The backbone of the Internet is connected through Tier 1 ISPs that peer with other Tier 1 ISPs in a transit-free network. These peering agreements between Tier 1 ISPs have no overt settlements, meaning there is no money exchanged for the right to pass traffic between Tier 1 peers. 
Tier 2 and Tier 3 ISPs are customers of the Tier 1 ISPs and rely on the Tier 1 ISPs to route their traffic across the Internet. This is a disadvantage for the lower tier ISPs due to the number of traffic hops and shared common gateways to Tier 1 ISPs. The shared common gateways are choke points that contribute to the Internet Rush Hour. Each Tier 1 ISP has a peering policy that defines how IP traffic exchanges are created and guidelines for managing peer traffic.\n", "Winsock 2 LSPs are implemented as Windows DLLs with a single exported entry function, \"WSPStartup\". All other transport SPI functions are made accessible to ws2_32.dll or an upper chain layered provider via the LSP's dispatch table. LSPs and base providers are strung together to form a protocol chain. The LSP DLL has to be registered using a special LSP registrant which tells Winsock 2 the loading order of the LSPs (there can be more than one LSP installed) and which protocols to intercept.\n", "In December 2013, the company became an authorised supplier for Connection Vouchers. The Scheme is managed by Broadband Delivery UK (BDUK), a unit within the Department for Culture, Media and Sport.\n\nSection::::Products & Services.\n\nSpectrum Telecoms provides products and services in 5 main categories:\n\nBULLET::::- Unified communications\n\nBULLET::::- Managed private networks\n\nBULLET::::- Cloud and hosted services\n\nBULLET::::- Enterprise mobility\n\nBULLET::::- Network security\n\nSection::::PR & Social Responsibility.\n", "Customers, depending on local equipment, condition of the local loop, and distance from the DSLAM, receive one option in the High Speed Internet category, and one in the High Speed Internet Enhanced category. Enhanced Service is $10 more, and phone service is required for both. However, for business/commercial customers, Verizon doesn't require phone service, and offers different tiers.\n\nVerizon also leases out their DSL lines for other 3rd party competitive local exchange carriers. Customers can receive DSL services from those CLECs, using Verizon's infrastructure.\n\nSection::::Technical Implementation.\n", "The three service types are recognized by the IT industry although specifically defined by ITIL and the U.S. Telecommunications Act of 1996.\n\nBULLET::::- Type I: internal service provider\n\nBULLET::::- Type II: shared service provider\n\nBULLET::::- Type III: external service provider\n\nType III SPs provide IT services to external customers and subsequently can be referred to as external service providers (ESPs), which range from a full IT organization/service outsource via managed services or MSPs (managed service providers) to limited product feature delivery via ASPs (application service providers).\n\nSection::::Types of service providers.\n\nBULLET::::- Application service provider (ASP)\n\nBULLET::::- Network service provider (NSP)\n", "Siro, a joint venture between the state-owned power company ESB Group and Vodafone Ireland, is also rolling out 1 Gbit/s FTTH download and 200 Mbit/s upload to 500,000 properties in Ireland by 2018. This network uses ESB's physical electrical distribution network to carry fibres through ducts and on poles directly into homes and offices.\n\nBoth of these networks are being operated on a wholesale basis and end users can select from a range of different ISPs and IP television providers and a wide range of services for residential and business users.\n" ]
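The tier definitions quoted in these passages reduce to a simple rule over a network's business relationships: transit-free with settlement-free peering suggests Tier 1, peering plus purchased transit is Tier 2, and transit-only is Tier 3. A minimal sketch of that heuristic follows; the network names and relationship data are hypothetical, invented purely for illustration.

```python
# A minimal sketch of the tier heuristic described in the passages above.
# All network names and relationship data here are invented examples.

def classify(network, relationships):
    """Classify a network from the kinds of edges it has.

    relationships maps network -> list of (other_network, kind), where
    kind is "peering" (settlement-free) or "transit" (paid).
    """
    kinds = {kind for _, kind in relationships[network]}
    if "transit" not in kinds and "peering" in kinds:
        return "Tier 1 candidate (transit-free)"
    if "peering" in kinds:
        return "Tier 2 (peers, but still buys transit)"
    return "Tier 3 (buys transit only)"

rels = {
    "BackboneA": [("BackboneB", "peering")],
    "BackboneB": [("BackboneA", "peering")],
    "RegionalC": [("BackboneA", "transit"), ("RegionalD", "peering")],
    "RegionalD": [("RegionalC", "peering"), ("BackboneB", "transit")],
    "AccessE":   [("RegionalC", "transit")],
}

for net in sorted(rels):
    print(f"{net}: {classify(net, rels)}")
```

Note the caveat from the passages: a transit-free network is only a Tier 1 candidate, because the additional requirement of reaching all major networks cannot be verified from a network's own edge list.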
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-01667
Why does the dentist make me take antibiotics (Clindamycin) before a cleaning?
You’re exactly right that the dentist is worried about bacterial infection after teeth cleaning. In particular, Strep viridans is known for being an oral bacterium. It can have some severe complications, including heart valve damage. Clindamycin has great oral absorption, which is probably why your dentist used that particular drug. It works by stopping protein synthesis in bacterial ribosomes.
[ "Antibiotic use in dentistry\n\nThere are many circumstances during dental treatment where antibiotics are prescribed by dentists to prevent further infection (e.g. post-operative infection). The most common antibiotic prescribed by dental practitioners is penicillin in the form of amoxicillin, however many patients are hypersensitive to this particular antibiotic. Therefore, in the cases of allergies, erythromycin is used instead.\n\nSection::::Post-operative Infections.\n\nSection::::Post-operative Infections.:Bacteraemia.\n", "There are many different routes of administration for antibiotic treatment. Antibiotics are usually taken by mouth. In more severe cases, particularly deep-seated systemic infections, antibiotics can be given intravenously or by injection. Where the site of infection is easily accessed, antibiotics may be given topically in the form of eye drops onto the conjunctiva for conjunctivitis or ear drops for ear infections and acute cases of swimmer's ear. Topical use is also one of the treatment options for some skin conditions including acne and cellulitis. Advantages of topical application include achieving high and sustained concentration of antibiotic at the site of infection; reducing the potential for systemic absorption and toxicity, and total volumes of antibiotic required are reduced, thereby also reducing the risk of antibiotic misuse. Topical antibiotics applied over certain types of surgical wounds have been reported to reduce the risk of surgical site infections. However, there are certain general causes for concern with topical administration of antibiotics. Some systemic absorption of the antibiotic may occur; the quantity of antibiotic applied is difficult to accurately dose, and there is also the possibility of local hypersensitivity reactions or contact dermatitis occurring.\n", "This is performed in conjunction with mechanical debridement, with application of chlorhexidine digluconate, a potent antiseptic. To achieve positive treatment results, 3–4 weeks of regular administration of chlorhexidine, either in the form of daily rinse (of 0.1%, 0.12% or 0.2%) or as a gel, is necessary. This is also recommended to maintain satisfactory plaque control. Chlorhexidine is shown to significantly improve the mucosal condition in bleeding on probing, probing pocket depth, and clinical attachment level.\n\nSection::::Treatment.:Antibiotic treatment.\n", "When ingested, it is usually recommended that the more water-soluble, short-acting tetracyclines (plain tetracycline, chlortetracycline, oxytetracycline, demeclocycline and methacycline) be taken with a full glass of water, either two hours after eating or two hours before eating. This is partly because most tetracyclines bind with food and also easily with magnesium, aluminium, iron and calcium, which reduces their ability to be completely absorbed by the body. Dairy products, antacids and preparations containing iron should be avoided near the time of taking the drug. Partial exceptions to these rules occur for doxycycline and minocycline, which may be taken with food (though not iron, antacids, or calcium supplements). 
Minocycline can be taken with dairy products because it does not chelate calcium as readily, although dairy products do decrease absorption of minocycline slightly.\n", "These bacteria are present in the normal oral flora and enter the bloodstream usually by dental surgical procedures (tooth extractions) or genitourinary manipulation; as such, dental surgeons must fully carry out protective and preventive measures. In some countries, e.g. the USA, high-risk patients may be given prophylactic antibiotics prior to dental procedures, such as penicillin, or clindamycin for penicillin-allergic patients. Prophylactics should be bactericidal rather than bacteriostatic. Such measures are not taken in certain countries, e.g. Scotland, due to the fear of antibiotic resistance.\n", "This is important for patients who have a systemic cause for bleeding. Usually, local haemostatics do not work well at limiting their bleeding because they only result in temporary cessation of bleeding. Antibiotics can be prescribed to manage any bleeding associated with a spreading infection.\n\nSection::::Complications.\n\nBULLET::::- Infection: The dentist may opt to prescribe antibiotics pre- and/or post-operatively if he or she determines the patient to be at risk of infection.\n", "In the past, bacteremia caused by dental procedures (in most cases due to viridans streptococci, which reside in the oral cavity), such as a cleaning or extraction of a tooth, was thought to be more clinically significant than it actually was. However, it is important that a dentist or a dental hygienist be told of any heart problems before commencing treatment. Antibiotics are administered to patients with certain heart conditions as a precaution, although this practice has changed in the US, with new American Heart Association guidelines released in 2007, and in the UK as of August 2018 due to new SDCEP advice in line with the NICE guidelines. Everyday tooth brushing and flossing will similarly cause bacteremia. Although there is little evidence to support antibiotic prophylaxis for dental treatment, the current AHA guidelines are highly accepted by clinicians and patients.\n", "After stabilizing the patient’s airway, extracting the infected tooth will typically promote adequate drainage, and the infection will resolve shortly thereafter. If the infection involves multiple primary spaces or any of the secondary spaces previously mentioned, then incision and drainage with culture-guided antibiotics may be indicated. Since most mouth infections are polymicrobial, penicillin is an appropriate initial choice of antibiotic because of its activity against \"Streptococcus\" and gram-negative anaerobes. If the patient has a penicillin allergy, then clindamycin with or without metronidazole is also an effective empiric antibiotic regimen. Additionally, empiric antibiotics should be initiated in patients with a compromised immune system, like those on immunosuppressive medications, with diabetes, or with cancer. 
In situations where the infection worsens or fails to improve after multiple days, washing out the wound in the operating room should control the source of infection and promote healing.\n", "Tetracycline has been used with some success in the treatment of localised juvenile periodontitis, and this has proven to be particularly effective in in vitro studies of organisms associated with chronic and juvenile periodontitis.\n", "In the past, one in eight cases of infective endocarditis were due to bacteremia caused by dental procedures (in most cases due to \"Streptococcus viridans\", which reside in the oral cavity), such as a cleaning or extraction of a tooth; this was thought to be more clinically significant than it actually was. However, it is important that a dentist or a dental hygienist be told of any heart problems before commencing treatment. Antibiotics are administered to patients with certain heart conditions as a precaution, although this practice has changed in the US, with new American Heart Association guidelines released in 2007, and in the UK as of March 2008 due to new NICE guidelines. Everyday tooth brushing and flossing will similarly cause bacteremia, so a high standard of oral health should be adhered to at all times. Although there is little evidence to support antibiotic prophylaxis for dental treatment, the current American Heart Association guidelines are highly accepted by clinicians and patients.\n", "Antibiotics can be prescribed by dental professionals to reduce risks of certain post-extraction complications. There is evidence that use of antibiotics before and/or after impacted wisdom tooth extraction reduces the risk of infections by 70%, and lowers incidence of dry socket by one third. For every 12 people who are treated with an antibiotic following impacted wisdom tooth removal, one infection is prevented. Use of antibiotics does not seem to have a direct effect on manifestation of fever, swelling, or trismus seven days post-extraction. In the 2013 Cochrane review, 18 randomized controlled double-blinded experiments were reviewed and, after considering the risk of bias associated with these studies, it was concluded that there is moderate overall evidence supporting the routine use of antibiotics in practice in order to reduce risk of infection following a third molar extraction. There are still reasonable concerns remaining regarding the possible adverse effects of indiscriminate antibiotic use in patients. There are also concerns about development of antibiotic resistance, which advises against the use of prophylactic antibiotics in practice.\n", "Studies conducted to investigate the effects of antibiotics on patients with acute periapical periodontitis and acute apical abscess showed that patients receiving antibiotics in addition to root canal treatment did not have a reduced level of inflammation as compared to the patients not receiving antibiotics. However, the available research on this topic is not of optimal quality; therefore, the results are not entirely reliable.\n\nSection::::Common antibiotics used in Dentistry.\n", "Prescribing by an infectious disease specialist compared with prescribing by a non-infectious disease specialist decreases antibiotic consumption and reduces costs.\n\nSection::::Antibiotic resistance.\n", "Antibiotic creams are the preferred treatment for mild cases of impetigo, despite their limited systemic absorption. Such prescribed ointments include neosporin, fusidic acid, chloramphenicol and mupirocin. 
More severe cases of impetigo, however (especially bullous impetigo), will likely require oral agents with better systemic bioavailability, such as cephalexin. Cases that do not resolve with initial antibiotic therapy or require hospitalization may also be indicative of an MRSA infection, which would require the use of agents specifically able to treat it, such as clindamycin.\n", "BULLET::::- Oral Hygiene Instructions: The clinician should advise the patient of his intrinsic susceptibility to plaque, to which his body induces a strong, pro-inflammatory response. It is thus essential to keep his oral hygiene immaculate. This involves going over both smooth surfaces (tooth brushing instructions) and the use of interproximal aids (e.g. floss).\n", "Although considerable bacterial reduction can be achieved by the mechanical action of instruments and irrigation solutions, microorganisms are rarely completely eliminated from the root canals regardless of the instrumentation technique and file sizes employed. Due to the anatomical localization of the endodontic infection, it can only be treated through professional intervention using both chemical and mechanical procedures. However, many studies have proved that total elimination of bacteria cannot be observed in most cases.\n\nBULLET::::- Canal Irrigation (CI)\n", "Bacteraemia is a condition in which bacteria are present in the blood and may cause disease, including systemic disease such as infective endocarditis. Some dental treatments may cause bacteraemia, such as tooth extractions, subgingival scaling or even simple aggressive tooth brushing by patients.\n\nSection::::Post-operative Infections.:Infective Endocarditis.\n", "BULLET::::- For prophylaxis, in order to prevent bacterial infections from occurring. For example, this can occur before surgery, to prevent infection during the operation, or for patients with immunosuppression who are at high risk for dangerous bacterial infections.\n\nSection::::Bacterial targets.\n", "Osteomyelitis often requires prolonged antibiotic therapy for weeks or months. A PICC line or central venous catheter can be placed for long-term intravenous medication administration. Some studies of children with acute osteomyelitis report that antibiotics by mouth may be justified due to PICC-related complications. It may require surgical debridement in severe cases, or even amputation. Antibiotics given by mouth and intravenously appear similarly effective.\n", "Section::::Management.:Oral antimicrobials.\n\nThe use of these is based on the clinical evaluation of the condition and on whether the presence of pathogenic bacteria is indicated. This is generally a 2-week course for a patient with a persistent presentation of the disease or a 4-6 week course for more severe cases. Penicillin is the first line of choice, although if this is contraindicated, commonly used antimicrobials are: clindamycin, fluoroquinolones and/or metronidazole.\n\nSection::::Management.:Intravenous antimicrobials.\n", "Sulphonamides: This is a group of drugs used in dentistry, as they have the major advantage of being able to penetrate cerebrospinal fluid; this is particularly relevant when prescribing antibiotics prophylactically against bacterial meningitis in patients who have had severe maxillofacial injuries, where the risk of infection is high. There are various other uses for sulphonamides in treating other parts of the body.\n", "BULLET::::- 2. 
Bacterial transportation: Bacteria will readily adhere to the acquired pellicle through adhesins, proteins and enzymes within one to two hours\n\nBULLET::::- 3. Reversible interaction: There is electrostatic attraction or hydrophobic interaction between microorganisms and the tooth surface\n\nBULLET::::- 4. Irreversible interaction: Bacterial adhesins recognise specific host receptors such as pili and outer membrane proteins. The different species of bacteria bind together and require specific receptors to interact with the pellicle.\n", "Section::::Interactions.:Alcohol.\n\nInteractions between alcohol and certain antibiotics may occur and may cause side effects and decreased effectiveness of antibiotic therapy. While moderate alcohol consumption is unlikely to interfere with many common antibiotics, there are specific types of antibiotics with which alcohol consumption may cause serious side effects. Therefore, potential risks of side effects and effectiveness depend on the type of antibiotic administered.\n", "Children with impetigo can return to school 24 hours after starting antibiotic therapy as long as their draining lesions are covered.\n\nSection::::Treatment.\n\nAntibiotics, either as a cream or by mouth, are usually prescribed. Mild cases may be treated with mupirocin ointments. In 95% of cases, a single 7-day antibiotic course results in resolution in children. It has been advocated that topical antiseptics are not nearly as efficient as topical antibiotics, and therefore should not be used as a replacement.\n", "Prior to the early 1990s, it was common practice for the physician performing the procedure to prescribe an antibiotic to take for a few days to prevent an infection. Since that time, many urologists will order a \"urine C & S\" (urinalysis with bacterial/fungal cultures and testing for sensitivities to anti-infective medications) prior to the performance of the cystoscopy, and as part of the pre-operative workup. Depending on the results of the testing and other circumstances, he or she may elect to prescribe a 10- to 14-day course of antibiotic or other anti-infective treatment, commencing 3 days before the cystoscopy is to be performed, as this may alleviate some inflammation of the urethra prior to the procedure. This practice may provide an additional benefit by preventing an accidental infection from occurring during the procedure. The full course of antibiotic treatment also lessens the possibility of the bacteria becoming resistant to the antibiotic/anti-infective agent prescribed.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-00720
Why is it that our bodies are 98 degrees Fahrenheit, but 80+ degrees outside is uncomfortable and temperatures of objects around 98 degrees can cause burns on skin?
Everyone else has explained about core temp and heat loss. I just want to say that 98 F is not hot enough to burn you. Hot tubs are set to a water temp between 100-102 F, and people literally lie in them for hours. 98 is not hot enough to burn skin.
[ "Radio frequency (RF) energy at power density levels of 1-10 mW/cm or higher can cause measurable heating of tissues. Typical RF energy levels encountered by the general public are well below the level needed to cause significant heating, but certain workplace environments near high power RF sources may exceed safe exposure limits. A measure of the heating effect is the specific absorption rate or SAR, which has units of watts per kilogram (W/kg). The IEEE and many national governments have established safety limits for exposure to various frequencies of electromagnetic energy based on SAR, mainly based on ICNIRP Guidelines, which guard against thermal damage.\n", "The minimum temperature that can cause a burn in a finite amount of time is 44 °C (111 °F) for exposure times exceeding 6 hours. From 44° to 51 °C (111° to 124 °F), the rate of burn approximately doubles with each degree risen. The burn would develop in less than a second if the exposure temperature is at least 70 °C (160 °F).\n\nSection::::Factors.:Resistance.\n\nThere are skin factors that offer resistance to burns. A person who is more burn resistant would require higher temperature and longer exposure to burn as badly than a less resistant person.\n\nSection::::Factors.:Skin Thickness.\n", "Section::::Epidemiology.\n\nIn the United States, over two million people required medical attention for thermal burns every year. About 1 in 30 of those victims (75,000) are hospitalized for thermal burns every year, with a third of those patients staying in the hospital for more than two months. About 14,000 Americans die each year from burns.\n\nSection::::Epidemiology.:Children.\n", "When the outside temperature is , the temperature inside a car parked in direct sunlight can quickly exceed . Young children or elderly adults left alone in a vehicle are at particular risk of succumbing to heat stroke. \"Heat stroke in children and in the elderly can occur within minutes, even if a car window is opened slightly.\" As these groups of individuals may not be able to open car doors or to express discomfort verbally (or audibly, inside a closed car), their plight may not be immediately noticed by others in the vicinity. In 2018 51 children in the United States died in hot cars, more than the previous high of 49 in 2010.\n", "Section::::Hazards.:Intrinsic.\n\nDielectric heating from electromagnetic fields can create a biological hazard. For example, touching or standing around an antenna while a high-power transmitter is in operation can cause severe burns. These are exactly the kind of burns that would be caused inside a microwave oven. The dialectric heating effect varies with the power and the frequency of the electromagnetic energy, as well as the distance to the source. The eyes and testes are particularly susceptible to radio frequency heating due to the paucity of blood flow in these areas that could otherwise dissipate the heat buildup.\n", "At temperatures greater than , proteins begin losing their three-dimensional shape and start breaking down. This results in cell and tissue damage. Many of the direct health effects of a burn are secondary to disruption in the normal functioning of the skin. They include disruption of the skin's sensation, ability to prevent water loss through evaporation, and ability to control body temperature. 
Disruption of cell membranes causes cells to lose potassium to the spaces outside the cell and to take up water and sodium.\n", "Workers in laundries, bakeries, restaurant kitchens, steel foundries, glass factories, brick-firing and ceramic plants, electrical utilities, and smelters, and outdoor workers such as construction workers, firefighters, farmers, and mining workers are more vulnerable to exposure to extreme heat. Effects of heat stress include:\n\nBULLET::::- Increased irritability\n\nBULLET::::- Dehydration\n\nBULLET::::- Heat stroke\n\nBULLET::::- Chronic heat exhaustion\n\nBULLET::::- Cramps, rashes, and burns\n\nBULLET::::- Sweaty palms and dizziness\n\nBULLET::::- Increased risk of other accidents\n\nBULLET::::- Loss of concentration and ability to do mental tasks and heavy manual work\n\nBULLET::::- Sleep disturbances, sickness, and susceptibility to minor injuries\n", "In the United States, fire and hot liquids are the most common causes of burns. Of house fires that result in death, smoking causes 25% and heating devices cause 22%. Almost half of injuries are due to efforts to fight a fire. Scalding is caused by hot liquids or gases and most commonly occurs from exposure to hot drinks, high temperature tap water in baths or showers, hot cooking oil, or steam. Scald injuries are most common in children under the age of five and, in the United States and Australia, this population makes up about two-thirds of all burns. Contact with hot objects is the cause of about 20–30% of burns in children. Generally, scalds are first- or second-degree burns, but third-degree burns may also result, especially with prolonged contact. Fireworks are a common cause of burns during holiday seasons in many countries. This is a particular risk for adolescent males.\n", "The ADS's effect of repelling humans occurs at slightly higher than 44 °C (111 °F), though first-degree burns occur at about 51 °C (124 °F), and second-degree burns occur at about 58 °C (136 °F). In testing, pea-sized blisters have been observed in less than 0.1% of ADS exposures, indicating that second degree surface burns have been caused by the device. The radiation burns caused are similar to microwave burns, but only on the skin surface due to the decreased penetration of shorter millimeter waves. The surface temperature of a target will continue to rise so long as the beam is applied, at a rate dictated by the target's material and distance from the transmitter, along with the beam's frequency and power level set by the operator. Most human test subjects reached their pain threshold within 3 seconds, and none could endure more than 5 seconds.\n", "The square root of the product of thermal conductivity, density, and specific heat capacity is called thermal effusivity, and tells how much heat energy the body absorbs or releases in a certain amount of time per unit area when its surface is at a certain temperature. Since the heat taken in by the cooler body must be the same as the heat given by the hotter one, the surface temperature must lie closer to the temperature of the body with the greater thermal effusivity. The bodies in question here are human feet (which mainly consist of water) and burning coals.\n", "Thick skin is more prone to superficial burns because of the simple fact that there's more to burn before it gets deep. Skin thickness varies throughout the body, from 0.5 mm on the eyelids to 6 mm on the soles of the feet. 
Skin thickness overall varies with age, being thinner at extremes of age.\n\nExternal factors on the skin like hair, moisture or oils can also help ease and delay the burn. Another factor is skin circulation, which is used to dissipate heat imprinted on the skin.\n\nSection::::Causes.\n\nSection::::Causes.:Hot liquids and steam.\n", "ASHRAE 55-2010 defines SET as \"the temperature of an imaginary environment at 50% relative humidity, average air speed, and mean radiant temperature equal to average air temperature, in which total heat loss from the skin of an imaginary occupant with an activity level of 1.0 met and a clothing level of 0.6 clo is the same as that from a person in the actual environment, with actual clothing and activity level.\"\n", "In those hospitalized from scalds or fire burns, 3–10% are from assault. Reasons include: child abuse, personal disputes, spousal abuse, elder abuse, and business disputes. An immersion injury or immersion scald may indicate child abuse. It is created when an extremity, or sometimes the buttocks, are held under the surface of hot water. It typically produces a sharp upper border and is often symmetrical, known as \"sock burns\", \"glove burns\", or \"zebra stripes\" - where folds have prevented certain areas from burning. Deliberate cigarette burns are preferentially found on the face, or the back of the hands and feet. Other high-risk signs of potential abuse include: circumferential burns, the absence of splash marks, a burn of uniform depth, and association with other signs of neglect or abuse.\n", "To prevent children from getting burned, water temperature must not be set too high when taking baths or washing hands, nonflammable sleepwear should be worn, back burners should be used when cooking something on the stove, and hot foods, drinks, and irons should be kept away from the edges of counters and tables. Oven mitts and potholders must be used in handling hot containers. People should be careful when taking hot foods out of microwave ovens, and covers should be opened gently to reduce the risk of steam burns.\n", "Burn injuries occur more commonly among the poor. Smoking and alcoholism are other risk factors. Fire-related burns are generally more common in colder climates. Specific risk factors in the developing world include cooking with open fires or on the floor, as well as developmental disabilities in children and chronic diseases in adults.\n\nSection::::Cause.:Thermal.\n", "As described within the standard: \"The purpose of the standard is to specify the combinations of indoor thermal environmental factors and personal factors that will produce thermal environmental conditions acceptable to a majority of the occupants within the space\".\n\nSection::::Scope.\n\nThe standard addresses the four primary environmental factors (temperature, thermal radiation, humidity, and air speed) and two personal factors (activity and clothing) that affect thermal comfort. It is applicable for healthy adults at atmospheric pressures in altitudes up to (or equivalent to) , and for indoor spaces designed for occupancy of at least 15 minutes.\n\nSection::::Definitions.\n\nSection::::Definitions.:Adaptive model.\n", "Heat is also a hazard. The temperature required for the proper progression of certain reactions in the refining process can reach 1600 degrees F. As with chemicals, the operating system is designed to safely contain this hazard without injury to the worker. 
However, in system failures this is a potent threat to workers’ health. Concerns include both direct injury through heat illness and the potential for devastating burns should the worker come in contact with super-heated reagents/equipment.\n", "Those working in industry, in the military, or as first responders may be required to wear personal protective equipment (PPE) against hazards such as chemical agents, gases, fire, small arms and even Improvised Explosive Devices (IEDs). PPE includes a range of hazmat suits, firefighting turnout gear, body armor and bomb suits, among others. Depending on design, the wearer may be encapsulated in a microclimate, due to an increase in thermal resistance and decrease in vapor permeability. As physical work is performed, the body’s natural thermoregulation (i.e., sweating) becomes ineffective. This is compounded by increased work rates, high ambient temperature and humidity levels, and direct exposure to the sun. The net effect is that desired protection from some environmental threats inadvertently increases the threat of heat stress.\n", "Heat intolerance\n\nHeat intolerance is a symptom reported by people who feel uncomfortable in hot environments. Typically, the person feels uncomfortably hot and sweats excessively.\n\nCompared to heat illnesses like heatstroke, heat intolerance is usually a symptom of endocrine disorders, drugs, or other medical conditions, rather than the result of too much exercise or hot, humid weather.\n\nSection::::Symptoms.\n\nBULLET::::- Feeling subjectively hot\n\nBULLET::::- Sweating, which may be excessive\n\nIn patients with multiple sclerosis (MS), heat intolerance may cause a pseudoexacerbation, which is a temporary worsening of MS-related symptoms.\n", "One may observe that the heat flow is directly related to the thermal conductivities of the bodies in contact, formula_6 and formula_7, the contact area formula_4, and the thermal contact resistance, formula_9, which, as previously noted, is the inverse of the thermal conductance coefficient, formula_1.\n\nSection::::Importance.\n\nMost experimentally determined values of the thermal contact resistance fall between\n\n0.000005 and 0.0005 m² K/W (the corresponding range of thermal contact\n", "In the 19th century, most books quoted \"blood heat\" as 98 °F, until a study published the mean (but not the variance) of a large sample as . Subsequently that mean was widely quoted as \"37 °C or 98.4 °F\" until editors realised 37 °C is closer to 98.6 °F than 98.4 °F. The 37 °C value was set by German physician Carl Reinhold August Wunderlich in his 1868 book, which put temperature charts into widespread clinical use. Dictionaries and other sources that quoted these averages did add the word \"about\" to show that there is some variance, but generally did not state how wide the variance is.\n", "In the United States, only about 4% of patients with photosensitive disorders are reported to have been diagnosed with solar urticaria. Internationally, the number is slightly larger at 5.3%. Solar urticaria may occur in all races, but studies monitoring 135 African Americans and 110 Caucasians with photodermatoses found that 2.2% of the African Americans had SU and 8% of the Caucasians had the disease, showing that Caucasians are more likely to develop the disease. The age ranges anywhere from 5–70 years old, but the average age is 35, and cases have been reported in children who are still in infancy. 
Solar urticaria accounts for less than one percent of the many documented urticaria cases. To put that into a better perspective, since its first documented case in Japan in 1916, over one hundred other instances of the disease have been reported.\n", "The total surface area of an adult is about 2 m², and the mid- and far-infrared emissivity of skin and most clothing is near unity, as it is for most nonmetallic surfaces. Skin temperature is about 33 °C, but clothing reduces the surface temperature to about 28 °C when the ambient temperature is 20 °C. Hence, the net radiative heat loss is about\n", "One can think of this as going from a scenario where, before, the majority of people caught in the open who just stood there staring at the fireball out to a radius of 11 km would probably die from lethal 3rd-degree burns on their unclothed skin, to a scenario now where instead the vast majority of people who 'duck and cover' from 7 km out to 11 km would remain alive, with generally non-life-threatening 1st-degree burns and 2nd-degree burns over their exposed skin, with the burn severity naturally depending on their range from the explosion.\n", "In addition to the dangers of touching the hot bulb or element, high-intensity short-wave infrared radiation may cause indirect thermal burns when the skin is exposed for too long or the heater is positioned too close to the subject. Individuals exposed to large amounts of infrared radiation (like glass blowers and arc welders) over an extended period of time may develop depigmentation of the iris and opacity of the aqueous humor, so exposure should be moderated.\n\nSection::::Efficiency.\n" ]
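Two quantitative claims in the burn passages lend themselves to a quick worked example: the effusivity-weighted contact temperature (from the "thermal effusivity" passage) and the "time to burn roughly halves per degree above 44 °C" rule. A rough sketch follows, assuming approximate textbook property values for skin and water; those numbers are my assumptions, not figures from the passages.

```python
# A rough numerical companion to the burn passages above. The material
# property values are approximate textbook figures (assumptions).
import math

def effusivity(k, rho, c):
    """Thermal effusivity: sqrt(conductivity * density * heat capacity)."""
    return math.sqrt(k * rho * c)

def contact_temp(e1, t1, e2, t2):
    """Interface temperature of two touching bodies, weighted by effusivity."""
    return (e1 * t1 + e2 * t2) / (e1 + e2)

skin = effusivity(0.37, 1100, 3400)    # ~ human skin (approximate values)
water = effusivity(0.60, 1000, 4180)   # liquid water (approximate values)

# Skin at ~33 C touching 98 F (36.7 C) water: the interface stays well
# below the ~44 C burn threshold, consistent with the hot-tub comment.
print(f"contact temp: {contact_temp(skin, 33.0, water, 36.7):.1f} C")

def time_to_burn_s(temp_c):
    """Time for a burn to develop, using the 'doubles per degree' rule the
    passage gives for 44-51 C: 6 hours at 44 C, halving with each degree."""
    return 6 * 3600 * 0.5 ** (temp_c - 44)

for t in (44, 47, 51):
    print(f"{t} C: about {time_to_burn_s(t):,.0f} s of contact to burn")
```

Run as written, this gives a contact temperature near 35 °C and burn times of roughly 21,600 s, 2,700 s and 170 s, which matches the passage's 6-hour figure at 44 °C and shows how steeply the safe exposure time collapses over just seven degrees.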
[ "Temperatures of 98 degrees can cause burns on skin" ]
[ "Temperatures of 98 degrees cannot cause burns on skin." ]
[ "false presupposition" ]
[ "Temperatures of 98 degrees can cause burns on skin" ]
[ "false presupposition" ]
[ "Temperatures of 98 degrees cannot cause burns on skin." ]
2018-00982
Why didn't this bottle of water freeze?
Basically, the way I understand it is that if the water is too still and in a smooth container (like plastic), ice crystals won't be able to form. As soon as it is disturbed it will "snap freeze", where basically it all freezes at once. You should have filmed it for Reddit karma.
[ "Section::::Observations.:Modern context.\n\nMpemba and Osborne describe placing samples of water in beakers in the ice box of a domestic refrigerator on a sheet of polystyrene foam. They showed the time for \"freezing to start\" was longest with an initial temperature of and that it was much less at around . They ruled out loss of liquid volume by evaporation as a significant factor and the effect of dissolved air. In their setup most heat loss was found to be from the liquid surface.\n", "Water does not always freeze at . Water that persists in liquid state below this temperature is said to be supercooled, and supercooled water droplets cause icing on aircraft. Below , icing is rare because clouds at these temperatures usually consist of ice particles rather than supercooled water droplets. Below , supercooled water cannot exist, therefore icing is impossible.\n", "Section::::Bottle Imp paradox.\n", "Section::::How water freezes.\n", "BULLET::::- On December 23, 1927, Frances Wilson Grayson, niece of U.S. President Woodrow Wilson, was to attempt to be the first woman to make a transatlantic flight (non-solo). However, her Sikorsky amphibian plane disappeared en route from New York's Long Island to Harbour Grace, Newfoundland, and was never found. A bottled message was found in Salem Harbor, Massachusetts, in January 1929, the unauthenticated message reading, \"1928, we are freezing. Gas leaked out. We are drifting off Grand Banks. Grayson.”\n", "Although the fuel itself did not freeze, small quantities of water in the fuel did.\n\nIce adhered to the inside of the fuel lines, probably where they ran through the struts attaching the engines to the wings.\n", "BULLET::::- In December 1928, a trapper working at the mouth of the Agawa River, Ontario, found a bottled note from Alice Bettridge, an assistant stewardess in her early twenties who initially survived the December 1927 sinking in a blizzard of the freighter \"Kamloops\" and, before she herself perished, wrote \"I am the last one left alive, freezing and starving to death on Isle Royale in Lake Superior. I just want mom and dad to know my fate.\"\n", "An example of experimental data on the freezing of small water droplets is shown at the right. The plot shows the fraction of a large set of water droplets, that are still liquid water, i.e., have not yet frozen, as a function of temperature. Note that the highest temperature at which any of the droplets freezes is close to -19° C, while the last droplet to freeze does so at almost -35° C.\n\nSection::::Examples.\n\nSection::::Examples.:Examples of the nucleation of fluids (gases and liquids).\n", "Water will freeze at different temperatures depending upon the type of ice nuclei present. Ice nuclei cause water to freeze at higher temperatures than it would spontaneously. For pure water to freeze spontaneously, called homogeneous nucleation, cloud temperatures would have to be . Here are some examples of ice nuclei:\n\nSection::::Ice multiplication.\n", "There are phenomena like supercooling, in which the water is cooled below its freezing point, but the water remains liquid, if there are too few defects to seed crystallization. One can therefore observe a delay until the water adjusts to the new, below-freezing temperature. Supercooled liquid water must become ice at minus 48 C (minus 55 F) not just because of the extreme cold, but because the molecular structure of water changes physically to form tetrahedron shapes, with each water molecule loosely bonded to four others. 
This suggests the structural change from liquid to \"intermediate ice\". The crystallization of ice from supercooled water is generally initiated by a process called nucleation, which occurs within nanoseconds and over nanometer length scales.\n", "David Auerbach describes an effect that he observed in samples in glass beakers placed into a liquid cooling bath. In all cases the water supercooled, reaching a temperature of typically before spontaneously freezing. Considerable random variation was observed in the time required for spontaneous freezing to start, and in some cases this resulted in the water which started off hotter (partially) freezing first.\n", "The failure of an O-ring seal was determined to be the cause of the Space Shuttle \"Challenger\" disaster on January 28, 1986. A crucial factor was cold weather prior to the launch. This was famously demonstrated on television by Caltech physics professor Richard Feynman, when he placed a small O-ring into ice-cold water, and subsequently showed its loss of flexibility before an investigative committee.\n", "The National Transportation Safety Board determined that the probable cause was inadequate standards for icing operations while in flight, specifically the failure of the Federal Aviation Administration to establish adequate minimum airspeeds for icing conditions, leading to a loss of control when the airplane accumulated a thin, rough accretion of ice on its lifting surfaces.\n", "To prevent the pre-coolers from icing up, the first pre-cooler cooled the air to around 10 degrees above freezing point, to liquefy the water vapour in the air. Then LOX would have been injected into the airflow to drop the temperature, flash-freezing the water into microscopic ice crystals, sufficiently cold that they wouldn't melt due to kinetic heating if they struck the second pre-cooler elements. A water trap could have been added after the first pre-cooler if operating conditions resulted in an excess of moisture.\n", "The compressed ice was then transported to the University of Rochester, where it was blasted by a pulse of laser light. The reaction created conditions like those inside of ice giants such as Uranus and Neptune by heating up the ice by thousands of degrees under a pressure a million times greater than the earth's atmosphere in only ten to 20 billionths of a second. The experiment concluded that the current in the conductive water was indeed carried by ions rather than electrons and thus pointed to the water being superionic. More recent experiments from the same LLNL/Rochester team used x-ray crystallography on laser-shocked water droplets to determine that the oxygen ions enter a face-centered-cubic phase, which was dubbed ice XVIII and reported in the journal \"Nature\" in May 2019.\n", "An example of experimental data on the freezing of small water droplets is shown at the right. The plot shows the fraction of a large set of water droplets that are still liquid water, i.e., have not yet frozen, as a function of temperature. Note that the highest temperature at which any of the droplets freezes is close to -19° C, while the last droplet to freeze does so at almost -35° C.\n\nSection::::Examples.\n\nSection::::Examples.:Examples of the nucleation of fluids (gases and liquids).\n", "BULLET::::- The National Institute of Standards and Technology in Boulder, Colorado, using a new technique, managed to chill a microscopic mechanical drum to 360 microkelvins, making it the coldest object on record. 
Theoretically, using this technique, an object could be cooled to absolute zero.\n", "Jean Hilliard\n\nJean Hilliard (born 1961), from Lengby, Minnesota, is a survivor of a severe 6-hour freezing during the nighttime after a car accident in rural northwestern Minnesota, United States, left her car inoperable in sub-zero temperatures. She survived and her recovery was described as a miracle.\n\nSection::::Survival and recovery.\n", "Ice XVI is the least dense (0.81 g/cm) experimentally obtained crystalline form of ice. It is topologically equivalent to the empty structure of sII clathrate hydrates. It was first obtained in 2014 by removing gas molecules from a neon clathrate under vacuum at temperatures below 147 K. The resulting empty water frame, ice XVI, is thermodynamically unstable at the experimental conditions, yet it can be preserved at cryogenic temperatures. Above 145–147 K at positive pressures ice XVI transforms into the stacking-faulty Ice I and further into ordinary Ice I . Theoretical studies predict Ice XVI to be thermodynamically stable at negative pressures (that is under tension). \n", "BULLET::::- Freezing in this scenario begins at a temperature significantly below 0 °C.\n\nBULLET::::- The first material to freeze is not the water, but a dilute solution of alcohol in water.\n\nBULLET::::- The liquid left behind is richer in alcohol, and as a consequence, further freezing would take place at progressively lower temperatures. The frozen material, while always poorer in alcohol than the (increasingly rich) liquid, becomes progressively richer in alcohol.\n\nBULLET::::- Further stages of removing frozen material and waiting for more freezing will come to naught once the liquid uniformly cools to the temperature of whatever is cooling it.\n", "Section::::Background.\n", "Section::::Supercooling.\n", "Once antifreeze has been mixed with water and put into use, it periodically needs to be maintained. If engine coolant leaks, boils, or if the cooling system needs to be drained and refilled, the antifreeze's freeze protection will need to be considered. In other cases a vehicle may need to be operated in a colder environment, requiring more antifreeze and less water. Three methods are commonly employed to determine the freeze point of the solution:\n\nBULLET::::1. Specific gravity—(using a hydrometer test strip or some sort of floating indicator),\n", "BULLET::::- The most common crystallisation process on Earth is the formation of ice. Liquid water does not freeze at 0° C unless there is ice already present, cooling significantly below 0° C is required to nucleate ice and so for the water to freeze. For example, small droplets of very pure water can remain liquid down to below -30° C although ice is the stable state below 0° C.\n", "BULLET::::- \"Thermal conductivity\": The container of hotter liquid may melt through a layer of frost that is acting as an insulator under the container (frost is an insulator, as mentioned above), allowing the container to come into direct contact with a much colder lower layer that the frost formed on (ice, refrigeration coils, etc.) The container now rests on a much colder surface (or one better at removing heat, such as refrigeration coils) than the originally colder water, and so cools far faster from this point on.\n" ]
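The droplet-freezing passage describes a survival curve: in a large population, the first droplet freezes near -19 °C and the last near -35 °C. The toy simulation below reproduces that qualitative shape with a stochastic nucleation rate that grows exponentially with supercooling; the rate law and its constants are invented for illustration (chosen only so the toy lands in roughly the same temperature window) and are not from the passages.

```python
# Toy model of nucleation in a population of supercooled water droplets.
# The rate law and constants below are invented; real nucleation rates
# depend strongly on droplet volume and impurities.
import math
import random

def freeze_prob(temp_c, step=0.1):
    """Chance a still-liquid droplet freezes while cooling through `step` C."""
    supercooling = max(0.0, -temp_c)
    rate = 1e-9 * math.exp(0.7 * supercooling)  # per degree, invented constants
    return min(1.0, rate * step)

random.seed(1)
liquid = 1000        # droplets still liquid
temp = 0.0
while liquid > 0 and temp > -40.0:
    temp = round(temp - 0.1, 1)
    liquid -= sum(random.random() < freeze_prob(temp) for _ in range(liquid))
    if temp % 5 == 0:  # report every 5 degrees
        print(f"{temp:6.1f} C: {liquid:4d} of 1000 droplets still liquid")
```

The output shows essentially no freezing until well below 0 °C, a first freeze in the high minus-teens, and the last stragglers surviving past -30 °C, which is the same qualitative picture as the experimental curve the passage describes, and why an undisturbed, smooth-walled bottle can sit liquid below 0 °C until nucleation is triggered.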
[ "Bottle of water didn't freeze." ]
[ "The water is frozen it just hasn't changed state. As soon as it is disturbed it will snap freeze." ]
[ "false presupposition" ]
[ "Bottle of water didn't freeze." ]
[ "false presupposition" ]
[ "The water is frozen it just hasn't changed state. As soon as it is disturbed it will snap freeze." ]
2018-02929
Why do engines/motors sometimes have a hard time starting up in colder weather?
1. The gasoline vaporizes considerably slower. 2. The oil is less "warm maple syrup" and more "cold jelly." 3. The battery is cold and you need a new one or block heater if you're somewhere it is very cold.
[ "Cold start (automotive)\n\nA cold start is an attempt to start a vehicle's engine when it is cold, relative to its normal operating temperature, often due to normal cold weather. A cold start situation is commonplace, as weather conditions in most climates will naturally be at a lower temperature than the typical operating temperature of an engine. Occasionally, the term also refers to starting the engine of a vehicle that has been inactive or abandoned for a significant amount of time such as months, years or even decades.\n\nSection::::Causes of cold starts.\n", "Cold starts are more difficult than starting a vehicle that has been run recently (typically between 90 minutes and 2 hours). More effort is needed to turn over a cold engine for multiple reasons:\n\nBULLET::::- The engine compression is higher as the lack of heat makes ignition more difficult\n\nBULLET::::- Low temperatures cause engine oil to become more viscous, making it more difficult to circulate.\n\nBULLET::::- Air becomes more dense the cooler it is. This affects the air-fuel ratio, which in turn affects the flammability of the mixture.\n\nSection::::Solutions to cold starting.\n", "Section::::Turbine engines.\n\nIn contrast to reciprocating fuel injected engines, a hot start in a turbine type engine is the \"result\" of improper starting technique and not simply the condition of starting an engine which is hot due to having been recently run and shutdown.\n", "The problem of cold starting has been greatly reduced since the introduction of engine starters, which are now commonplace on all modern vehicles. The higher revs that can be achieved using electric starter motors improves the chance of successful ignition.\n\nStarting fluid, a volatile liquid, is sometimes sprayed into the combustion chamber of an engine to assist the starting procedure.\n", "While gasoline internal combustion engines are much easier to start in cold weather than diesel engines, they can still have cold weather starting problems under extreme conditions. For years, the solution was to park the car in heated areas. In some parts of the world, the oil was actually drained and heated over night and returned to the engine for cold starts. In the early 1950s, the gasoline Gasifier unit was developed, where, on cold weather starts, raw gasoline was diverted to the unit where part of the fuel was burned causing the other part to become a hot vapor sent directly to the intake valve manifold. This unit was quite popular until electric engine block heaters became standard on gasoline engines sold in cold climates.\n", "Production gaskets will let go under these conditions every time, just like the example shown. It is important to note here that no engine should ever be run under these adverse conditions purposely, however during the ‘tuning’ period of many performance engines these severe conditions may be encountered for short periods, so it is vital to have a cylinder head gasket that can withstand brief bursts of detonation without letting go, so that further adjustments can be made to “tune out” any detonation spikes.\n", "Hot standby may have a slightly different connotation of being active but not productive to hot spare, that is it is a state rather than object. For example, in a national power grid, the supply of power needs to be balanced to demand over a short term. It can take many hours to bring a coal-fired power station up to productive temperatures. 
To allow for load balancing, generator turbines may be kept running with the generators switched off so that, as peaks of demand occur, the generators can rapidly be switched on to balance the load. This state of being ready to run is known as hot standby. Though it is not a modern phenomenon, steam train operators might hold a spare steam engine at a terminus fired up, as starting an engine cold would take a significant amount of time.\n", "As the fuel filter is the first point to become clogged up, most diesel motor cars are equipped with a filter heater. This allows summer diesel with a CFPP of –7 °C to be operated safely in –20 °C weather conditions. The fluid characteristics of winter diesel are also extended, allowing a diesel type of CFPP –15 °C to be operated safely in –24 °C weather conditions. Note that a filter heater cannot melt wax particles in the \"feed system\", so fuel supply can be problematic at cold start.\n\nSection::::Operation.:Additives.\n", "There are a number of solutions available which allow diesel engines to continue to operate in cold weather conditions. Once the diesel motor is started, it may continue to operate at temperatures \"below\" the CFPP — most engines have a \"spill return\" system, by which any excess fuel from the injector pump and injectors is returned to the fuel tank. When the engine has warmed, returning warm fuel should prevent gelling in the tank.\n\nSection::::Operation.:Fuel Preheater.\n", "Section::::Diesel engine particularities.:Cold weather starting.\n\nIn general, diesel engines do not require any starting aid. In cold weather, however, some diesel engines can be difficult to start and may need preheating depending on the combustion chamber design. The minimum starting temperature that allows starting without pre-heating is 40 °C for precombustion chamber engines, 20 °C for swirl chamber engines, and 0 °C for direct injected engines. Smaller engines with a displacement of less than 1 litre per cylinder usually have glowplugs, whilst larger heavy-duty engines have flame-start systems.\n", "Low-output electric heaters in fuel tanks and around fuel lines are a way to extend the fluid characteristics of diesel fuel. This is standard equipment in vehicles that operate in arctic weather conditions.\n", "Equipment designed for use in particularly extreme cold conditions (such as the polar regions) also undergoes a \"winterization\" process. Many complex devices (automobiles, electronics and radios) as well as common materials (metals, rubbers, petroleum lubricants) are not designed to operate at extremely low temperatures and must be winterized to operate without severe damage from the elements in such conditions. This might involve a chemical treatment process, additional waterproofing/insulation, or even the total substitution of new parts. An example would be the internal combustion engine of an automobile; the installation of heaters on the engine block and battery, as well as the substitution of winter-grade coolants and lubricants, allows the vehicle to start and run in sub-freezing conditions where a non-winterized engine would quickly break down.\n", "Jumper cables typically do not have overload protection, so when reversed they may begin to function as resistive heaters and become hot enough that the wire insulation begins to melt. 
If this continues without the problem being detected, the insulation may melt until the wires inside make contact, resulting in an unfused direct short of the supply battery.\n", "This duration is controlled by the thermotime switch, which is a bimetallic-strip-based device placed in the water jacket or, in the case of an air-cooled engine, in the engine/timing case. When an engine first starts, the coolant temperature is cold and the bimetallic strip is closed, which in turn supplies the cold start injector with current.\n", "Unlike a turbine engine, a hot start is unlikely to damage a reciprocating fuel injected engine. However, with improper starting procedure the situation may progress to the point that the operator depletes the starter battery before successfully starting the engine, and there is risk of battery or starter damage, and certainly excess wear, due to these repeated unsuccessful attempts to start the engine.\n", "In cold ambient conditions the friction caused by viscous engine oil causes a high load on the starting system. Another problem is the reluctance of the fuel to vaporise and combust at low temperatures. Oil dilution systems were developed (mixing fuel with the engine oil), and engine pre-heaters were used (including lighting fires under the engine). The Ki-Gass priming pump system was used to assist starting of British engines.\n\nAircraft fitted with variable-pitch propellers or constant speed propellers are started in fine pitch to reduce air loads and current in the starter motor circuit.\n", "The fuel can vaporize due to being heated by the engine, by the local climate or due to a lower boiling point at high altitude. In regions where fuels with lower viscosity (and lower boiling threshold) are used during the winter to improve engine startup, continued use of the specialized fuels during the summer can cause vapor lock to occur more readily.\n\nSection::::Causes and incidence.\n", "As the ignition system is normally arranged to produce sparks before top dead centre, there is a risk of the engine kicking back during hand starting. To avoid this problem, one of the two magnetos used in a typical aero engine ignition system is fitted with an 'impulse coupling'; this spring-loaded device delays the spark until top dead centre and also increases the rotational speed of the magneto to produce a stronger spark. When the engine fires, the impulse coupling no longer operates and the second magneto is switched on.\n", "A small heater coil is built into the thermotime switch, which effectively gives a timed output to the cold start injector, with the timing duration dependent on the temperature of the engine. If the engine is started whilst warm the bimetallic strip remains open due to the higher temperature, meaning the cold start routine is not entered.\n", "Lithium-ion batteries operate best at certain temperatures. The Model S motor, controller and battery temperatures are controlled by a liquid cooling/heating circuit. Waste heat from the motor heats the battery in cold conditions, and battery performance is reduced until a suitable battery temperature is reached.\n", "Section::::Types.:Engine temperature/EV battery temperature.\n\nThe engine temperature tell-tale is usually installed singly, but has less commonly been installed in pairs. A pair of lights indicate insufficient (blue) and excessive (red) engine temperature. A single light usually indicates only an overheat condition in the engine. 
In electric cars, it is usually used to monitor the EV battery temperature and indicate that the EV battery is overheating or is too cold to operate. One example is in the Nissan Leaf EV.\n\nSection::::Types.:Malfunction indicator (check engine/hybrid/EV system).\n", "Once the engine is running, the heat of compression and ignition maintains the hot bulb at the necessary temperature, and the blow-lamp or other heat source can be removed. Thereafter, the engine requires no external heat and requires only a supply of air, fuel oil and lubricating oil to run. However, under low power the bulb could cool off too much, and a throttle can cut down the cold fresh air supply. Also, as the engine's load increases, so does the temperature of the bulb, causing the ignition period to advance; to counteract pre-ignition, water is dripped into the air intake. Equally, if the load on the engine is low, combustion temperatures may not be sufficient to maintain the temperature of the hot bulb. Many hot-bulb engines cannot be run off-load without auxiliary heating for this reason.\n", "The method provides no hard and fast rules about what lines of questions to explore, or how long to continue the search for additional root causes. Thus, even when the method is closely followed, the outcome still depends upon the knowledge and persistence of the people involved.\n\nSection::::Example.\n\nBULLET::::- The vehicle will not start. (the problem)\n\nBULLET::::1. Why? - The battery is dead. (First why)\n\nBULLET::::2. Why? - The alternator is not functioning. (Second why)\n\nBULLET::::3. \"Why?\" - The alternator belt has broken. (Third why)\n", "Most driveability issues can be traced back to a few causes:\n\nBULLET::::- Bad ECM earth/ground\n\nBULLET::::- Bad o2/lambda sensor earth/ground\n\nBULLET::::- Faulty Engine Coolant Temperature sensor (ECT)\n\nThe engine coolant temperature sensor is located in the coolant flange, (under the \n", "In an aircraft with a reciprocating fuel injected engine, a hot start is a condition where an engine start is attempted after it has been run, achieved operating temperature, and then recently shut down. The engine is \"hot\" and hence the terminology hot start. When a reciprocating fuel injected engine is shut down, the residual engine heat dissipates into the air and the surrounding aircraft structure. Some of this heat is transferred to the engine fuel lines and fuel injector lines in the engine compartment and, because no fuel is flowing in the lines to cool them as would be the case under normal operating conditions, the fuel may vaporize or \"boil\" within these fuel lines, creating a condition called vapor lock. This combination of liquid fuel and vaporized fuel within the fuel line will result in inconsistent fuel availability to the engine fuel pump and fuel injection system. If severe, the fuel pumps can \"cavitate\" (when the pumping chamber fills with fuel vapor rather than liquid fuel) and become ineffective. The vapor in the fuel lines and loss of fuel pump effectiveness result in inconsistent fuel flow to the engine fuel injectors and ultimately the cylinders, resulting in difficult starting. Vapor lock can also occur in flight in some aircraft, resulting in a rough running engine or engine stoppage.\n" ]
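A small worked illustration of the air-density point made in the cold-start passages above (standard ideal-gas values for dry air; the temperatures chosen are illustrative, not from the source):

```python
# Worked example: by the ideal gas law, rho = P / (R_specific * T), colder
# intake air at the same pressure is denser, which shifts the air-fuel ratio.
# R_AIR and P_ATM are standard dry-air constants, not values from the passages.

R_AIR = 287.05     # specific gas constant of dry air, J/(kg*K)
P_ATM = 101_325.0  # standard atmospheric pressure, Pa

def air_density(temp_celsius: float) -> float:
    """Dry-air density in kg/m^3 at standard atmospheric pressure."""
    return P_ATM / (R_AIR * (temp_celsius + 273.15))

for t in (-20.0, 20.0):
    print(f"{t:+.0f} C: {air_density(t):.3f} kg/m^3")
# -20 C: 1.394 kg/m^3
# +20 C: 1.204 kg/m^3  (air at -20 C is roughly 16% denser)
```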
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-05389
Why does a company care about its stock price after the IPO?
In addition to the other comments, the stockholders become the new ownership. In essence, they are the company, and they care about the stock price because they want to make money off their investment. The officers and executives running the company care, because if they tank the stock price they lose their jobs.
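A back-of-the-envelope sketch of the incentive described in the answer above; all numbers are invented for illustration:

```python
# Toy illustration with invented numbers: a falling share price directly
# shrinks the paper wealth of the shareholders who now own the company.

shares_outstanding = 100_000_000
executive_stake = 0.02  # an executive holding 2% of the shares

for price in (20.00, 15.00):  # share price before and after a 25% drop
    market_cap = shares_outstanding * price
    stake_value = executive_stake * market_cap
    print(f"price ${price:.2f}: market cap ${market_cap:,.0f}, "
          f"2% stake worth ${stake_value:,.0f}")
# price $20.00: market cap $2,000,000,000, 2% stake worth $40,000,000
# price $15.00: market cap $1,500,000,000, 2% stake worth $30,000,000
```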
[ "Underwriters and investors and corporations going for an initial public offering (IPO), issuers, are interested in their market value. There is always tension that results since the underwriters want to keep the price low while the companies want a high IPO price.\n", "The danger of overpricing is also an important consideration. If a stock is offered to the public at a higher price than the market will pay, the underwriters may have trouble meeting their commitments to sell shares. Even if they sell all of the issued shares, the stock may fall in value on the first day of trading. If so, the stock may lose its marketability and hence even more of its value. This could result in losses for investors, many of whom being the most favored clients of the underwriters. Perhaps the best known example of this is the Facebook IPO in 2012.\n", "Some researchers (Friesen & Swift, 2009) believe that the underpricing of IPOs is less a deliberate act on the part of issuers and/or underwriters, and more the result of an over-reaction on the part of investors (Friesen & Swift, 2009). One potential method for determining underpricing is through the use of IPO underpricing algorithms.\n\nSection::::Procedure.:Pricing.:Dutch auction.\n", "Leland and Pyle (1977) analyse the role of signals within the process of IPO. The authors show how companies with good future perspectives and higher possibilities of success (\"good companies\") should always send clear signals to the market when going public; the owner should keep control of a significant percentage of the company. To be reliable, the signal must be too costly to be imitated by \"bad companies\". If no signal is sent to the market, asymmetric information will result in adverse selection in the IPO market.\n\nSection::::Brands.\n", "After the IPO, shares are traded freely in the open market at what is known as the free float. Stock exchanges stipulate a minimum free float both in absolute terms (the total value as determined by the share price multiplied by the number of shares sold to the public) and as a proportion of the total share capital (i.e., the number of shares sold to the public divided by the total shares outstanding). Although IPO offers many benefits, there are also significant costs involved, chiefly those associated with the process such as banking and legal fees, and the ongoing requirement to disclose important and sometimes sensitive information.\n", "Section::::Procedure.:Pricing.\n\nA company planning an IPO typically appoints a lead manager, known as a bookrunner, to help it arrive at an appropriate price at which the shares should be issued. There are two primary ways in which the price of an IPO can be determined. Either the company, with the help of its lead managers, fixes a price (\"fixed price method\"), or the price can be determined through analysis of confidential investor demand data compiled by the bookrunner (\"book building\").\n", "Effectively, the institutional investor's large order has given an option to the professional investor. 
Institutional investors don't like this, because either the stock price rises to $9.99 and comes back down, without them having the opportunity to sell, or the stock price rises to $10.00 and keeps going up, meaning the institutional investor could have sold at a higher price.\n\nSection::::See also.\n\nBULLET::::- Market impact cost – the transaction costs associated with price movements from the market impact\n\nBULLET::::- Slippage (finance)\n\nBULLET::::- Insider Trading\n\nSection::::External links.\n\nBULLET::::- Market impact and trading profile of large trading orders in stock markets\n", "The pre-IPO studies are sometimes criticized because the sample size is relatively small, the pre-IPO transactions may not be arm's length, and the financial structure and product lines of the studied companies may have changed during the three-year pre-IPO window.\n\nSection::::Discounts and premiums.:Applying the studies.\n\nThe studies confirm what the marketplace knows intuitively: Investors covet liquidity and loathe obstacles that impair liquidity. Prudent investors buy illiquid investments only when there is a sufficient discount in the price to increase the rate of return to a level which brings risk-reward back into balance.\n", "When raising capital, some types of securities are more prone to adverse selection than others. An equity offering for a company that reliably generates earnings at a good price will be bought up before an unknown company's offering, leaving the market filled with less desirable offerings that were unwanted by other investors. Assuming that managers have inside information about the firm, outsiders are most prone to adverse selection in equity offers. This is because managers may offer stock when they know the offer price exceeds their private assessments of the company's value. Outside investors therefore require a high rate of return on equity to compensate them for the risk of buying a \"lemon\".\n", "Underwriters, therefore, take many factors into consideration when pricing an IPO, and attempt to reach an offering price that is low enough to stimulate interest in the stock but high enough to raise an adequate amount of capital for the company. When pricing an IPO, underwriters use a variety of key performance indicators and non-GAAP measures. The process of determining an optimal price usually involves the underwriters (\"syndicate\") arranging share purchase commitments from leading institutional investors.\n", "Investment banks, such as Goldman Sachs or Morgan Stanley, are frequently intermediaries in the equity issue process, and for some of these firms the fees associated with IPOs are a substantial part of their income. The role of these banks is to study the characteristics and business plans of the firm which is issuing equity and then recommend a minimum purchase price to investors. On the other hand, they are in charge of convincing investors that the purchase is a good opportunity, and therefore the success of IPO placement partly hinges on the reputation of the investment bank that is doing it.\n", "Research sponsored by the Journal of Business Ethics states there is an equity valuation effect of press releases of upgrades or downgrades reflected in the CEQ. The research encompasses a joint test of the value relevance of the index and the impact of ethical reputation on a firm’s value. A significant causal relationship is identified between stock market reactions and the changes in the CEQ. 
In particular, disclosures of positive and negative changes in firm ethical performance cause corresponding increases or decreases in firm value. Second, the research's cross-sectional analysis indicates a positive association between changes in firm ethical performance and both its financial performance and its financial reporting quality. The results suggest that measures taken to increase ethical performance are associated with positive benefits to shareholders.\n", "Companies want to go public through an APO for several reasons. The public shell company already has shareholders, so after the APO is complete, the formerly private company typically already meets the shareholder requirements for NASDAQ and AMEX: 400 and 300, respectively. A company that goes public through an IPO must sell its stock to a large number of shareholders in order to meet these requirements, necessitating a broad marketing and roadshow process. Unlike an IPO, there is no public disclosure required until the transaction closes. Customers, suppliers, employees, and press are unaware until closing. Therefore, a private company can pursue going public through an APO and understand what kind of investor response and valuation it will receive without having to make the “leap of faith” requirement of an IPO. With an IPO a company must publicly announce its intentions and file with the SEC at the beginning of the process. It is only after clearing comments with the SEC and after going on the roadshow that a company learns what kind of investor response and valuation it will receive. \n", "Executives' access to insider information affecting stock prices can be used in the timing of both the granting of options and the sale of equities after the options are exercised. Studies of the timing of option grants to executives have found \"a systematic connection\" between when the options were granted and corporate disclosures to the public. That is, they found options are more likely to be granted after companies release bad news or just before they \"release good news\", when company insiders are likely to know the options will be most profitable because the stock price is relatively low. Repricing of stock options also frequently occurs after the release of bad news or just prior to the release of good news.\n", "IPO underpricing algorithm\n\nIPO underpricing is the increase in stock value from the initial offering price to the first-day closing price. Many believe that underpriced IPOs leave money on the table for corporations, but some believe that underpricing is inevitable. Investors state that underpricing signals high interest to the market, which increases the demand. On the other hand, overpriced stocks will drop long-term as the price stabilizes, so underpricing may keep the issuers safe from investor litigation.\n\nSection::::IPO underpricing algorithms.\n", "Section::::Discounts and premiums.:Discount for lack of marketability.:Pre-IPO studies.\n\nAnother approach to measure the marketability discount is to compare the prices of stock offered in initial public offerings (IPOs) to transactions in the same company's stocks prior to the IPO. Companies that are going public are required to disclose all transactions in their stocks for a period of three years prior to the IPO. 
The pre-IPO studies are the leading alternative to the restricted stock studies in quantifying the marketability discount.\n", "The market, however, may not correctly estimate the implications of earnings surprises when it revises its expectations of future earnings, which will decrease the change in stock prices associated with the change in earnings. In fact, many studies in accounting research have documented that the market takes up to a year to adjust to earnings announcements, a phenomenon known as the post-earnings announcement drift.\n", "It is fairly easy for a top executive to reduce the price of their company's stock due to information asymmetry. The executive can accelerate accounting of expected expenses, delay accounting of expected revenue, engage in off-balance-sheet transactions to make the company's profitability appear temporarily poorer, or simply promote and report severely conservative (e.g. pessimistic) estimates of future earnings. Such seemingly adverse earnings news will likely (at least temporarily) reduce the share price. (This is again due to information asymmetries, since it is more common for top executives to do everything they can to window dress their company's earnings forecasts).\n", "Another phenomenon—also from psychology—that works against an objective assessment is \"group thinking\". As social animals, we do not find it easy to stick to an opinion that differs markedly from that of the majority of the group. An example with which one may be familiar is the reluctance to enter a restaurant that is empty; people generally prefer to have their opinions validated by those of others in the group.\n", "but with one of the options designated as the status quo. In this case, the opening passage continued: \"A significant portion of this portfolio is invested in a moderate risk company . . . (The tax and broker commission consequences of any changes are insignificant.)\"\n\nThe result was that an alternative became much more popular when it was designated as the status quo.\n\nElectric power consumers:\n\nCalifornia electric power consumers were asked about their preferences regarding trade-offs between service\n", "Underpricing may also be caused by investor over-reaction causing spikes on the initial days of trading. The IPO pricing process is similar to pricing new and unique products where there is sparse data on market demand, product acceptance, or competitive response. Besides, underpricing is also affected by the firm's idiosyncratic factors, such as its business model. Thus it is difficult to determine a clear price, a difficulty compounded by the different goals issuers and investors have.\n", "When a company wants to raise money for long-term investment, one of its first decisions is whether to do so by issuing bonds or shares. If it chooses shares, it avoids increasing its debt, and in some cases the new shareholders may also provide non-monetary help, such as expertise or useful contacts. On the other hand, a new issue of shares will dilute the ownership rights of the existing shareholders, and if they gain a controlling interest, the new shareholders may even replace senior managers. From an investor's point of view, shares offer the potential for higher returns and capital gains if the company does well. 
Conversely, bonds are safer if the company does poorly, as they are less prone to severe falls in price, and in the event of bankruptcy, bond owners may be paid something, while shareholders will receive nothing.\n", "In the mid-aughts, the backdating of stock options was looked into by federal regulators. Options backdating, changing the date of an options issue to an earlier time when the share price was lower, has been disparaged as a way of \"rewarding managers when stock prices fall.\" An option granted on June 1 when a stock's share price was $100, but backdated to May 15, when shares were only $80, for example, gives the option holder $20/share more profit.\n", "BULLET::::- Price range width – The width of the non-binding reference price range offered to potential customers during the roadshow. This width can be interpreted as a sign of uncertainty regarding the real value of the company and, therefore, as a factor that could influence the initial return.\n\nBULLET::::- Price adjustment – The difference between the final offer price and the price range width. It can be viewed as a sign of uncertainty if the adjustment falls outside the previous price range.\n\nBULLET::::- Offering price – The final offer price of the IPO\n", "The issuer usually allows the underwriters an option to increase the size of the offering by up to 15% under a specific circumstance known as the greenshoe or overallotment option. This option is always exercised when the offering is considered a \"hot\" issue, by virtue of being oversubscribed.\n" ]
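For the underpricing definition quoted in the passages above, a one-step worked example (the prices are invented):

```python
# First-day return ("underpricing") as defined in the passages: the rise from
# the offer price to the first-day closing price. Prices below are invented.

offer_price = 17.00       # price paid by investors allocated IPO shares
first_day_close = 21.25   # closing price after the first day of trading

underpricing = (first_day_close - offer_price) / offer_price
print(f"first-day return (underpricing): {underpricing:.1%}")  # 25.0%
```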
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-02586
Why does it cost money for a video game developer to have breakable items in maps that they create in a game?
Because people don't work for free, and it takes work to make those breakable pots happen. It's not like the developer just says "let things break" and it just magically happens, people have to go in, write the code for breakable items, debug that code, and you need artists to make the models for the unbroken and broken pots. All of that takes time and effort, and it costs money to pay people for the work required.
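To make concrete what "write the code for breakable items" means in this answer, here is a minimal sketch of the kind of component a gameplay programmer might have to write, test, and debug; the engine hooks implied in the comments and all names are hypothetical, not from any real engine:

```python
# Minimal sketch of a breakable-prop component. The spawn/audio hooks noted
# in the comments are hypothetical stand-ins for real engine calls.

from dataclasses import dataclass, field
from typing import List

@dataclass
class BreakableProp:
    """A prop that swaps its intact art asset for fragment assets when broken."""
    intact_model: str            # artist-made model of the unbroken prop
    fragment_models: List[str]   # artist-made models of the broken pieces
    hit_points: int = 1
    broken: bool = False
    debris: List[str] = field(default_factory=list)

    def on_hit(self, damage: int) -> None:
        """Apply damage; shatter once hit points are exhausted."""
        if self.broken:
            return  # already shattered; ignore further hits
        self.hit_points -= damage
        if self.hit_points <= 0:
            self._shatter()

    def _shatter(self) -> None:
        # Every fragment needs its own model, collision, and cleanup rules;
        # that per-asset work is where the programmer and artist time goes.
        self.broken = True
        for model in self.fragment_models:
            self.debris.append(model)  # stand-in for an engine spawn call

# Usage: two hits break the pot into three separately authored fragments.
pot = BreakableProp("pot_intact", ["shard_a", "shard_b", "shard_c"], hit_points=2)
pot.on_hit(1)
pot.on_hit(1)
print(pot.broken, pot.debris)  # True ['shard_a', 'shard_b', 'shard_c']
```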
[ "Business deals were another new concept to \"SimCity 3000\"; by allowing certain structures, such as a maximum security prison, to be built within the city, the player can receive a substantial amount of funds from them. Business deal structures, however, tend to have negative effects on the city, such as reduced land value.\n", "Similarly, in the sandbox game \"Minecraft\", a player character can carry thousands of tonnes of material such as gold in the character's inventory without encumbrance, as if an empty inventory were the same as a full one. In reality, even one block of most materials in Minecraft would weigh hundreds or thousands of kilograms, and the player can carry up to 2304 blocks in their inventory. Since some blocks can be converted into multiple blocks of another type, it is possible to carry enough material to build an entire city in one's inventory invisibly.\n", "Facilities can be bought or produced by agencies to be put onto each hex. Labs produce blueprints which are consumed when they are used. Mines produce materials needed by blueprints. Factories combine materials and blueprints to make usable items. Whenever something is produced, an agency gains Networth. After two weeks, the agency with the highest Networth wins the season. Agencies in the top three positions get rewards on top of what was produced during the season, such as the Black Lance Vandal and Dark Steel Vindicator. Players also receive holographic armor and dyes for coming in top three or first for several seasons.\n", "Another form of advertising is sponsorship of in-game equipment. Players can virtually acquire various tools and weapons to use in the game. Sponsored versions of these include the \"AXA Shield\", the \"Lawson Power Cube\", the \"Circle K Power Cube\", the \"Ito En Transmuter (+/-)\", the \"SoftBank Ultra Link\" and the \"MUFG Capsule\", all categorized as \"Very Rare\" and performing significantly better than non-sponsored versions. In-game sponsorship with AXA and MUFG ended in December 2017.\n", "Using Dubai as a setting allowed them to incorporate sand as part of the game's key mechanics. Davis described the Dubai in the game as \"a mix of fantasy and real-world environment\". To prevent the sand mechanics from turning gimmicky, the team introduced multiple ways for players to use sand as a weapon, such as the player's ability to trigger dust clouds by throwing grenades on sand and cause a sand avalanche by shooting weak structures and supports. In addition, the team added several scripted sequences regarding sand to keep the game dynamic. The occurrence of these moments were decided based on the game's production value. The team also consulted Wil Makeneole for military advice.\n", "However, the game avoided a pay-to-win scenario by opting for a different \"pay-to-not-grind\" system instead; players could pay money to obtain in-game weapons, items, internals, holotaunts, boosters, etc., without the need to grind for HCs. The only aspect of the game that cannot be unlocked without paying money (that also don't have a HC price tag) are most cosmetics—i.e. player avatars, mech paint and mech camos/skins.\n", "As an Inventor, Sims need scrap metal. These can either be bought (at a relatively high price) from the Inventing tool, or it can be collected.\n\nSims can find plenty of scrap at the local junkyard. 
They obtain it by blowing it up; they can alternatively find broken objects and then fix them.\n\nHowever, players can also make sims detonate other objects – including other sims' property. They will usually receive a fine for causing public damage, but it can be useful to take revenge on sims that they dislike, or simply to see a car blow up.\n", "Although there are numerous examples from the genre, Hammerspace usage is not just limited to adventure games. In \"The Sims 2\", \"The Sims 3\" and \"The Sims 4\", the Sims make extensive use of Hammerspace, regularly pulling items out of their back pockets which could not possibly fit there. Examples include rakes, hairdryers, watering cans and bags of flour. In addition they have seemingly limitless personal inventories in which they can carry around almost anything, from a mobile phone to a sports car, without actually having anywhere to store it. These items are also occasionally pulled from the back pocket when used in game (as in the case of mobile phones). Although both games are supposed to mimic reality in many ways, they do still retain cartoon-like elements; Hammerspace was probably implemented to prevent Sims having to trek to a storage shed/closet/drawer etc. every time they wanted to use a certain item, something which would no doubt have been both boring for the player and impractical in terms of gameplay.\n", "Mercenaries, guards and tappers need to be paid; if needed, the player can raise the guards' and tappers' salaries to make them happier. Mercenaries can be left at the base to either heal, train skills or repair items. Occasionally, Jack or Brenda Richards contacts the player with an important objective, such as securing a clean water source. These requests can be ignored but usually cause some kind of hindrance to the player at a later date.\n\nSection::::Gameplay.:Tactical screen.\n", "After completing the story missions, Nick can focus on expanding his trucking company, with his goal being to win the award for having the most successful company at a certain in-game date. If he succeeds, he receives a large cash reward. Failure to do so results in a \"Game Over\" screen, although the player can continue playing the game with no penalties.\n\nSection::::Gameplay.\n", "Using credits, a player can buy wares from a station. These wares may then be flown to another station where they can be sold, ideally for a higher price. However, prices vary - from minute to minute, second to second - depending on demand. The less of a ware there is, the higher its price. As such, the X-Universe has a truly dynamic market-driven economy. A player can capitalize on emergent trends to make vast profits or, just as easily, waste money and time on a bad cargo choice. As a player builds profits they can buy equipment, weapons, ships and stations. The player can acquire an unlimited number of ships and stations, of varying size, shape and function. The player can build factories to produce goods (including weapons and shields) to sell or consume. As the factories require resources, the player can set up ships to perform trading tasks for factories such as buying resources from other stations or selling the product.\n", "The game allows players to shoot their way through walls, blow up and pull down even bigger Radars and Statues, which will shatter into more pieces. You can even blow up an entire bridge if you're being chased on it. 
\"That is the scale of destruction we're after, not really breaking a hole in the wall.\"\n\nSection::::Games.:\"Just Cause 4\" (2018).\n", "Some maps feature other objectives than capturing and protecting scientists. A first kind of alternative objective is breakables: these are computers or machinery that can be broken and cost money to replace. A second kind of alternative objective is resources: these are computer discs or research specimen that can be stolen and give a cash bonus.\n\nPlayers also cost money to their company each time they die, since cloning them back to life is not free.\n\nSection::::Reception.\n\n\"Science and Industry\" has received praise for the quality of its maps and unique weapons, but with some criticism over patchy online play.\n", "Chests drop from the sky randomly during gameplay and contain items that can either help or hinder. Bombs, shields, extra time and lives make up the good offerings, while the bad offerings can send you back a few screens, send you to sleep and waste your time or can even flip the screen upside down.\n", "The earliest first-person shooter example may be \"Ghen War\", released in 1995 for the Sega Saturn, which featured a 3D terrain map generator that allows fully destructible environments. However, the trend to make more and more items and environmental features destroyable by the player hearkens all the way back to the explosive barrels in \"Doom\" (1993). Games like \"\" (1998) also featured major amounts of destroyable objects, in that game a room filled with objects could be turned into an empty room filled only with debris.\n", "The principal advantage of product placement in in-games advertising is visibility and notoriety. For advertisers an ad may be displayed multiple times and a game may provide an opportunity to ally a product's brand image with the image of the game. Such examples include the use Sobe drink in Tom Clancy’s Splinter Cell: Double Agent.\n", "Certain missions give various bonuses for success. These are usually access to a new kind of weapon or piece of equipment, but in a few cases, the player is rewarded for victory with a new type of Herc chassis. In some cases, the equipment is made available later in the game if the player fails the mission, but in other cases, failure means never being able to access the equipment.\n", "Various sectors of a military group report being attacked by some of their own war machines and that a terrorist organization led by a man named Rattlesnake is at the center of each attack. A commando and vehicle specialist code-named Storm is assigned to enter each affected area and progressively stop Rattlesnake's ambitions.\n\nSection::::Gameplay.\n", "The game is played from a top-down isometric view-perspective typical for turn-based strategy games. The player gets credits for every enemy they destroy as well as a mission-bonus dependent on the difficulty of the mission. However, in the beginning of the game the primary source of income is through mining ore, which can be found scattered across the maps. All HERCs can be outfitted with an \"ore extractor\" which when activated collects all mineable ore in the hexagon where the machine is standing.\n", "Gameplay typically consists of the two sides fighting one another in missions, where one side must complete a series of objectives with the other side attempting to stop them doing so. 
For example, several Criminal players may rob a convenience store within the game; the game will then seek out one or more Enforcer players of equivalent skills and other criteria and will issue an all-points bulletin for them to stop the robbery and apprehend or eliminate the Criminals. Players earn money for participating in these missions, which can then be used to upgrade weapons, vehicles, and their character appearances, all of which influence the game.\n", "Non-lethal and lethal weapons, bought or picked up from enemies, can be modified with parts salvaged from other areas. New components and elements, such as the single-use multitool unlocking devices, can be bought from vendors or built from salvage in each area. Using salvage to craft new components requires blueprints discovered in the overworld. Adam can hack a variety of devices, with the hacking divided into two modes. The first has Adam hacking static devices such as computers, which triggers a minigame allowing players to capture points (nodes) and access a device. The second mode involves hacking devices such as laser traps and security robots, triggering an altered minigame where zones on a graph must be triggered to deactivate a device within a time limit.\n", "All weapons and apparel found, regardless of whether they are a makeshift weapon such as a lead pipe or a gun, degrade over time the more they are used, and thus become less effective. For firearms, degrading into poor condition causes them to do less damage and possibly jam when reloading, while apparel that reduces damage becomes less protective as it gradually absorbs damage from attacks. When too much damage is taken, the item breaks and cannot be used. To ensure weapons and apparel are working effectively, such items require constant maintenance and repairs, which can be done in one of two ways. The first method is to find certain vendors that repair items, though how much they can repair an item depends on their skill level, while the cost of the repairs depends upon the item itself. The second method is for players to find a second of the same item that needs repairs (or a comparable item) and salvage parts from it for the repair, though how much they can do depends on their character's own skill level in repairs.\n", "Uranium is currently the only other resource being utilized. This is the resource which only Elite (payer) Heroes can use for various reasons, including the purchase of Gold Weapons, resetting stats, changing factions/planets and obtaining the right to purchase 2 extra unit types from the factory. Typically, Gold Weapons have better statistics than their normal counterparts, so serious paying players equip gold weapons for enhanced performance. Basic Heroes cannot use Uranium to outfit their weapons but can stock uranium, albeit slower than payers, so they can spend it later when they pay to become an Elite Hero.\n", "The resources in the game are magma and energy. Magma is gathered by engineers or harvesters from rocks or manufactured by magma pumps from hot spots. Energy is gathered by engineers or harvesters from trees or morels, or manufactured by power plants.\n\nThere are three types of structures: basic, unit, and defense. The basic structures are power plants, magma pumps, radar, cameras, and vaults. The unit structures are training camps, vehicle factories, hospitals and aircraft factories. 
Lastly, the defensive buildings are gun turrets, cannons, and missile silos.\n\nSection::::Technical features.\n", "BULLET::::- Tower Topper: In this event, set in France (Sharon's home country), the player must use the second button to make their Numan jump from one building to the other before a flame under them disappears; if it does, the Numan falls to the ground and automatically fails the event.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-02504
Why are brass, copper, and bronze used in plumbing?
They're very corrosion resistant, which matters given the constant exposure to water. They're also very malleable (meaning they're easily shaped into tube and pipe) and not toxic, plus the tubing is soft enough to be bent by hand rather than having to fabricate exact curves and lengths. In addition, all three metals are resistant to the growth of bacteria and other microbes. Brass, copper, and bronze are all mostly copper. Admiralty brass, the type of brass you'd normally see in plumbing, is only 30% zinc. Bronze is typically no more than 12% tin. Keeping the metals mostly the same composition also helps limit galvanic corrosion where they touch. Typically you'll see copper tubing and brass fittings, because pure copper doesn't hold its shape very well under the higher stress at a fitting.
[ "Bronze has also been used in coins; most “copper” coins are actually bronze, with about 4 percent tin and 1 percent zinc.\n", "Section::::Materials.:Came.:Brass and copper.\n\nBrass and copper have been used to bring a copper or golden hue to the works. Generally, though, they were used only for windows between about 1890 and 1920. Both metals were often alternatives to zinc for Frank Lloyd Wright designed windows.\n\nSection::::Materials.:Came.:Brass-capped lead.\n\nBrass-capped lead is another type of came used for glasswork projects.\n\nSection::::Materials.:Came.:Lead.\n", "Supplies of the tin and copper used to make bell metal were probably obtained from brass foundries in Kelston and Bristol. The metal was melted in a wood-burning furnace to over and then poured into a mould made from loam, or foundry mud, from the River Chew.\n", "In the Bronze Age, two forms of bronze were commonly used: \"classic bronze\", about 10% tin, was used in casting; and \"mild bronze\", about 6% tin, was hammered from ingots to make sheets. Bladed weapons were mostly cast from classic bronze, while helmets and armor were hammered from mild bronze.\n\nCommercial bronze (90% copper and 10% zinc) and architectural bronze (57% copper, 3% lead, 40% zinc) are more properly regarded as brass alloys because they contain zinc as the main alloying ingredient. They are commonly used in architectural applications.\n", "Some alloys are made by melting and mixing two or more metals. Bronze, an alloy of copper and tin, was the first alloy discovered, during the prehistoric period now known as the Bronze Age. It was harder than pure copper and originally used to make tools and weapons, but was later superseded by metals and alloys with better properties. In later times bronze has been used for ornaments, bells, statues, and bearings. Brass is an alloy made from copper and zinc.\n", "From the late 13th century to the end of the 14th century, purpose-made jetons were produced in England, similar in design to contemporary Edwardian pennies. Although they were made of brass they were often pierced or indented at the centre to avoid them being plated with silver and passed off as real silver coins. By the middle of the 14th century, English jetons were being produced at a larger size, similar to the groat. \n", "During the 18th century brass was largely used in the production of objects for domestic use; the manufacture of large hanging chandeliers also continued, together with wall-sconces and other lighting apparatus. In the latter half of the 19th century there came an increasing demand for ecclesiastical work in England; lecterns, alms dishes, processional crosses and altar furniture were made of brass; the designs were for the greater part adaptations of older work and without any great originality.\n\nSection::::European brass and bronze.:Monumental brasses.\n", "Bronze continued to be used as a metal for various statues and statuettes during Classical Period as can be seen from the Bronze hoard discovered in Chausa Bihar, which consisted of bronze statuettes dating between 2nd BC to 6th Century AD. \n", "By popular tradition the bell metal contained gold and silver, as component parts of the alloy, as it is recorded that rich and devout people threw coins into the furnace when bells were cast in the churchyard. The practice was believed to improve the tone of the bell. 
This, however, is probably erroneous, as there are no authentic analyses of bell metal, ancient or modern, which show that gold or silver has ever been used as a component part of the alloy. If used to any great extent, the addition would injure the tone, not improve it. Small quantities of other metals found in old bell metal are likely to be impurities in the metals used to form the alloy.\n", "Lump metal clay in bronze was introduced in 2008 by Metal Adventures Inc. and in 2009 by Prometheus. Lump metal clays in copper were introduced in 2009 by Metal Adventures Inc. and Aida. Because of the lower cost, the bronze and copper metal clays are used by artists more often than the gold and silver metal clays in the American marketplace. The actual creation time of a bronze or copper piece is also far greater than that of its silver counterpart. Base metal clays, such as bronze, copper, and steel metal clays, are best fired in the absence of oxygen to eliminate the oxidation of the metal by atmospheric oxygen. A means to accomplish this – to place the pieces in activated carbon inside a container – was developed by Bill Struve.\n", "Present-day water-supply systems use a network of high-pressure pumps, and pipes in buildings are now made of copper, brass, plastic (particularly cross-linked polyethylene called PEX, which is estimated to be used in 60% of single-family homes), or other nontoxic material. Due to its toxicity, most cities moved away from lead water-supply piping by the 1920s in the United States, although lead pipes were approved by national plumbing codes into the 1980s, and lead was used in plumbing solder for drinking water until it was banned in 1986. Drain and vent lines are made of plastic, steel, cast iron, or lead.\n", "Singing bowls are also sometimes said to incorporate meteoritic iron. Some modern 'crystal' bowls are made of re-formed crushed synthetic crystal.\n\nThe usual manufacturing technique for standing bells was to cast the molten metal followed by hand-hammering into the required shape. Modern bells/bowls may be made in that way, but may also be shaped by machine-lathing.\n\nThe finished article is often decorated with an inscription such as a message of goodwill, or with decorative motifs such as rings, stars, dots or leaves. Bowls from Nepal sometimes include an inscription in the Devanagari script.\n", "Monumental brass\n\nA monumental brass is a type of engraved sepulchral memorial which in the 13th century began to partially take the place of three-dimensional monuments and effigies carved in stone or wood. Made of hard latten or sheet brass, let into the pavement, and thus forming no obstruction in the space required for the services of the church, they speedily came into general use, and continued to be a favourite style of sepulchral memorial for three centuries.\n\nSection::::In Europe.\n", "Brass is an alloy composed of copper and zinc, usually, for sheet metal and casting, in the proportion of seven parts of the former to three of the latter. Such a combination secures a good, brilliant colour. There are, however, varieties of tone ranging from a pale lemon colour to a deep golden brown, which depends upon a smaller or greater amount of zinc. In early times this metal seems to have been sparingly employed, but from the Middle Ages onward the industry in brass was a very important one, carried out on a vast scale and applied in widely different directions. 
The term \"latten\", which is frequently met with in old documents, is rather loosely employed, and is sometimes used for objects made of bronze; its true application is to the alloy we call brass. In Europe its use for artistic purposes centered largely in the region of the Meuse valley in south-east Belgium, together with north-eastern France, parts of the Netherlands and the Rhenish provinces of which Cologne was the center. As far back as the 11th century the inhabitants of the town of Huy and Dinant are found working this metal; zinc they found in their own country, while for copper they went to Cologne or Dortmund, and later to the mines of the Harz Mountains. Much work was produced both by casting and repoussé, but it was in the former process that they excelled. Within a very short time the term \"dinanderie\" was coined to designate the work in brass which emanated from the foundries of Dinant and other towns in the neighbourhood. Their productions found their way to France, Spain, England and Germany. In London the Dinant merchants, encouraged by Edward III, established a \"Hall\" in 1329 which existed until the end of the 16th century; in France they traded at Rouen, Calais, Paris and elsewhere. The industry flourished for several centuries, but was weakened by quarrels with their rivals at the neighboring town of Bouvignes; in 1466 the town was sacked and destroyed by Charles the Bold. The brass-founders fled to Huy, Namur, Middleburg, Tournai and Bruges, where their work was continued.\n", "The use of bronze dates from remote antiquity. This important metal is an alloy composed of copper and tin, in proportion which vary slightly, but may be normally considered as nine parts of copper to one of tin. Other ingredients which are occasionally found are more or less accidental. The result is a metal of a rich golden brown colour, capable of being worked by casting — a process little applicable to its component parts, but peculiarly successful with bronze, the density and hardness of the metal allowing it to take any impression of a mould, however delicate. It is thus possible to create ornamental work of various kinds.\n", "Piggot states the brass used for machinery and locomotives in England was composed of copper 74.5%, zinc 25%, and lead 0.5%- which would make it a tombac according to Ure.\n\nPiggot's own definition of tombak is problematic at best: \"red brass or tombak as it is called by some, has a great preponderance of copper, from 5 ounces of zinc down to 1/2 ounce of zinc to the pound [sic: copper?]\"\n\nSection::::Common types.:Tempers.\n\nTypical tempers are soft annealed and rolled hard.\n\nSection::::Applications.\n", "Bronze, or bronze-like alloys and mixtures, were used for coins over a longer period. Bronze was especially suitable for use in boat and ship fittings prior to the wide employment of stainless steel owing to its combination of toughness and resistance to salt water corrosion. Bronze is still commonly used in ship propellers and submerged bearings.\n", "Brass was widely used for smaller objects in churches and for domestic use. Flemish and German pictures show candlesticks, holy water stoups, reflectors, censers and vessels for washing the hands as used in churches. The inventories of Church goods in England made the time of the Reformation disclose a very large number of objects in latten which were probably made in the country. In general use was an attractive vessel known as the \"aquamanile\" (cf. Fig. 
6); this is a water-vessel usually in the form of a standing lion, with a spout projecting from his mouth; on the top of the head is an opening for filling the vessel, and a lizard-shaped handle joins the back of the head with the tail. Others are in the form of a horse or ram; a few are in the form of a human bust, and some represent a mounted warrior. They were produced from the 12th to the 15th centuries. Countless are the domestic objects: mortars, small candlesticks, warming pans, trivets, fenders; these date mainly from the 17th and 18th centuries, when brass ornamentation was also frequently applied to clock dials, large and small. Two English developments during the 17th century call for special notice. The first was an attempt to use enamel with brass, a difficult matter, as brass is a bad medium for enamel. A number of objects exist in the form of firedogs, candlesticks, caskets, plaques and vases, the body of which is of brass roughly cast with a design in relief; the hollow spaces between the lines of the design are filled in with patches of white, black, blue or red enamel, with very pleasing results (cf. Fig. 7). The second use of brass is found in a group of locks of intricate mechanism, the cases of which are of brass cast in openwork with a delicate pattern of scroll work and bird forms sometimes engraved. A further development shows solid brass cases covered with richly engraved designs (cf. Fig. 8). The Victoria and Albert Museum of London contains a fine group of these locks; others are \"in situ\" at Hampton Court Palace and in country mansions.\n", "Section::::History.:Hydrometallurgy in Chinese antiquity.\n\nDuring the Song Dynasty, Chinese copper output from domestic mining was in decline and the resulting shortages caused miners to seek alternative methods for extracting copper. A new \"wet process\" for extracting copper from mine water was introduced between the eleventh and twelfth centuries, which helped to mitigate the loss of supply.\n", "Tin-opacified glazes appear in Iraq in the eighth century AD. These originally contained 1–2% PbO; by the eleventh century high-lead glazes had developed, typically containing 20–40% PbO and 5–12% alkali. These were used throughout Europe and the Near East, especially in Iznik ware, and continue to be used today. Glazes with even higher lead content occur in Spanish and Italian maiolica, with up to 55% PbO and as low as 3% alkali. Adding lead to the melt allows the formation of tin oxide more readily than in an alkali glaze: tin oxide precipitates into crystals in the glaze as it cools, creating its opacity.\n", "In the early Islamic world silver, though continuing in use for vessels at the courts of princes, was much less widely used by the merely wealthy. Instead, vessels of the copper alloys bronze and brass included inlays of silver and gold in their often elaborate decoration, leaving less of a place for niello. Other black fillings were also used, and museum descriptions are often vague about the actual substances involved. \n", "The General Instruction of the Roman Missal lays down rules for patens: \"Sacred vessels should be made from precious metal. 
If they are made from metal that rusts or from a metal less precious than gold, they should generally be gilded on the inside.\" However, provisions for vessels made from non-precious metals are made as well, provided they are \"made from other solid materials which in the common estimation in each region are considered precious or noble.\"\n", "Although cheaper and easier to manufacture, assemble, and repair, the wooden pumps manufactured from then on would most likely not have been durable enough for maritime use, which is why we usually find lead pipes associated with bilge pumps, bilge wells, and hydraulic devices of other functions aboard ships. The amphora shipwreck discovered at Grado in Gorizia, Italy, which dates to around 200 CE, contained what archaeologists hypothesized to be a 'hydraulics' system to change the water in live fish tanks, since other evidence indicates the ship’s involvement in the processed fish trade (Beltrame and Gaddi 2005). \n", "For example, it is used to produce coins, badges, buttons, precision-energy springs and precision parts with small or polished surface features.\n", "The earliest existing brass is that of Bishop Ysowilpe at Verden, in Germany, which dates from 1231 and is on the model of an incised stone, as if by an artist accustomed to work in that material. In England the oldest example is at Stoke D'Abernon church, in Surrey, to the memory of Sir John D'Abernon, who died in 1277. Numerous brasses are to be found in Belgium, and some in France and the Netherlands. Apart from their artistic attractiveness, these ornamental brasses are of the utmost value in faithfully depicting the costumes of the period, ecclesiastical, civil or military; they furnish also appropriate inscriptions in beautiful lettering (cf. Brass Gallery).\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-05036
Why do some injuries "sting", while others may give a more "dull" pain?
The type of pain is an indication of what kind of injury you have; it's your brain categorizing what's happening. For example, if you have a sore muscle, the pain is probably dull because your brain knows it's pain, but not an immediate-danger kind of pain. Then take stinging pain. That's meant to warn you of something immediate or major. If you step on a broken hanger, you get a stinging pain that says "stop stepping on a hanger, you idiot, and remove your foot immediately." It's evolutionary: the character of the pain tells you the severity and immediacy of the injury.
[ "Peripheral injuries trigger complex changes in the central nociceptive system which can lead to central sensitization that enhances the sensitivity and responsiveness of the brain regions involved in sensory processing. In some cases, these physiological responses progress to neuropathic centralized pain.\n\nSection::::Treatments.\n", "BULLET::::- Multiple bee-sting like pains in the affected area\n\nBULLET::::- Occasionally, aching in the groin area or pain spreading across the buttocks\n\nBULLET::::- Usually more sensitive to light touch than to firm pressure\n\nBULLET::::- Hyper sensitivity to heat (warm water from shower feels like it is burning the area)\n\nBULLET::::- Occasionally, patients may complain of itching or a bothersome sensation rather than pain in the affected area.\n\nThe entire distribution of the nerve is rarely affected. Usually, the unpleasant sensation(s) affect only part of the skin supplied by the nerve.\n\nSection::::Cause.\n", "Spontaneous pain or allodynia (pain resulting from a stimulus which would not normally provoke pain, such as a light touch of the skin) is not limited to the territory of a single peripheral nerve and is disproportionate to the inciting event.\n\nBULLET::::1. There is a history of edema, skin blood flow abnormality, or abnormal sweating in the region of the pain since the inciting event.\n\nBULLET::::2. No other conditions can account for the degree of pain and dysfunction.\n", "BULLET::::1. The presence of continuing pain, allodynia, or hyperalgesia after a nerve injury, not necessarily limited to the distribution of the injured nerve\n\nBULLET::::2. Evidence at some time of edema, changes in skin blood flow, or abnormal sudomotor activity in the region of pain\n\nBULLET::::3. The diagnosis is excluded by the existence of any condition that would otherwise account for the degree of pain and dysfunction.\n", "The sting of a weever is acute and intense. The pain frequently is radiated to the area around the limb. The seriousness of the pain reaches its peak 30 minutes after the sting, and then slowly decreases. However, some pain (or other sensation, such as a tingle) may continue to affect the area for up to 24 hours. Very rarely, pain can be propagated to the tributary lymph nodes, i.e. those in the groin (when the sting is on the sole of the foot), or those in the armpit (if the sting is on the hands).\n", "BULLET::::- Foreign bodies. Sharp, small foreign bodies can penetrate the skin leaving little surface wound but causing internal injury and internal bleeding. For a glass foreign body, \"frequently, an innocent skin wound disguises the extensive nature of the injuries beneath\". First-degree nerve injury requires a few hours to a few weeks to recover. If a foreign body passes by a nerve and causes first-degree nerve injury during entry, then the sensation of the foreign body or pain due to internal wounding may be delayed by a few hours to a few weeks after entry. A sudden increase in pain during the first few weeks of wound healing could be a sign of a recovered nerve reporting internal injuries rather than a newly developed infection.\n", "\"Nociceptive\" pain is a physiological response described as stabbing, throbbing, aching, or sharp. Nociceptive pain is considered to be an appropriate to painful stimuli that occurs as a result from underlying tissue damage and may be acute or chronic. 
Nociceptive pain that is persistent may due to conditions causing ongoing tissue damage such as ischemia, or edema.\n\n\"Neuropathic\" pain is associated with chronic pain and results from a nervous system dysfunction, which causes an inappropriate response to pain. Neuropathic pain is described as burning or tingling persistent pain.\n", "BULLET::::- Referred pain is often experienced on the same side of the body as the source, but not always.\n\nSection::::Mechanism.\n\nThere are several proposed mechanisms for referred pain. Currently there is no definitive consensus regarding which is correct. The cardiac general visceral sensory pain fibers follow the sympathetics back to the spinal cord and have their cell bodies located in thoracic dorsal root ganglia 1-4(5).\n\nAs a general rule, in the thorax and abdomen, general visceral afferent (GVA) pain fibers follow sympathetic fibers back to the same spinal cord segments that gave rise to the preganglionic sympathetic fibers.\n", "BULLET::::- There are sensory and motor deficits distal to the site of lesion.\n\nBULLET::::- There is no nerve conduction distal to the site of injury (3 to 4 days after injury).\n\nBULLET::::- EMG shows fibrillation potentials (FP), and positive sharp waves (2 to 3 weeks postinjury).\n\nBULLET::::- Axonal regeneration occurs and recovery is possible without surgical treatment. Sometimes surgical intervention because of scar tissue formation is required.\n\nSection::::Seddon's classification.:Neurotmesis (Class III).\n", "Section::::Classification.:Neuropathic.\n\nNeuropathic pain is caused by damage or disease affecting any part of the nervous system involved in bodily feelings (the somatosensory system). Neuropathic pain may be divided into peripheral, central, or mixed (peripheral and central) neuropathic pain. Peripheral neuropathic pain is often described as \"burning\", \"tingling\", \"electrical\", \"stabbing\", or \"pins and needles\". Bumping the \"funny bone\" elicits acute peripheral neuropathic pain.\n\nSection::::Classification.:Neuropathic.:Allodynia.\n", "Section::::Types.\n\nSection::::Types.:Neurapraxia.\n", "In 1943, Seddon described three basic types of peripheral nerve injury that include:\n\nSection::::Seddon's classification.:Neurapraxia (Class I).\n\nIt is a temporary interruption of conduction without loss of axonal continuity. In neurapraxia, there is a physiologic block of nerve conduction in the affected axons.\n\nOther characteristics:\n\nBULLET::::- It is the mildest type of peripheral nerve injury.\n\nBULLET::::- There are sensory-motor problems distal to the site of injury.\n\nBULLET::::- The endoneurium, perineurium, and the epineurium are intact.\n\nBULLET::::- There is no wallerian degeneration.\n", "Central neuropathic pain is found in spinal cord injury, multiple sclerosis, and some strokes. Aside from diabetes (see diabetic neuropathy) and other metabolic conditions, the common causes of painful peripheral neuropathies are herpes zoster infection, HIV-related neuropathies, nutritional deficiencies, toxins, remote manifestations of malignancies, immune mediated disorders and physical trauma to a nerve trunk. 
Neuropathic pain is common in cancer as a direct result of cancer on peripheral nerves (e.g., compression by a tumor), or as a side effect of chemotherapy (chemotherapy-induced peripheral neuropathy), radiation injury or surgery.\n\nSection::::Mechanisms.\n\nSection::::Mechanisms.:Peripheral.\n", "Nociceptive pain may also be classed according to the site of origin and divided into \"visceral\", \"deep somatic\" and \"superficial somatic\" pain. Visceral structures (e.g., the heart, liver and intestines) are highly sensitive to stretch, ischemia and inflammation, but relatively insensitive to other stimuli that normally evoke pain in other structures, such as burning and cutting. Visceral pain is diffuse, difficult to locate and often referred to a distant, usually superficial, structure. It may be accompanied by nausea and vomiting and may be described as sickening, deep, squeezing, and dull. \"Deep somatic\" pain is initiated by stimulation of nociceptors in ligaments, tendons, bones, blood vessels, fasciae and muscles, and is dull, aching, poorly-localized pain. Examples include sprains and broken bones. \"Superficial\" pain is initiated by activation of nociceptors in the skin or other superficial tissue, and is sharp, well-defined and clearly located. Examples of injuries that produce superficial somatic pain include minor wounds and minor (first degree) burns.\n", "Section::::Mechanism.:Thalamic-convergence.\n\nThalamic convergence suggests that referred pain is perceived as such due to the summation of neural inputs in the brain, as opposed to the spinal cord, from the injured area and the referred area. Experimental evidence on thalamic convergence is lacking. However, pain studies performed on monkeys revealed convergence of several pathways upon separate cortical and subcortical neurons.\n\nSection::::Laboratory testing methods.\n", "BULLET::::3. Displays protective motor reactions that might include reduced use of an affected area such as limping, rubbing, holding or autotomy\n\nBULLET::::4. Has opioid receptors and shows reduced responses to noxious stimuli when given analgesics and local anaesthetics\n\nBULLET::::5. Shows trade-offs between stimulus avoidance and other motivational requirements\n\nBULLET::::6. Shows avoidance learning\n\nBULLET::::7. High cognitive ability and sentience\n\nSection::::Vertebrates.\n\nSection::::Vertebrates.:Fish.\n", "A nerve contains sensory fibers, motor fibers, or both. Sensory fibers lesions cause the sensory problems below to the site of injury. Motor fibers injuries may involve lower motor neurons, sympathetic fibers, and or both.\n\nAssessment items include:\n\nBULLET::::- Sensory fibers that send sensory information to the central nervous system.\n\nBULLET::::- Motor fibers that allow movement of skeletal muscle.\n\nBULLET::::- Sympathetic fibers that innervate the skin and blood vessels of the four extremities.\n", "BULLET::::1. The presence of an initiating noxious event or a cause of immobilization\n\nBULLET::::2. Continuing pain, allodynia (perception of pain from a nonpainful stimulus), or hyperalgesia (an exaggerated sense of pain) disproportionate to the inciting event\n\nBULLET::::3. Evidence at some time of edema, changes in skin blood flow, or abnormal sudomotor activity in the area of pain\n\nBULLET::::4. 
The diagnosis is excluded by the existence of any condition that would otherwise account for the degree of pain and dysfunction.\n\nAccording to the IASP, CRPS II (causalgia) is diagnosed as follows:\n", "BULLET::::- A history of widespread pain lasting more than three months – affecting all four quadrants of the body, i.e., both sides, and above and below the waist.\n\nBULLET::::- Tender points – there are 18 designated possible tender points (although a person with the disorder may feel pain in other areas as well). Diagnosis is no longer based on the number of tender points.\n", "BULLET::::- Numbness or paralysis may develop immediately or come on gradually as bleeding or swelling occurs in or around the spinal cord\n\nSection::::Mechanism.\n", "Section::::Theory.:Three dimensions of pain.\n\nIn 1968 Ronald Melzack and Kenneth Casey described chronic pain in terms of its three dimensions:\n\nBULLET::::- \"sensory-discriminative\" (sense of the intensity, location, quality and duration of the pain),\n\nBULLET::::- \"affective-motivational\" (unpleasantness and urge to escape the unpleasantness), and\n\nBULLET::::- \"cognitive-evaluative\" (cognitions such as appraisal, cultural values, distraction and hypnotic suggestion).\n", "Depending on the sensation associated with neuropathic pain, it may be considered as acute or chronic. Acute neuropathic pain is associated with burning, squeezing, throbbing, shooting, or electric shock sensations that resolve. Neuropathic sensations such as numbness, tingling, and prickling are considered as chronic neuropathic pain. Chronic neuropathic pain may be intermittent or continuous, and may remain unresolved post tissue healing.\n\nSection::::Classifications.\n", "Neuropathic pain is divided into \"peripheral\" (originating in the peripheral nervous system) and \"central\" (originating in the brain or spinal cord). Peripheral neuropathic pain is often described as \"burning\", \"tingling\", \"electrical\", \"stabbing\", or \"pins and needles\".\n\nSection::::Causes.\n\nSection::::Causes.:Pathophysiology.\n", "Section::::Epidemiology.\n\nBULLET::::- The number of discharges related to median nerve injuries decreased from 3,402 in 1993 to 2,737 in 2006.\n\nBULLET::::- The mean hospital charges in nominal dollars increased from $9,257 to $27,962 between 1993 and 2006.\n\nBULLET::::- 37.1% of patients in 2006 presenting with median nerve injuries needed acute repair.\n\nBULLET::::- Median nerve injuries were the least likely to be admitted to the emergency room out of all peripheral nerve injuries (median nerve 68.89%, ulnar nerve 71.3% and radial nerve 77.06%).\n", "Neuroglia (\"glial cells\") may play a role in central sensitization. Peripheral nerve injury induces glia to release proinflammatory cytokines and glutamate—which, in turn influence neurons.\n\nSection::::Mechanisms.:Cellular.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-12580
Why does the alcohol in mouthwash not make you drunk through your palate? Supposedly you use a straw with alcoholic drinks to get drunk faster because the drink passes over your palate.
You aren't ingesting enough alcohol to get drunk. > Supposedly you use a straw with alcoholic drinks to get drunk faster because the drink passes over your palate. That's just 100% false. A straw makes no difference; all that matters is how much alcohol you ingest over a given period of time.
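A rough back-of-the-envelope comparison makes the point concrete. Every number below is an illustrative assumption (rinse volume, mouthwash strength, how much is incidentally swallowed), not a measured value:

```python
# Illustrative comparison: ethanol from a mouthwash rinse vs. one standard drink.
# Every input here is an assumption for the sake of the example.
ETHANOL_DENSITY = 0.789  # g/mL

def ethanol_grams(volume_ml, abv):
    """Grams of ethanol in a liquid of the given volume and alcohol fraction."""
    return volume_ml * abv * ETHANOL_DENSITY

rinse = ethanol_grams(volume_ml=20, abv=0.27)  # a 20 mL rinse of 27% mouthwash
swallowed = rinse * 0.05                       # assume ~95% is spat back out
standard_drink = 14.0                          # g of ethanol in one US standard drink

print(f"ethanol actually ingested from the rinse: ~{swallowed:.2f} g")
print(f"one standard drink:                       ~{standard_drink:.0f} g")
# ~0.21 g vs 14 g: the rinse delivers well under 2% of a single drink's ethanol.
```

Under these assumptions the rinse is negligible, which is why rinsing (rather than swallowing the bottle) cannot get you drunk.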
[ "Mouth alcohol can also be created in other ways. Dentures, some have theorized, will trap alcohol, although experiments have shown no difference if the normal 15 minute observation period is observed. Periodontal disease can also create pockets in the gums which will contain the alcohol for longer periods. Also known to produce false results due to residual alcohol in the mouth is passionate kissing with an intoxicated person. Recent use of mouthwash or breath fresheners can skew results upward as they can contain fairly high levels of alcohol.\n\nSection::::Common sources of error.:Testing during absorptive phase.\n", "Alcohol is added to mouthwash not to destroy bacteria but to act as a carrier agent for essential active ingredients such as menthol, eucalyptol and thymol which help to penetrate plaque. Sometimes a significant amount of alcohol (up to 27% vol) is added, as a carrier for the flavor, to provide \"bite\". Because of the alcohol content, it is possible to fail a breathalyzer test after rinsing although breath alcohol levels return to normal after 10 minutes. In addition, alcohol is a drying agent, which encourages bacterial activity in the mouth, releasing more malodorous volatile sulfur compounds. Therefore, alcohol-containing mouthwash may temporarily worsen halitosis in those who already have it, or indeed be the sole cause of halitosis in other individuals.\n", "The Dry-Gas Method utilizes a portable calibration standard which is a precise mixture of alcohol and inert nitrogen available in a pressurized canister. Initial equipment costs are less than alternative methods and the steps required are fewer. The equipment is also portable allowing calibrations to be done when and where required.\n\nThe Wet Bath Method utilizes an alcohol/water standard in a precise specialized alcohol concentration, contained and delivered in specialized simulator equipment. Wet bath apparatus has a higher initial cost and is not intended to be portable. The standard must be fresh and replaced regularly.\n", "Action: The mullen mouth and straight bar are fairly similar in action, placing pressure on the tongue, lips, and bars. The mullen provides extra space for the tongue, instead of constantly pushing into it, resulting in more tongue relief, and making it more comfortable, but the mullen does not have as high of a port as a curb, thus does not offer full tongue relief. This bit is generally considered a very mild mouthpiece, although this varies according to the type of bit leverage (snaffle, pelham or curb), and improper use may make it harsh, since the majority of the bit pressure is applied on the sensitive tongue.\n", "Avoidance of ethanol is the safest, surest, and cheapest treatment. Indeed, surveys find a positive correlation between high incidences of glu487lys ALDH2 allele-related alcohol-induced respiratory reactions as well as other causes of these reactions and low levels of alcohol consumption, alcoholism, and alcohol-related diseases. Evidently, people suffering these reaction self-impose avoidance behavior. There is a proviso here: ethanol, at surprisingly high concentrations, is used as a solvent to dissolve many types of medicines and other ingredients. This pertains particularly to liquid cold medicines and mouthwashes. 
Ethanol avoidance includes avoiding the ingestion of and, depending on an individual's history, mouth washing with, such agents.\n", "On the other hand, it is alleged that products such as mouthwash or breath spray can \"fool\" breath machines by significantly raising test results. Listerine mouthwash, for example, contains 27% alcohol. The breath machine is calibrated with the assumption that the alcohol is coming from alcohol in the blood diffusing into the lung rather than directly from the mouth, so it applies a partition ratio of 2100:1 in computing blood alcohol concentration—resulting in a false high test reading. To counter this, officers are not supposed to administer a preliminary breath test for 15 minutes after the subject eats, vomits, or puts anything in their mouth. In addition, most instruments require that the individual be tested twice at least two minutes apart. Mouthwash or other mouth alcohol will have somewhat dissipated after two minutes and cause the second reading to disagree with the first, requiring a retest. (Also see the discussion of the \"slope parameter\" of the Intoxilyzer 5000 in the \"Mouth Alcohol\" section above.)\n", "High doses of alcohol severely disrupt the storage process of semantic memories. Alcohol was found to impair the storage of novel stimuli but not that of previously learned information. Since alcohol affects the central nervous system, it hinders semantic storage functioning by restricting the consolidation of the information from encoding.\n", "Reduced mean plaque scores and reduced marginal bleeding scores were achieved more effectively from chlorhexidine irrigation than from the use of chlorhexidine mouthwash. Listerine mouthwash was found to be statistically significantly better than a placebo at attaining reduced mean plaque scores and reduced marginal bleeding scores. When Listerine mouthwash was used twice daily for 30 seconds in addition to routine oral hygiene, it was shown that a reduction of 54% in mean plaque and 34% in marginal bleeding compared to a placebo. Chlorhexidine irrigations reduced mean plaque by 20% and marginal bleeding by 35% in comparison to a chlorhexidine mouthwash. Chlorhexidine is the most effective antiplaque agent used in the mouth to date. Reducing the mean plaque scores and the marginal bleeding scores contributes to both the prevention and the treatment of peri-implant mucositis. Initially, the use of mouthwashes was only proposed for patients with physical disabilities which would result in decreased manual dexterity and hence make active cleaning difficult. However, it is now thought that this will lead to less peri-implant mucositis being caused in all implant patients. Despite this, there have been concerns about the link between mouthwashes containing alcohol and the incidence of oral cancer.\n", "Alcohol's negative effects on learning, spatial abilities and memory has been shown in many studies. This raises a question: does using alcohol in combination with other substances impair cognitive functioning even more? One study decided to try to determine if polysubstance users who also abused alcohol would display poorer performance on a verbal learning and memory test in comparison to those who abused alcohol specifically. The California Verbal Learning Test (CVLT) was used due to its ability to \"quantify small changes in verbal learning and memory\" by evaluating errors made during the test and the strategies used to make those errors. 
The results of this study showed that the group of polysubstance and alcohol abusers did perform poorly on the CVLT recall and recognition tests in comparison to the group of alcohol abusers only, which implies that alcohol and drug abuse combined impaired the memory and learning of the group of polysubstance and alcohol abusers in a different way than the effects of alcohol alone can explain.\n", "BULLET::::- great lung volumes are used,\n\nBULLET::::- it is accompanied by larger facial movements, though these do not aid as much as sound changes\n\nThese changes cannot be controlled by instructing a person to speak as they would in silence, though people can learn control with feedback.\n\nThe Lombard effect also occurs following laryngectomy when people following speech therapy talk with esophageal speech.\n\nSection::::Mechanisms.\n", "Oil pulling has received little study and there is little evidence to support claims made by the technique's advocates. When compared with chlorhexidine in one small study, it was found to be less effective at reducing oral bacterial load, otherwise the health claims of oil pulling have failed scientific verification or have not been investigated. There is a report of lipid pneumonia caused by accidental inhalation of the oil during oil pulling.\n", "Variance in how much one breathes out can also give false readings, usually low. This is due to biological variance in breath alcohol concentration as a function of the volume of air in the lungs, an example of a factor which interferes with the liquid-gas equilibrium assumed by the devices. The presence of volatile components is another example of this; mixtures of volatile compounds can be more volatile than their components, which can create artificially high levels of ethanol (or other) vapors relative to the normal biological blood/breath alcohol equilibrium.\n\nSection::::Common sources of error.:Mouth alcohol.\n", "A 2003 episode of the science television show \"MythBusters\" tested a number of methods that supposedly allow a person to fool a breath analyzer test. The methods tested included breath mints, onions, denture cream, mouthwash, pennies and batteries; all of these methods proved ineffective. The show noted that using these items to cover the smell of alcohol may fool a person, but, since they will not actually reduce a person's BrAC, there will be no effect on a breath analyzer test regardless of the quantity used, if any, it appeared that using mouthwash only raised the BrAC. Pennies supposedly produce a chemical reaction, while batteries supposedly create an electrical charge, yet neither of these methods affected the breath analyzer results.\n", "The addition of the active ingredients means the ethanol is considered to be undrinkable, known as denatured alcohol, and it is therefore not regulated as an alcoholic beverage in the United States. (Specially Denatured Alcohol Formula 38-B, specified in Title 27, Code of Federal Regulations, Part 21, Subpart D) However, consumption of mouthwash to obtain intoxication does occur, especially among alcoholics and underage drinkers.\n\nSection::::Safety.\n", "For many patients, however, the mechanical methods could be tedious and time-consuming and additionally some local conditions may render them especially difficult. 
Chemotherapeutic agents, including mouthrinses, could have a key role as adjuncts to daily home care, preventing and controlling supragingival plaque, gingivitis and oral malodor.\n", "Other substances that might reduce the BrAC reading include a bag of activated charcoal concealed in the mouth (to absorb alcohol vapor), an oxidizing gas (such as NO, Cl, O, etc.) that would fool a fuel cell type detector, or an organic interferent to fool an infrared absorption detector. The infrared absorption detector is more vulnerable to interference than a laboratory instrument measuring a continuous absorption spectrum since it only makes measurements at particular discrete wavelengths. However, due to the fact that any interference can only cause higher absorption, not lower, the estimated blood alcohol content will be overestimated.\n", "It is hypothesized that alcohol mouthwashes acts as a carcinogen (cancer-inducing). Generally, there is no scientific consensus about this. One review stated:\n", "On 15 July 2012, Molumphy was again selected at midfield when Waterford contested their fourth successive Munster final. He ended the game on the losing side following a 2-17 to 0-16 defeat by Tipperary.\n\nOn 5 January 2013, Waterford manager Michael Ryan confirmed Molumphy's unavailability to the team for the 2013 season due to overseas work commitments with the army in Lebanon.\n", "Minor and transient side effects of mouthwashes are very common, such as taste disturbance, tooth staining, sensation of a dry mouth, etc. Alcohol-containing mouthwashes may make dry mouth and halitosis worse since it dries out the mouth. Soreness, ulceration and redness may sometimes occur (e.g. aphthous stomatitis, allergic contact stomatitis) if the person is allergic or sensitive to mouthwash ingredients such as preservatives, coloring, flavors and fragrances. Such effects might be reduced or eliminated by diluting the mouthwash with water, using a different mouthwash (e.g. salt water), or foregoing mouthwash entirely.\n", "However, mouthing may also be iconic, as in the word for (of food or drink) in ASL, UtCbf\", where the mouthing suggests something hot in the mouth and does not correspond to the English word \"hot\".\n\nMouthing is an essential element of cued speech and simultaneous sign and speech, both for the direct instruction of oral language and to disambiguate cases where there is not a one-to-one correspondence between sign and speech. However, mouthing does not always reflect the corresponding spoken word; when signing 'thick' in Auslan (Australian Sign Language), for example, the mouthing is equivalent to spoken \"fahth\".\n", "The problem with mouth alcohol being analyzed by the breath analyzer is that it was not absorbed through the stomach and intestines and passed through the blood to the lungs. In other words, the machine's computer is mistakenly applying the partition ratio (2100:1, see above) and multiplying the result. Consequently, a very tiny amount of alcohol from the mouth, throat or stomach can have a significant impact on the breath-alcohol reading.\n", "EE has been found to have disproportionate effects on liver protein synthesis and VTE risk regardless of whether the route of administration is oral, transdermal, or vaginal, indicating that the use of parenteral routes over the oral route does not result in EE having proportional hepatic actions relative to non-hepatic actions. However, the potency of EE on liver protein synthesis is in any case reduced with parenteral administration. 
A dosage of 10 µg/day vaginal EE has been found to be equivalent to 50 µg oral EE in terms of effects on liver protein synthesis, such as stimulation of hepatic SHBG production. As such, parenteral EE, which bypasses the first pass through the liver that occurs with oral EE, has been found to have a 5-fold lower impact on liver protein synthesis by weight than oral EE. In contrast to EE as well as to oral estradiol, transdermal estradiol shows few or no effects on liver protein synthesis at typical menopausal dosages.\n", "Another study published in \"Sleep\" (2008) on the influence of nasal resistance (NAR) on oral device treatment outcome in OSA demonstrates the need for an interdisciplinary approach between ENT surgeons and sleep physicians to treating OSA. The study suggests that higher levels of NAR may negatively affect outcome with MAS and subsequently methods to lower nasal resistance may improve the outcome of oral device treatment.\n\nSection::::Therapy alternatives.:Tongue retaining device.\n", "BULLET::::- Dr. Phyllis (voiced by Hallie Todd): A dugong who is a physician that helps people with their problems, especially Brandy and Whiskers's problems. She recommends Brandy to be like Mr. Whiskers, and Mr. Whiskers to be like Brandy, which by the way, ticked off Brandy. After Brandy turns the tables on Whiskers into going back to his normal self, they promised to Dr. Phyllis to never say THE blaming word ever again.\n\nBULLET::::- Gina (voiced by Jennifer Hale): Gina is an orange Coati that Whiskers fell in love with. She appeared in \"Cyranosaurus Rex\".\n", "It is proposed that verbalization requires a shift to verbal processing, and this shift obstructs the application of non-verbal (face-specific) processing in the following face recognition test. The key difference from other hypotheses is whether an operation-specific representation is postulated or not.\n" ]
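One of the passages above notes that breath testers multiply the measured breath alcohol by a fixed 2100:1 partition ratio, on the assumption that the vapor diffused out of blood in the lungs. A small sketch of that inference (the vapor concentration is made up purely for illustration) shows why alcohol still sitting in the mouth inflates the reading:

```python
# Sketch of the breath-tester inference described in the passage above. The
# instrument assumes blood ethanol concentration = 2100 x breath concentration,
# so vapor that came straight from mouthwash (never from the blood) gets
# multiplied by 2100 as well.
PARTITION_RATIO = 2100

def implied_bac(breath_g_per_ml):
    """Blood alcohol (g per 100 mL) the device infers from breath concentration."""
    return breath_g_per_ml * PARTITION_RATIO * 100  # per mL of blood -> per 100 mL

mouth_vapor = 5e-7  # g of ethanol per mL of breath, purely from a recent rinse
print(f"implied BAC: {implied_bac(mouth_vapor):.3f} g/100 mL")
# 0.105 g/100 mL -- over a typical 0.08 limit, from alcohol that was never in the blood.
```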
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-03528
Why do some screens' colors get distorted when viewed from extreme angles?
The crystals in some types of Liquid Crystal Displays (LCD for short) are carefully aligned in a way that is best for viewing from the front. That's the "TN" type. If you're viewing from an angle, the light isn't shaped the way it should be, and that causes the distortion. There are more modern display types that do not suffer from this issue. EDIT: I suppose I might also add that the crystals act as tiny shutters sitting in front of colored filters. Essentially, the backlight passes through them to produce the colors the screen actually has to display. How the crystals do this differs between panel types: the way they work in TN makes it suffer the worst distortion as a side effect, VA is an improvement, and IPS offers the best viewing angles. Though it is important to consider all the other aspects of the various panels when purchasing one, not just the viewing angles.
[ "TN displays suffer from limited viewing angles, especially in the vertical direction. Colors will shift when viewed off-perpendicular. In the vertical direction, colors will shift so much that they will invert past a certain angle.\n", "An example of pixel shape affecting \"resolution\" or perceived sharpness: displaying more information in a smaller area using a higher resolution makes the image much clearer or \"sharper\". However, most recent screen technologies are fixed at a certain resolution; making the resolution lower on these kinds of screens will greatly decrease sharpness, as an interpolation process is used to \"fix\" the non-native resolution input into the display's native resolution output.\n", "In offset printing, colors are output on separate lithographic plates. Failing to use the correct set of angles to output every color may lead to a sort of optical noise called a moiré pattern which may appear as bands or waves in the final print. There is another disadvantage associated with incorrect sets of angle values, as the colors will look dimmer due to overlapping.\n\nWhile the angles depend on how many colors are used and the preference of the press operator, typical CMYK process printing uses any of the following screen angles:\n", "When different screens are combined, a number of distracting visual effects can occur, including the edges being overly emphasized, as well as a moiré pattern. This problem can be reduced by rotating the screens in relation to each other. This screen angle is another common measurement used in printing, measured in degrees clockwise from a line running to the left (9 o'clock is zero degrees).\n", "Some LCDs compensate the inter-pixel color mix effect by having borders between pixels slightly larger than borders between subpixels. Then, in the example above, a viewer of such an LCD would see a blue line appearing adjacent to a red line instead of a single magenta line.\n\nSection::::PenTile.:Example with - alternated stripes layout.\n", "The vulnerability was specific to the monitors’ on-screen-display (OSD) controllers, which are used to control and adjust viewing options on the screen, such as brightness, contrast or horizontal/vertical positioning. However, as Cui, Kataria and Charbonneau noted in their talk abstract for the 2016 REcon security conference, with the Monitor Darkly exploit, the OSD can also be used to “read the content of the screen, change arbitrary pixel values, and execute arbitrary code supplied through numerous control channels.”\n", "A fortunate side-effect of inversion (see above) is that, for most display material, what little cross-talk there is largely cancelled out. For most practical purposes, the level of crosstalk in modern LCDs is negligible.\n\nCertain patterns, particularly those involving fine dots, can interact with the inversion and reveal visible cross-talk. If you try moving a small Window in front of the inversion pattern (above) which makes your screen flicker the most, you may well see cross-talk in the surrounding pattern.\n\nDifferent patterns are required to reveal cross-talk on different displays (depending on their inversion scheme).\n", "Today's displays, being driven by digital signals (such as DVI, HDMI and DisplayPort), and based on newer fixed-pixel digital flat panel technology (such as liquid crystal displays), can safely assume that all pixels are visible to the viewer. 
On digital displays driven from a digital signal, therefore, no adjustment is necessary because all pixels in the signal are unequivocally mapped to physical pixels on the display. As overscan reduces picture quality, it is undesirable for digital flat panels; therefore, 1:1 pixel mapping is preferred. When driven by analog video signals such as VGA, however, displays are subject to timing variations and cannot achieve this level of precision.\n", "When projecting images onto a completely flat screen, the distance light has to travel from its point of origin (i.e., the projector) increases the farther away the destination point is from the screen's center. This variance in the distance traveled results in a distortion phenomenon known as the pincushion effect, where the image at the left and right edges of the screen becomes bowed inwards and stretched vertically, making the entire image appear blurry.\n", "Any CRT that can run 1280×1024 can also run 1280×960, which has the standard 4:3 ratio. A flat panel TFT screen, including one designed for 1280×1024, will show stretching distortion when set to display any resolution other than its native one, as the image needs to be interpolated to fit in the fixed grid display. Some TFT displays do not allow a user to disable this, and will prevent the upper and lower portions of the screen from being used, forcing a \"letterbox\" format when set to a 4:3 ratio.\n", "BULLET::::- Limited viewing angle in some older or cheaper monitors, causing color, saturation, contrast and brightness to vary with user position, even within the intended viewing angle.\n\nBULLET::::- Uneven backlighting in some monitors (more common in IPS-types and older TNs), causing brightness distortion, especially toward the edges (\"backlight bleed\").\n\nBULLET::::- Black levels may not be as dark as required because individual liquid crystals cannot completely block all of the backlight from passing through.\n", "The image may seem garbled, poorly saturated, of poor contrast, blurry or too faint outside the stated viewing angle range; the exact mode of \"failure\" depends on the display type in question. For example, some projection screens reflect more light perpendicular to the screen and less light to the sides, making the screen appear much darker (and sometimes colors distorted) if the viewer is not in front of the screen. Many manufacturers of projection screens thus define the viewing angle as the angle at which the luminance of the image is exactly half of the maximum. With LCD screens, some manufacturers have opted to measure the contrast ratio, and report the viewing angle as the angle where the contrast ratio exceeds 5:1 or 10:1, giving minimally acceptable viewing conditions.\n", "BULLET::::- Viewing angle: The maximum angle at which the display can be viewed with acceptable quality. The angle is measured from one direction to the opposite direction of the display, such that the maximum viewing angle is 180 degrees. Outside of this angle the viewer will see a distorted version of the image being displayed. The definition of what is acceptable quality for the image can be different among manufacturers and display types. Many manufacturers define this as the point at which the luminance is half of the maximum luminance. 
Some manufacturers define it based on contrast ratio and look at the angle at which a certain contrast ratio is realized.\n", "BULLET::::- The license and terms of service box of LG Mobile Support Tool 1.8.9.0 can easily exceed the height of the screen (768px), with no way to scroll down to the 'OK' button. If the display is wide enough, then one remedy is to use the display OSD, or the computer's video adapter software (if available) to temporarily rotate the visible display output sideways 90° or 270° to reach those buttons.\n", "This way of proceeding is suitable only when the display device does not exhibit \"loading effects\", which means that the luminance of the test pattern is varying with the size of the test pattern. Such loading effects can be found in CRT-displays and in PDPs. A small test pattern (e.g. 4% window pattern) displayed on these devices can have significantly higher luminance than the corresponding full-screen pattern because the supply current may be limited by special electronic circuits.\n\nSection::::Full-swing contrast.\n", "The 1280×1024 resolution is not the standard 4:3 aspect ratio, but 5:4 (1.25:1 instead of 1.333:1). A standard 4:3 monitor using this resolution will have rectangular rather than square pixels, meaning that unless the software compensates for this the picture will be distorted, causing circles to appear elliptical.\n", "BULLET::::- Twisted Nematic (TN): This type of display is the most common and makes use of twisted nematic-phase crystals, which have a natural helical structure and can be untwisted by an applied voltage to allow light to pass through. These displays have low production costs and fast response times but also limited viewing angles, and many have a limited color gamut that cannot take full advantage of advanced graphics cards. These limitations are due to variation in the angles of the liquid crystal molecules at different depths, restricting the angles at which light can leave the pixel.\n", "Unwanted posterization, also known as banding, may occur when the color depth, sometimes called bit depth, is insufficient to accurately sample a continuous gradation of color tone. As a result, a continuous gradient appears as a series of discrete steps or bands of color — hence the name. When discussing fixed pixel displays, such as LCD and plasma televisions, this effect is referred to as false contouring. Additionally, compression in image formats such as JPEG can also result in posterization when a smooth gradient of colour or luminosity is compressed into discrete quantized blocks with stepped gradients. The result may be compounded further by an optical illusion, called the Mach band illusion, in which each band appears to have an intensity gradient in the direction opposing the overall gradient. This problem may be resolved, in part, with dithering.\n", "It is a common misconception that Gouraud shading is any interpolation of colors between vertices. For example perspective correct interpolation. The original paper makes it clear Gouraud shading is specifically linear interpolation of color between vertices. By default most modern GPUs use perspective correct interpolation between vertices which produces a different result than Gouraud shading. The differences will be especially pronounced on polygons stretching deep into the view where the differences between linear interpolation and perspective correct interpolation will be more pronounced.\n\nSection::::Mach Bands.\n", "BULLET::::2. 
The LCD moves around two axes which are at a right angle to each other, so that the screen both tilts and swivels. This type is called \"swivel screen\". Other names for this type are \"vari-angle screen\", \"fully articulated screen\", \"fully articulating screen\", \"rotating screen\", \"multi-angle screen\", \"variable angle screen\", \"flip-out-and-twist screen\", \"twist-and-tilt screen\" and \"swing-and-tilt screen\".\n", "Modern arcade emulators are able to handle this difference in screen orientation by dynamically changing the screen resolution to allow the portrait oriented game to resize and fit a landscape display, showing wide empty black bars on the sides of the portrait-on-landscape screen.\n\nPortrait orientation is still used occasionally within some arcade and home titles (either giving the option of using black bars or rotating the display), primarily in the vertical shoot 'em up genre due to considerations of aesthetics, tradition and gameplay.\n\nSection::::Modern display rotation methods.\n", "Photographs of a TV screen taken with a digital camera often exhibit moiré patterns. Since both the TV screen and the digital camera use a scanning technique to produce or to capture pictures with horizontal scan lines, the conflicting sets of lines cause the moiré patterns. To avoid the effect, the digital camera can be aimed at an angle of 30 degrees to the TV screen.\n\nSection::::Implications and applications.:Marine navigation.\n", "BULLET::::- Test cards including large circles were used to confirm the linearity of the set's deflection systems. As solid-state components replaced vacuum tubes in receiver deflection circuits, linearity adjustments were less frequently required (few newer sets have user-adjustable \"VERT SIZE\" and \"VERT LIN\" controls, for example). In LCD and other deflectionless displays, the linearity is a function of the display panel's manufacturing quality; for the display to work, the tolerances will already be far tighter than human perception.\n", "BULLET::::- Spatial performance: For a computer monitor or some other display that is being viewed from a very close distance, resolution is often expressed in terms of dot pitch or pixels per inch, which is consistent with the printing industry. Display density varies per application, with televisions generally having a low density for long-distance viewing and portable devices having a high density for close-range detail. The Viewing Angle of an LCD may be important depending on the display and its usage, the limitations of certain display technologies mean the display only displays accurately at certain angles.\n", "Ordinary interpolation without premultiplied alpha leads to RGB information leaking out of fully transparent (A=0) regions, even though this RGB information is ideally invisible. When interpolating or filtering images with abrupt borders between transparent and opaque regions, this can result in borders of colors that were not visible in the original image. Errors also occur in areas of semitransparancy because the RGB components are not correctly weighted, giving incorrectly high weighting to the color of the more transparent (lower alpha) pixels.\n" ]
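The 5:4-versus-4:3 mismatch described in the passages above is easy to quantify: when a 1280×1024 (5:4) image fills a physically 4:3 monitor, each pixel is stretched horizontally by the ratio of the two aspect ratios. A short worked check:

```python
# Pixel aspect ratio when a 5:4 image fills a 4:3 screen, per the passage above.
from fractions import Fraction

screen_aspect = Fraction(4, 3)        # physical shape of the monitor
image_aspect = Fraction(1280, 1024)   # reduces to 5:4

pixel_aspect = screen_aspect / image_aspect
print(pixel_aspect, float(pixel_aspect))  # 16/15, about 1.067

# Each pixel is drawn ~6.7% wider than it is tall, so circles render as
# ellipses unless the software compensates.
```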
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-20372
Why does our appetite disappear when we become sick?
Because you have so much mucus flowing into your stomach, you feel full and don't produce the hormones that make you feel hungry.
[ "For example, anorexia of infection is part of the acute phase response (APR) to infection. The APR can be triggered by lipopolysaccharides and peptidoglycans from bacterial cell walls, bacterial DNA, and double-stranded viral RNA, and viral glycoproteins, which can trigger production of a variety of proinflammatory cytokines. These can have an indirect effect on appetite by a number of means, including peripheral afferents from their sites of production in the body, by enhancing production of leptin from fat stores. Inflammatory cytokines can also signal to the central nervous system more directly by specialized transport mechanisms through the blood–brain barrier, via circumventricular organs (which are outside the barrier), or by triggering production of eicosanoids in the endothelial cells of the brain vasculature. Ultimately the control of appetite by this mechanism is thought to be mediated by the same factors normally controlling appetite, such as neurotransmitters (serotonin, dopamine, histamine, norepinephrine, corticotropin releasing factor, neuropeptide Y, and α-melanocyte-stimulating hormone).\n", "Infection causes a relative protein deficiency that leads to reduced weight gain or even weight loss. This due in part to a reduction in appetite. There is also a loss in digestive efficiency. Lesions in the allow a loss of protein and in addition protein is diverted to tissue repair and immune and inflammatory processes.  Protein supplementation of the diet can prevent the appearance of clinical signs which argues strongly that pathogenesis is a consequence of the relative protein deficiency.\n", "The melanocortin system is one of the mammalian body's tools to regulate food intake in a push-pull fashion. The only neurons known to release melanocortins are located in the arcuate nucleus of the hypothalmus. Accordingly, there is a subpopulation called POMC neurons and one called AgRP neurons. When POMC neurons release α-MSH, appetite is decreased. On the other hand, when AgRP neurons release AgRP, appetite is stimulated. \n", "BULLET::::3. Glucostatic hypothesis: The activity of the satiety center in the ventromedial nuclei is probably governed by the glucose utilization in the neurons. It has been postulated that when their glucose utilization is low and consequently when the arteriovenous blood glucose difference across them is low, the activity across the neurons decrease. Under these conditions, the activity of the feeding center is unchecked and the individual feels hungry. Food intake is rapidly increased by intraventricular administration of 2-deoxyglucose therefore decreasing glucose utilization in cells.\n", "BULLET::::- The parasitus (parasite) is often portrayed as a selfish liar. He is typically associated with the \"miles gloriosus\" character, and hangs upon his every word. The \"parasitus\" is primarily concerned with his own appetite, or from where he will obtain his next free meal.\n", "Gastrointestinal infection is one of the most common causes of acute nausea and vomiting. Chronic nausea may be the presentation of many gastrointestinal disorders, occasionally as the major symptom, such as gastroesophageal reflux disease, functional dyspepsia, gastroparesis, peptic ulcer, celiac disease, non-celiac gluten sensitivity, Crohn's disease, hepatitis, upper gastrointestinal malignancy, and pancreatic cancer. 
Uncomplicated \"Helicobacter pylori\" infection does not cause chronic nausea.\n\nSection::::Causes.:Food poisoning.\n", "The term \"bulimia\" comes from Greek \"boulīmia\", \"ravenous hunger\", a compound of βοῦς \"bous\", \"ox\" and λιμός, \"līmos\", \"hunger\". Literally, the scientific name of the disorder, \"bulimia nervosa\", translates to \"nervous ravenous hunger\".\n\nSection::::History.:Before the 20th century.\n", "It was not until the 2nd century B.C., when Rome had conquered Italy and monopolized the commercial and road networks, that a huge diversity of products flooded the capital and began to change their diet, and by association, the diet of Italy most notably with the more frequent inclusion of meats, including as a stock for soups.\n\nSpelt flour was also removed from soups, as bread had been introduced into the Roman diet by the Greeks, and \"pulte\" became a meal largely for the poor.\n", "The contagion has an addiction element to it; if a victim can be isolated from a food supply, within a couple of weeks the cravings disappear and the subject is able to perform a functional role within society, albeit with necrotized flesh and near-immortality. Any additional ingestion of flesh will restore the hunger, though it can again be overcome. Another interesting note about the addiction is that, even while lacking a digestive system or even most of their lower body, the infected still have hunger pangs and wish to continue feeding.\n\nSection::::Main series.\n\nSection::::Main series.:\"Ultimate Fantastic Four\".\n", "Section::::Indirect manipulation of specific appetite.\n\nSpecific appetite can be indirectly induced under experimental circumstances. In one study, normal (sodium-replete) rats exposed to angiotensin II via infusion directly into the brain developed a strong sodium appetite which persisted for months. However, the conclusions of this experiment have been contested. Nicotine implants in rats have been shown to induce a specific appetite for sucrose, even after removal of the implants.\n\nSection::::Specific appetite in humans.\n", "If an individual patient’s susceptibility to infection increases, it is important to reassess immune function in case deterioration has occurred and a new therapy is indicated. If infections are occurring in the lung, it is also important to investigate the possibility of dysfunctional swallow with aspiration into the lungs (see above sections under Symptoms: Lung Disease and Symptoms: Feeding, Swallowing and Nutrition.)\n", "It has been suggested that, having taken into account the unusual level of decomposition, Alexander VI was accidentally poisoned to death by his son, Cesare, with cantarella (which had been prepared to eliminate Cardinal Adriano), although some commentaries doubt these stories and attribute the Pope's death to malaria, then prevalent in Rome, or to another such pestilence. The ambassador of Ferrara wrote to Duke Ercole that it was no wonder the Pope and the duke were sick because nearly everyone in Rome was ill because of bad air (\"per la mala condictione de aere\").\n", "BULLET::::- Feeding: a 2012 paper suggested that oxytocin neurons in the para-ventricular hypothalamus in the brain may play a key role in suppressing appetite under normal conditions and that other hypothalamic neurons may trigger eating via inhibition of these oxytocin neurons. 
This population of oxytocin neurons is absent in Prader-Willi syndrome, a genetic disorder that leads to uncontrollable feeding and obesity, and may play a key role in its pathophysiology.\n\nSection::::Biological function.:Psychological.\n", "A study demonstrated that when subjects received meals of similar nutritional values based either on chicken or mycoprotein, those who received mycoprotein felt less hungry in the evening, and when dinner time came, they ate less when compared to those who ate chicken. In another study with the same dynamic, these results were validated, and it was also demonstrated that food intake on the following day was also lower in quantity compared with the other group, indicating that diets with a high content of mycoprotein can have a positive effect on appetite regulation.\n\nSection::::Benefits.:Effects on the glycaemic response.\n", "This system is a target for drugs which treat obesity, diabetes and cachexia. Stimulation of the Melanocortin-4 receptor causes a decrease in appetite and an increase in metabolism of fat and lean body mass, even in a relatively starved state. Conversely, damage to this receptor has been shown to result in morbid obesity. \n\nSection::::References.\n\nBULLET::::- Cone (2005) \"Anatomy and Regulation of the Central Melanocortin System\" Nature Neuroscience 7: 1048-54\n\nBULLET::::- Daniel L. Marks, Nicholas Ling and Roger D. Cone (2001) \"Role of the Central Melanocortin System in Cachexia\" Cancer Research 61, 1432-1438\n", "Section::::Episodes.:\"Eating kids\".\n\nThe event of the suicide of the Neapolitan child is picked up by a female journalist, who is producing a report on the theme of the great child poverty in Naples. As a guest on the broadcast there is an Italian-German professor (Paolo Villaggio) who has a proposal to solve the problem. The man, drawing on a satirical article by Jonathan Swift, says that the problem of overpopulation in Naples can be solved by eating babies. Proposing various methods to cook the children, the professor says that only poor children should be eaten, because they are more succulent.\n", "From 17 January 408 to 26 April 409 he was \"praefectus urbi\" of Constantinople. Towards the end of his term, there was a shortage of food in the city, caused by delay in the shipment of grain from Alexandria to the capital, and the population revolted, burning the \"praetorium\" and dragging Monaxius' carriage around the streets. Grain supplies directed to other cities were sent to Constantinople, and the overall grain supply for the capital was re-organised. Monaxius also created an emergency fund, partially formed by senatorial contribution, to buy grain in case of shortage.\n", "There is a common misconception that ancient Romans designated spaces called \"vomitoria\" for the purpose of actual vomiting, as part of a binge and purge cycle. According to Cicero, Julius Caesar once escaped an assassination attempt because he felt ill after dinner. Instead of going to the latrine, where his assassins were waiting for him, he went to his bedroom and avoided assassination. This may be the origin of the misconception. The actual term \"vomitorium\" does not appear until the 4th century CE, about 400 years after Caesar and Cicero.\n", "Sores or ulcerations can become infected by virus, bacteria or fungus. Pain and loss of taste perception make it more difficult to eat, which leads to weight loss. 
Ulcers may act as a site for local infection and a portal of entry for oral flora that, in some instances, may cause septicaemia (especially in immunosuppressed patients). Therefore, oral mucositis can be a dose-limiting condition, disrupting a patient’s optimal cancer treatment plan and consequentially decreasing their chances of survival.\n\nSection::::Pathophysiology.\n", "Section::::Wartime.\n\nOn one occasion, when he was driven out of the Ancona hospital while tendering religious consolation to a Jewish patient, he sought out the local head of the carabinieri who immediately provided him with an escort of four gendarmes that enabled him to return to the patient's bedside. The marshall in question assured Toaff that he could call on him for help if any other problems arose.\n", "Chemotherapy often causes mucositis, severe inflammation of primarily the small intestines. Currently, there is no treatment to alleviate the symptoms of mucositis caused by chemotherapy. When rats were inflicted with mucositis by chemotherapy drugs, the intestinal tissues in those pretreated with streptococcus thermophilus TH-4 functioned more healthily and were less distressed.\n", "There appears to be a further increase in programmed cell death by \"Giardia intestinalis\", which further damages the intestinal barrier and increases permeability. There is significant upregulation of the programmed cell death cascade by the parasite, and, furthermore, substantial downregulation of the anti-apoptotic protein Bcl-2 and upregulation of the proapoptotic protein Bax. These connections suggest a role of caspase-dependent apoptosis in the pathogenesis of giardiasis.\n", "Like all other forms of giant cells Touton giant cell has pretty much the same symptoms as any of the other form of giant cells. Which include:\n\nBULLET::::- Fever\n\nBULLET::::- Weight loss\n\nBULLET::::- Fatigue\n\nBULLET::::- Loss of appetite\n\nSection::::Foreign-body giant cell.\n", "Section::::Feeding.\n", "According to Livy, writing five hundred years after the fact, Menenius was chosen by the patricians during the secession of the plebs in 494 BC to persuade the plebs to end their secession. Livy says that Menenius told the soldiers a fable about the parts of the human body and how each has its own purpose in the greater function of the body. The rest of the body thought the stomach was getting a free ride so the body decided to stop nourishing the stomach. Soon, the other parts became fatigued and unable to function so they realized that the stomach did serve a purpose and they were nothing without it. In the story, the stomach represents the patrician class and the other body parts represent the plebs. Eventually, Livy says, an accord was reached between the patricians and the plebs, which included creating the office of tribune of the plebs.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-02508
How do planes avoid collisions above the Atlantic where there is no radar?
There are airplane “highways”: assigned flight levels, usually separated by 1,000 ft. Eastbound you fly odd thousands (e.g., 31,000 ft) and westbound you fly even thousands (e.g., 32,000 ft).
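What the answer sketches is essentially ICAO's semicircular cruising-level rule, which the oceanic track system builds on. A minimal sketch, assuming a plain odd/even split by magnetic course and ignoring RVSM details and the organized North Atlantic Tracks (the function and its altitude bounds are illustrative, not an operational tool):

```python
# Minimal sketch of the semicircular cruising-level rule: eastbound courses
# (000-179 degrees magnetic) use odd thousands of feet, westbound courses
# (180-359) use even thousands. Real oceanic procedures layer organized
# tracks and RVSM spacing on top of this basic split.

def available_levels(course_deg, floor_ft=29_000, ceiling_ft=41_000):
    """List the cruising altitudes the odd/even rule allows for a course."""
    eastbound = 0 <= course_deg % 360 < 180
    wanted_parity = 1 if eastbound else 0  # odd thousands east, even west
    return [alt for alt in range(floor_ft, ceiling_ft + 1, 1_000)
            if (alt // 1_000) % 2 == wanted_parity]

print(available_levels(90))   # eastbound: 29000, 31000, ..., 41000
print(available_levels(270))  # westbound: 30000, 32000, ..., 40000
```

Because opposite-direction traffic is always on levels 1,000 ft apart, two aircraft following the rule cannot meet head-on at the same altitude even with no radar watching them.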
[ "Section::::Efforts to prevent collisions.\n\nSection::::Efforts to prevent collisions.:TCAS.\n\nAlmost all modern aircraft are fitted with TCAS, which is designed to try to prevent mid-air collisions. The system, based on the signals from aircraft transponders, alerts pilots if a potential collision with another aircraft is imminent. Despite its limitations, it is believed to have greatly reduced the chance of a mid-air collision.\n\nSection::::Efforts to prevent collisions.:Civilian/military mid-air collisions.\n", "BULLET::::- Air traffic control using surveillance based separation standards will be possible over water, in areas that radar does not currently cover. Currently, air traffic control uses the larger procedural separation standard in oceanic and remote areas.\n\nBULLET::::- As is currently possible in radar covered areas, a position history will be available for lost aircraft, as in the case of Malaysia Airlines Flight 370.\n", "Section::::Current systems.\n\nMany countries have developed their own AEW&C systems, although the Boeing E-3 Sentry and Northrop Grumman E-2 Hawkeye are the most common systems worldwide. The E-3 Sentry was built by the Boeing Defense and Space Group (now Boeing Defense, Space & Security) and was based on the Boeing 707-320 aircraft. Sixty-five E-3s were built and it is operated by the United States, NATO, the United Kingdom, France, and Saudi Arabia. For the Japan Air Self-Defense Force, the E-3 technology has been fitted into the Boeing E-767.\n", "To combat these effects most recently, great emphasis is placed upon software solutions. It is highly likely that one of those software algorithms was the proximate cause of a mid-air collision recently, as one airplane was reported at showing its altitude as the pre-flight paper filed flight plan, and not the altitude assigned by the ATC controller (see the reports and observations contained in the below reference ATC Controlled Airplane Passenger Study of how radar worked).\n\nSee the reference section below for errors in performance standards for ATCRBS transponders in the US.\n", "The techniques used for navigation in the air will depend on whether the aircraft is flying under visual flight rules (VFR) or instrument flight rules (IFR). In the latter case, the pilot will navigate exclusively using instruments and radio navigation aids such as beacons, or as directed under radar control by air traffic control. In the VFR case, a pilot will largely navigate using \"dead reckoning\" combined with visual observations (known as pilotage), with reference to appropriate maps. This may be supplemented using radio navigation aids or satellite based positioning systems.\n\nSection::::Route planning.\n", "Modern aircraft can use several types of collision avoidance systems to prevent unintentional contact with other aircraft, obstacles, or the ground:\n\nBULLET::::- Airborne radar can detect the relative location of other aircraft, and has been in military use since World War II, when it was introduced to help night fighters (such as the de Havilland Mosquito and Messerschmitt Bf 110) locate bombers. While larger civil aircraft carry weather radar, sensitive anti-collision radar is rare in non-military aircraft.\n", "BULLET::::- March 5 – at Saint-Forget, France a Socata Rallye MS.892 (registered as F-BLSO) collided midair with a Cessna F150 (registered as F-BSIQ) killing the instructor and student pilot in the latter aircraft. 
After investigation, the BEA called for obligatory use of transponders in a large zone around Paris.\n\nBULLET::::- July 21 – a South African registered aircraft, carrying 12 passengers and two crew, crashed into Mount Kenya: there were no survivors.\n\nBULLET::::- December 17 – Scaled Composites SpaceShipOne suffered a collapsed landing gear and a runway excursion during a freefall flight prior to its space launches.\n\nSection::::2004.\n", "BULLET::::- Air France Flight 1611 — crashed into the Mediterranean sea off Nice.\n\nBULLET::::- Air India Flight 182 — was flying over the Atlantic Ocean in west of Ireland when a bomb placed by Sikh extremist exploded in the cargo hold. The Boeing 747 disintegrated and plunged into the Atlantic killing everyone aboard in Ireland's deadliest aviation disaster.\n\nBULLET::::- American Airlines Flight 63 — Attempted bombing over Atlantic Ocean, terrorist was restrained and sedated.\n", "Commercial air transport crews routinely encounter this type of storm in this area. With the aircraft under the control of its automated systems, one of the main tasks occupying the cockpit crew was that of monitoring the progress of the flight through the ITCZ, using the on-board weather radar to avoid areas of significant turbulence. Twelve other flights had recently shared more or less the same route that Flight 447 was using at the time of the accident.\n\nSection::::Search and recovery.\n\nSection::::Search and recovery.:Surface search.\n", "BULLET::::- Most airports with scheduled airline service now have a surrounding controlled airspace (ICAO designation Class B or Class C) for improved IFR/VFR traffic separation - all aircraft must be transponder equipped and in communication with ATC to operate within this controlled airspace\n\nBULLET::::- Most commercial/air-carrier aircraft (and some general aviation) now have an airborne collision avoidance or TCAS device on board, that can detect and warn about nearby transponder-equipped traffic\n\nBULLET::::- ATC radar systems now have \"conflict alert\" - automated ground-based collision avoidance software that sounds an alarm when aircraft come within less than a minimum safe separation distance\n", "To supplement air traffic control, most large transport aircraft and many smaller ones use a traffic alert and collision avoidance system (TCAS), which can detect the location of nearby aircraft, and provide instructions for avoiding a midair collision. Smaller aircraft may use simpler traffic alerting systems such as TPAS, which are passive (they do not actively interrogate the transponders of other aircraft) and do not provide advisories for conflict resolution.\n", "BULLET::::- The Washington Post; December 26, 1927; New York, December 25, 1927 (Associated Press) Mrs. 
Frances Wilson Grayson, who has been missing since she took off Friday with three companions for Harbor Grace, Newfoundland, was preparing to undertake her fourth attempt within three months to fly the Atlantic in her Sikorsky amphibian plane, the Dawn.\n\nBULLET::::- The New York Times; December 26, 1927, page 1; \"Grayson Plane Radioed 'Something Wrong' Friday Night; Then the Signaling Ceased, Silent for 54 Hours Since; Probably Lost Off The Nova Scotia Coast In A Storm\"\n", "Once the black boxes and communication transcripts were obtained, the investigators interviewed the Legacy jet's flight crew and the air traffic controllers, trying to piece together the scenario which allowed two modern jet aircraft, equipped with the latest anti-collision gear, to collide with each other while on instrument flights in positive control airspace.\n", "At the time of the accident, the Ajaccio airport had no radar system. As a direct result of the accident, the equipment was upgraded and the approach pattern changed.\n\nSection::::2008 clean-up operation.\n", "BULLET::::- 2011 Friboug near-collision, involving Germanwings Airbus A319 Flight 2529 and Hahn-Air-Lines Raytheon Premier I Flight 201. Air traffic control at Geneva allowed flight 2529 to sink to flight level 250 but entered flight level 280 as usual for handover to traffic control at Zurich. Air traffic control at Zurich allowed flight 201 to climb to flight level 270. This triggered a resolution advisory for the Airbus to sink and for the Raytheon to climb which was followed by both aircraft. Nine seconds later Geneva instructed the Raytheon to sink to flight level 260 which they followed now. It led to a situation where both planes passed at 100 feet minimum distance. Shortly later the Raytheon was lower than the Airbus and TCAS issued a reversal RA for the Airbus to climb and for the Raytheon to sink.\n", "BULLET::::- 2019 near collision between a Boeing 777-328(ER) and an Airbus A320-232 over Mumbai airspace. The Boeing AF 253 operated by Air France was flying from Ho Chi Minh City to Paris at a flight level 320 while the Airbus EY 290 operated by Etihad Airways was flying from Abu Dhabi to Kathmandu at FL 310. After a TCAS activation the ATC ordered the Etihad to climb to FL330.\n\nSection::::Overview.\n\nSection::::Overview.:System description.\n", "Finally, an aircraft may be supervised from the ground using surveillance information from e.g. radar or multilateration. ATC can then feed back information to the pilot to help establish position, or can actually tell the pilot the position of the aircraft, depending on the level of ATC service the pilot is receiving.\n", "The techniques used for navigation in the air will depend on whether the aircraft is flying under the visual flight rules (VFR) or the instrument flight rules (IFR). In the latter case, the pilot will navigate exclusively using instruments and radio navigation aids such as beacons, or as directed under radar control by air traffic control. In the VFR case, a pilot will largely navigate using dead reckoning combined with visual observations (known as pilotage), with reference to appropriate maps. This may be supplemented using radio navigation aids.\n\nSection::::Guidance, navigation and control.:Guidance.\n", "The 1948 S.O.L.A.S. International Conference made several recommendations, including the recognition of radar these were eventually ratified in 1952 and became effective in 1954. Further recommendations were made by a S.O.L.A.S. 
Conference in London in 1960 which became effective in 1965\n", "In Canada, the Transportation Safety Board of Canada (TSB), is an independent agency responsible for the advancement of transportation safety through the investigation and reporting of accident and incident occurrences in all prevalent Canadian modes of transportation — marine, air, rail and pipeline.\n\nSection::::Investigation.:France.\n\nIn France, the agency responsible for investigation of civilian air crashes is the Bureau d'Enquêtes et d'Analyses pour la Sécurité de l'Aviation Civile (BEA). Its purpose is to establish the circumstances and causes of the accident and to make recommendations for their future avoidance.\n\nSection::::Investigation.:Germany.\n", "BULLET::::- Dan-Air Flight 1903, the de Havilland DH 106 Comet series 4 was approaching Barcelona Airport when it flew into the woods of Serralada del Montseny near Girona, killing everyone on board in the deadliest aircraft accident involving a De Havilland Comet series.\n\nSection::::Europe.:Sweden.\n\nBULLET::::- 1970 Spantax Convair crash, the aircraft crashed shortly after take-off from Stockholm Arlanda Airport, killing 5 people out of 10.\n", "To start the cycle, an interrogation is sent out from ground-based RADAR stations and/or TCAS or other actively interrogating systems in your area. This signal is sent on 1,030 MHz. For TCAS, this interrogation range can have a radius of 40 miles from the interrogation source. The Ground RADAR range can be 200 miles or more.\n\nSection::::Detailed operation.:Step 2.\n\nThe transponder on any aircraft within range of the interrogation replies on 1090 MHz with their squawk code (known as mode A) and altitude code (or mode C). The altitude information is sent in an encoded format.\n", "After a short flight from Budapest, the Tupolev started descent to its destination in very good weather conditions. The flight path followed the mountains and was only above the hilltops at times. The ground proximity warning system (GPWS) system, detecting such a low height, constantly warned the crew to lower the undercarriage. Disturbed by the ever sounding horn, the flight crew switched the system off.\n", "BULLET::::- Two own wide-ranging secondary radar stations, also referred to as «en route»- radar stations, with locations above the Zurich community Boppelsen on the Jura hillside Lägern and on the La Dôle\n\nBULLET::::- Wide-ranging secondary radar dates from the Swiss Air Force FLORAKO radar (\"TG\") in Ticino on Mt.Scopi\n\nBULLET::::- Two own combined primary and secondary radar, also referred as «Approach» radar stations, at the airports of Geneva (in Cointrin) and Zurich (on the Klotener Holberg) for landing and take-off guidance.\n", "The flight AZ 112 contacted Palermo Approach around 9:10 PM stating to be at from VOR (which is installed on Mount Gradara, above the municipality of Borgetto, with a frequency of 112.3 MHz, around south of the airport of Punta Raisi).\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-17242
How do limes reproduce if there are no seeds in them? Why do lemons have so many seeds versus limes that typically have none?
All fruits naturally have seeds in them. However, over thousands of years, humans have discovered mutant trees that produce fruit with few or no seeds. Plants are special in that if you cut off a branch, replant it and treat it the right way, it will grow into a fully functioning plant again. It's like if I chopped off your finger and it grew into a clone of you. These cloned plants produce the seedless fruit we eat today. We just keep chopping bits off and propagating them.
[ "BULLET::::- Australian limes (former \"Microcitrus\" and \"Eremocitrus\")\n\nBULLET::::- Australian desert lime (\"Citrus glauca\")\n\nBULLET::::- Australian finger lime (\"Citrus australasica\")\n\nBULLET::::- Australian lime (\"Citrus australis\")\n\nBULLET::::- Blood lime (red finger lime × (sweet orange × mandarin) )\n\nBULLET::::- Kaffir lime (\"Citrus hystrix\"); a papeda relative, is one of three most widely produced limes globally.\n\nBULLET::::- Key lime (\"Citrus\" × \"aurantifolia\"=\"Citrus micrantha\" × \"Citrus medica\") is also one of three most widely produced limes globally.\n\nBULLET::::- Musk lime (calamondin, \"Citrofortunella mitis\"), a kumquat × mandarin hybrid\n", "BULLET::::- Persian lime (\"Citrus\" × \"latifolia\") a key lime × lemon hybrid, is the single most widely produced lime globally, with Mexico being the largest producer.\n\nBULLET::::- Rangpur lime (Mandarin lime, lemandarin, \"Citrus limonia\"), a mandarin orange × citron hybrid\n\nBULLET::::- Spanish lime (\"Melicoccus bijugatus\"); not a citrus\n\nBULLET::::- Sweet lime etc. (\"Citrus limetta\", etc.); several distinct citrus hybrids\n\nBULLET::::- Wild lime (\"Adelia ricinella\"); not a citrus\n\nBULLET::::- Wild lime (\"Zanthoxylum fagara\"); not a citrus\n\nBULLET::::- Limequat (lime × kumquat)\n", "The Key lime has given rise to several other lime varieties. The best known, the triploid progeny of a Key lime-lemon cross, is the Persian lime (\"Citrus × latifolia\"), the most widely produced lime, globally. Others are, like their parent, classed within \"C. aurantiifolia\". Backcrossing with citron has produced a distinct group of triploid limes that are also of commercial value to a limited degree, the seedy Tanepeo, Coppenrath, Ambilobe and Mohtasseb lime varieties as well as the Madagascar lemon. Hybridization with a mandarin-pomelo cross similar to the oranges has produced the Kirk lime. The New Caledonia and Kaghzi limes appear to have resulted from an F2 Key lime self-pollination, while a spontaneous genomic duplication gave us the tetraploid Giant Key lime. The potential to produce a wider variety of lime hybrids from the Key lime due to its tendency to form diploid gametes may reduce the disease risk presented by the limited diversity of the current commercial limes.\n", "Australian and New Guinean citrus species had been viewed as belonging to separate genera by Swingle, who placed in \"Microcitrus\" all but the desert lime, which he assigned to \"Eremocitrus\". However, genomic analysis shows that though they form a distinct clade from other citrus, this is nested within the citrus phylogenetic tree, most closely related to kumquats, suggesting that all these species should be included in the genus \"Citrus\". Wu, \"et al.\", found that several of the finger lime cultivars were actually hybrids with round lime, and concluded there were just three species among those tested, desert lime (\"C. glauca\"), round lime (\"C. australis\") and the finger lime (\"C. australasica\"), though their analysis did not include other types previously identified as distinct species. In more limited genomic analysis, the New Guinea wild lime, \"Clymenia\" and \"Oxanthera\" (false orange) all cluster with the Australian limes as members of \"Citrus\". The outback lime is a desert lime agriculturally-selected for more commercial traits, while some commercial varieties of the Australian lime are hybrids with mandarins, lemons, and/or sweet oranges. 
\"Clymenia\", will hybridize with kumquats and some limes.\n", "Finally, there is the mechanism of \"facultative\" abortion of fruits, where a maternal plant without the resources to mature all fruit aborts the least vigorous ones. This is thought to be common in those taxa that are generally self-compatible, since even these have high outcrossing rates. For example, \"Banksia spinulosa\" var. \"neoanglica\", one of the most self-compatible \"Banksia\" species, has been shown to set far more cross-pollinated than self-pollinated fruit.\n", "While most other citrus are diploid, many of the Key lime hybrid progeny have unusual chromosome numbers. For example, the Persian lime is triploid, deriving from a diploid Key lime gamete and a haploid lemon ovule. A second group of Key lime hybrids, including the Tanepao lime and Madagascar lemon, are also triploid but instead seem to have arisen from a backcross of a diploid Key lime ovule with a citron haploid gamete. The 'Giant Key lime' owes its increased size to a spontaneous duplication of the entire diploid Key lime genome to produce a tetraploid.\n", "BULLET::::- African shaddock X trifoliate hybrid\n\nBULLET::::- Benton citrange trifoliate hybrid\n\nBULLET::::- Borneo Rangpur lime\n", "It is often advisable to graft the plants onto rootstocks with low susceptibility to gummosis, because seedlings generally are highly vulnerable to the disease. Useful rootstocks include wild grapefruit, cleopatra mandarin and tahiti limes. \"C. macrophylla\" is also sometimes used as a rootstock in Florida to add vigor.\n\nClimatic conditions and fruit maturation are crucial in cultivation of the lime tree. Under consistently warm conditions potted trees can be planted at any season, whereas in cooler temperate regions it is best to wait for the late winter or early spring.\n", "BULLET::::- \"Citrus maideniana\" (Maiden's Australian wild lime) may be a subspecies of \"C. indora\".\n\nSection::::Species from Australia.:Cultivars.\n\nA number of cultivars have been developed in recent years. These can be grafted on to standard citrus rootstocks. They may be grown as ornamental trees in the garden or in containers.\n\nGrafted standards are available for some varieties. The cultivars include:\n\nBULLET::::- 'Australian Outback' (or 'Australian Desert'), developed from several desert lime varieties\n\nBULLET::::- 'Australian Red Centre' (or 'Australian Blood' or Blood Lime), a cross of finger lime and a mandarin-lemon or mandarin-sweet orange hybrid\n", "Common lemons are the product of orange/citron hybridization, and hence have pomelo ancestry, and although Key limes are papeda/citron hybrids, the more commercially prevalent Persian limes and similar varieties are crosses of the Key lime with lemons, and hence likewise have pomelo ancestry. These limes can also inhibit drug metabolism. Other less-common citrus species also referred to as lemons or limes are genetically distinct from the more common varieties, with different proportions of pomelo ancestry.\n\nSection::::Affected fruit.:Citrus genetics and interactions.:Inaccurate labeling.\n", "BULLET::::- Limes: A highly diverse group of hybrids go by this name. Rangpur limes, like rough lemons, arose from crosses between citron and mandarin. 
The sweet limes, so-called due to their low acid pulp and juice, come from crosses of citron with either sweet or sour oranges, while the Key lime arose from a cross between a citron and a micrantha.\n", "Research conducted in the 1970s indicated that a wild selection of \"C. australasica\" was highly resistant to \"Phytophthora citrophthora\" root disease, which has resulted in a cross-breeding program with finger lime to develop disease-resistant citrus rootstock.\n", "The difficulty in identifying exactly which species of fruit are called lime in different parts of the English-speaking world (and the same problem applies to homonyms in other European languages) is increased by the botanical complexity of the citrus genus itself, to which the majority of limes belong. Species of this genus hybridise readily, and it is only recently that genetic studies have started to throw light on the structure of the genus. The majority of cultivated species are in reality hybrids, produced from the citron (\"Citrus medica\"), the mandarin orange (\"Citrus reticulata\"), the pomelo (\"Citrus maxima\") and in particular with many lime varieties, the micrantha (\"Citrus micrantha\").\n", "Due to the sterility of many of the genetic hybrids as well as disease- or temperature-sensitivity of some \"Citrus\" trees, domesticated citrus cultivars are usually propagated via grafting to the rootstock of other, often hardier though less palatable citrus or close relatives. As a result, graft hybrids, also called graft-chimaeras, can occur in \"Citrus\". After grafting, the cells from the scion and rootstock are not somatically fused, but rather the cells of the two intermix at the graft site, and can produce shoots from the same tree that bear different fruit. For example, the 'Faris' lemon, has some branches with purple immature leaves and flowers with a purple blush that give rise to sour fruit, while other branches produce genetically-distinct sweet lemons coming from white flowers, with leaves that are never purple. Graft hybrids can also give rise to an intermixed shoot that bears fruit with a combination of the characteristics of the two contributing species due to the presence of cells from both in that fruit. In an extreme example, on separate branches Bizzaria produces fruit identical to each of the two contributing species, but also fruit that appears to be half one species and half the other, unmixed. In taxonomy, graft hybrids are distinguished from genetic hybrids by designating the two contributing species with a '+' between the individual names (\"Citrus medica\" + \"C. aurantium\").\n", "BULLET::::- Lemon – \"Citrus \" ×\"limon\" (\"C. medica\" × \"C.\" ×\"aurantium\")\n\nBULLET::::- Key lime, Mexican lime, Omani lime – \"Citrus\" ×\"aurantiifolia\" (\"C. medica\" × \"C. micrantha\")\n\nBULLET::::- Limetta, Sweet Lemon, Sweet Lime, mosambi – \"Citrus\" ×\"limetta\" (\"C. 
medica\" × \"C.\" ×\"aurantium\")\n\nBULLET::::- Lumia – a pear shaped lemon hybrid, (several distinct hybrids)\n\nBULLET::::- Persian lime, Tahiti lime – \"C.\" x\"latifolia\" (\"C.\" x\"aurantiifolia\" x \"C.\" x\"limon\")\n\nBULLET::::- Rhobs el Arsa – bread of the garden, a Moroccan citron x lemon hybrid.\n\nBULLET::::- Yemenite citron – a pulpless true citron.\n\n\"Citrus reticulata\"–based\n", "Limequats are more cold hardy than limes but less cold-hardy than kumquats.\n\nSection::::Varieties.\n\nThere are three different named cultivars of limequats: \n\nBULLET::::- Eustis (\" Citrus aurantiifolia\") - Key lime crossed with round kumquat, the most common limequat. It was named after the city of Eustis, Florida.\n\nBULLET::::- Lakeland (\"Citrus japonica Citrus aurantiifolia\") - Key lime crossed with round kumquat, different seed from same hybrid parent as Eustis. Fruit is slightly larger and contains a few fewer seeds than Eustis. It was named after the city of Lakeland, Florida.\n", "The method of cultivation greatly affects the size and quality of the harvest. Trees cultivated from seedlings take 4–8 years before producing a harvest. They attain their maximal yield at about 10 years of age. Trees produced from cuttings and air layering bear fruit much sooner, sometimes producing fruit (though not a serious harvest) a year after planting. It takes approximately 9 months from the blossom to the fruit.\n", "Lime (fruit)\n\nA lime (from French \"lime\", from Arabic \"līma\", from Persian \"līmū\", \"lemon\") is a citrus fruit, which is typically round, green in color, in diameter, and contains acidic juice vesicles.\n\nThere are several species of citrus trees whose fruits are called limes, including the Key lime (\"Citrus aurantifolia\"), Persian lime, kaffir lime, and desert lime. Limes are a rich source of vitamin C, are sour, and are often used to accent the flavours of foods and beverages. They are grown year-round. 
Plants with fruit called \"limes\" have diverse genetic origins; limes do not form a monophyletic group.\n", "BULLET::::- 'Australian Sunrise', a hybrid cross of finger lime and a calomondin which is pear shaped and orange inside\n\nBULLET::::- 'Rainforest Pearl', a pink-fruited form of finger lime from Bangalow, New South Wales\n\nBULLET::::- 'Sunrise Lime ', parentage unknown\n\nBULLET::::- 'Outback Lime', a desert lime cultivar\n\nSection::::Species from Papua New Guinea.\n\nBULLET::::- \"Citrus warburgiana\" (Kakamadu or New Guinea wild lime) grows on the south coast of the Papuan Peninsula near Alatau (pictures).\n\nBULLET::::- \"Citrus wakonai\" (also locally called kakamadu) has been reported from Goodenough Island.\n", "BULLET::::- \"Citrus australasica\" – Australian finger lime\n\nBULLET::::- \"Citrus australis\" – Australian round lime\n\nBULLET::::- \"Citrus glauca\" – Australian desert lime\n\nBULLET::::- \"Citrus garrawayae \" – Mount White lime\n\nBULLET::::- \"Citrus gracilis\" – Kakadu lime or Humpty Doo lime\n\nBULLET::::- \"Citrus inodora\" – Russel River lime\n\nBULLET::::- \"Citrus warburgiana \" – New Guinea wild lime\n\nBULLET::::- \"Citrus wintersii \" – Brown River finger lime\n\nBULLET::::- Papedas, including\n\nBULLET::::- \"Citrus halimii\" – \"limau kadangsa\", \"limau kedut kera\", from Thailand and Malaya\n\nBULLET::::- \"Citrus hystrix\" – Kaffir lime, \"makrut\"\n\nBULLET::::- \"Citrus ichangensis\" – Ichang papeda\n", "Section::::Cultivation.\n\nCitrus trees hybridise very readily – depending on the pollen source, plants grown from a Persian lime's seeds can produce fruit similar to grapefruit. Thus, all commercial citrus cultivation uses trees produced by grafting the desired fruiting cultivars onto rootstocks selected for disease resistance and hardiness.\n", "Under the Swingle system, the desert lime was classified in the genus \"Eremocitrus\", a close relative of the genus \"Citrus\". More recent taxonomy considers all the Australian limes to be included in the genus \"Citrus\", and most authorities treat the desert lime this way. \"Citrus glauca\" is one of the most resilient \"Citrus\" species, and is comparatively heat, drought, and cold tolerant. Hence the species is potentially important for \"Citrus\" breeding programs, and readily hybridises with many common Citrus species.\n\nSection::::Description.\n", "This plant is now grown in Japan, Israel, Spain, Malaysia, South Africa, the United Kingdom and the United States in California, Florida, and Texas. The fruit can be found, in small quantities, during the fall and winter months in the United States, India and Japan.\n\nLimequats can be grown indoors or outdoors providing the temperature stays between 10 °C to 30 °C (50 °F to 86 °F). They are fairly small and can be planted in containers or pots, in well-drained fertile soil. Plants grow fairly slowly and flower and fruit between 5–7 months and rest for 7–5 months.\n", "Persian lime\n\nPersian lime (\"Citrus × latifolia\"), also known by other common names such as seedless lime, Bearss lime and Tahiti lime, is a citrus fruit species of hybrid origin, known only in cultivation. The Persian lime is a triploid cross between key lime (\"Citrus × aurantiifolia\") and lemon (\"Citrus\" \"limon\").\n", "Section::::Mating.\n\nSection::::Mating.:Mate searching behavior.\n" ]
[ "Seeds are required for a plant to reproduce." ]
[ "Plants can reproduce by having a cut branch planted, not only by planting seeds." ]
[ "false presupposition" ]
[ "Seeds are required for a plant to reproduce." ]
[ "false presupposition" ]
[ "Plants can reproduce by having a cut branch planted, not only by planting seeds." ]
2018-01024
How does it work when you sync those buttons in your car to transponders (e.g. garage door openers)?
The garage door opener responds to a particular code. You can set the buttons in your car to produce that code. Push the button and the garage door opens.
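As a toy model of that sync step (a hypothetical sketch; this is not HomeLink's or any manufacturer's actual protocol, and every name below is invented), a fixed-code receiver just stores the code it was trained with and compares every later transmission against it:

```python
class GarageReceiver:
    """Toy fixed-code garage receiver: opens only for the code it learned."""

    def __init__(self):
        self.learned_code = None

    def learn(self, code):
        # The "sync" step: while in learn mode, store the car button's code.
        self.learned_code = code

    def on_signal(self, code):
        return "open door" if code == self.learned_code else "ignore"


receiver = GarageReceiver()
receiver.learn(0b1011001110)             # train it with the visor button's code
print(receiver.on_signal(0b1011001110))  # -> open door
print(receiver.on_signal(0b1111111111))  # -> ignore (wrong code)
```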
[ "A transponder system is a system which is always armed until a device, usually a small RFID transponder, enters the vehicle's transmitter radius. Since the device is carried by the driver, usually in their wallet or pocket, if the driver leaves the immediate vicinity of the vehicle, so will the transponder, causing the system to assume the vehicle has been hijacked and disable it. \n", "Light aircraft electrical systems are typically 14 V or 28 V. To allow seamless integration with either, the encoder uses a number of open-collector (open-drain) transistors to interface to the transponder. The height information is represented as 11 binary digits in a parallel form using 11 separate lines designated D2 D4 A1 A2 A4 B1 B2 B4 C1 C2 C4. As a twelfth bit, the Gillham code contains a D1 bit but this is unused and consequently set to zero in practical applications.\n", "Section::::History.\n\nIn 1959, Nobel Economics Prize winner William Vickrey was the first to propose a system of electronic tolling for the Washington metropolitan area. He proposed that each car would be equipped with a transponder. The transponder's personalized signal would be picked up when the car passed through an intersection and then relayed to a central computer which would calculate the charge according to the intersection and the time of day and add it to the car's bill.\n", "The system works by having a series of LF (low frequency 125 kHz) transmitting antennas both inside and outside the vehicle. The external antennas are located in the door handles. When the vehicle is triggered, either by pulling the handle or touching the handle, an LF signal is transmitted from the antennas to the key. The key becomes activated if it is sufficiently close and it transmits its ID back to the vehicle via RF (Radio frequency 300 MHz) to a receiver located in the vehicle. If the key has the correct ID, the PASE module unlocks the vehicle.\n", "This was the Y-Control System. It was based around the work done for the Y-Gerät system of bomber guidance. There were differences in frequency and a dedicated transponder was not required. Initially it used a modified FuG17E radio but this was soon replaced by an add on to the standard radio FuG16Z turning it into the FuG16ZY.\n", "If an RGPO jammer responds to such a signal by sending out the same frequency it received, this additional signal will be sent into the same filter, adding to the original signal and making it stronger. It the transponder instead responds at a fixed frequency, it will fall into a different filter and can be easily distinguished. In either case, the original target return remains locked-on.\n", "If a suitable chartplotter is not available, local area AIS transceiver signals may be viewed via a computer using one of several computer applications such as ShipPlotter and Gnuais. These demodulate the signal from a modified marine VHF radiotelephone tuned to the AIS frequencies and convert into a digital format that the computer can read and display on a monitor; this data may then be shared via a local or wide area network via TCP or UDP protocols but will still be limited to the collective range of the radio receivers used in the network.\n", "Section::::Information transmission.:Transponder.\n", "As the transponder itself is concealed, the thief would not be aware that such a system is active on a vehicle until they had ejected the driver and moved the vehicle out of range of the driver (usually only a couple of meters). 
This is probably the most common anti-hijack system, and a central locking system that uses the same concept was demonstrated by Jeremy Clarkson on an old episode of the BBC Top Gear program where he teased a butler by asking him to put his bags in a Mercedes-Benz S600 but didn't give him the RFID transponder. The butler was confused when the S600 doors wouldn't open when he tried, but when Jeremy approached with the transponder in his pocket, the system acknowledged this and unlocked the car, allowing Jeremy to simply pull the door handle to gain entry to the vehicle.\n", "Each Autopass unit features a move detect mechanism. When the unit is removed from the windscreen, an electrical switch will be activated, causing a flag to be set in a processor within the Autopass unit. This flag will be registered when doing a tolling transaction the next time the unit passes a toll plaza.\n\nSection::::Obligatory tag for heavy vehicles.\n", "In 1959, Nobel Economics Prize winner William Vickrey was the first to propose a system of electronic tolling for the Washington Metropolitan Area. He proposed that each car would be equipped with a transponder: \"The transponder's personalised signal would be picked up when the car passed through an intersection, and then relayed to a central computer which would calculate the charge according to the intersection and the time of day and add it to the car’s bill.\" In the 1960s and the 1970s, free flow tolling was tested with fixed transponders at the undersides of the vehicles and readers, which were located under the surface of the highway. Modern toll transponders are typically mounted under the windshield, with readers located in overhead gantries.\n", "A lockout system is armed when the driver turns the ignition key to the \"on\" position and carries out a specified action, usually flicking a hidden switch or depressing the brake pedal twice. It is activated when the vehicle drops below a certain speed or becomes stationary, and will cause all of the vehicles doors to automatically lock, to prevent against thieves stealing the vehicle when it is stopped, for example at a traffic light or pedestrian crossing.\n\nSection::::Technology.:Transponder.\n", "Section::::Products and services.:Security.\n\nAt the Chaos Computer Club Congress on December 28, 2009, Karsten Nohl and Henryk Plötz reported that LEGIC prime transponders can be produced without being in possession of an authorized Master-Token. Hereby, these transponders can be copied and modified by non-authorized people. The manufacturer added the system description with the information “Basic security with focus on organization and comfort”. \n", "The radio transponder, also known as a tag, can be obtained from the office of the toll plazas, authorized major banks, mobile points of sales and on the internet as well, after signing an agreement. The tag is issued for a specific licence plate and so for a specific vehicle category. It cannot be transferred without notice to the provider. The tags slightly differ in size and form depending on the provider, and can be replaced within the guarantee period, which varies from three to five years. 
Before using OGS, the tag's account has to be credited.\n", "Section::::Operation.:Operation modes.\n\nTCAS II can be currently operated in the following modes:\n\nBULLET::::- Stand-by: Power is applied to the TCAS Processor and the mode S transponder, but TCAS does not issue any interrogations and the transponder will reply to only discrete interrogations.\n\nBULLET::::- Transponder: The mode S transponder is fully operational and will reply to all appropriate ground and TCAS interrogations. TCAS remains in stand-by.\n", "When the vehicle is started, the on-board computer sends out an RF signal which is picked up by the transponder in the key. The transponder then returns a unique RF signal to the vehicle's computer, giving it confirmation to start and continue to run. This all happens in less than a second. If the on-board computer does not receive the correct identification code certain components, such as the fuel pump and on some the starter, will remain disabled.\n\nSection::::Replacement keys.\n", "Many people who have transponder keys, such as those that are part of Ford Motor Company's SecuriLock system, are not aware of the fact because the circuit is hidden inside the plastic head of the key. On the other hand, General Motors produced what are known as VATS keys (Vehicle Anti-Theft System) during the 1990s, which are often erroneously believed to be transponders but actually use a simple resistor, which is visible in the blade of the key. If the electrical resistance of the resistor is wrong, or the key is a normal key without a resistor, the circuit of the car's electrical system will not allow the engine to get started.\n", "Section::::Road.\n\nElectronic toll collection systems such as E-ZPass in the eastern United States use RFID transponders to identify vehicles. Highway 407 in Ontario is one of the world's first completely automated toll highways.\n\nSection::::Motorsport.\n\nTransponders are used in motorsport for lap timing purposes. A cable loop is dug into the race circuit near to the start/finish line. Each car has an active transponder with a unique ID code. When the racing car passes the start/finish line the lap time and the racing position is shown on the score board.\n", "The Transponder receives the two phase-coherent X-band cw signals transmitted from the ground equipment. A klystron with a 68 MHz coherent frequency offset is phase locked to each of the received signals. These klystrons provide the phase coherent return transmission. There are two separate phase locked loops, continuous and calibrate.\n\nBULLET::::- MISTRAM \"A\" Model Transponder Specifications\n\nSection::::M-236 computer.\n", "A pilot may be requested to squawk a given code by an air traffic controller, via the radio, using a phrase such as \"Cessna 123AB, squawk 0363\". The pilot then selects the 0363 code on their transponder and the track on the air traffic controller's radar screen will become correctly associated with their identity.\n", "Section::::C-Band Radar Transponder.:Transponder operation.\n", "The Galmer chassis had one unique characteristic compared to its chassis counterparts in the CART series in 1992. It was standard for all cars to mount their scoring transponder in the left side pod of the car. The Galmer chassis, however, did not have room in that location. The cars of Unser, Jr. and Sullivan instead had the transponders placed in the nosecone of the car.\n", "When the vehicle is stationary, the TPM may periodically transmit to the vehicle. 
This allows (as long as the vehicle receiver is always on) the driver or vehicle operator to be warned of low pressure as soon as the Ignition system is switched on rather than having to wait until the vehicle is moving.\n\nAll TPM units on a vehicle operate on the same RF channel frequency and each message includes pressure data, temperature data, a unique ID code, operating state data, status information and check digits. The check digit is either a checksum or a cyclic redundancy check (CRC).\n", "HomeLink Wireless Control System\n\nThe HomeLink Wireless Control System is a radio frequency (RF) transmitter integrated into some automobiles that can be programmed to activate devices such as garage door openers, RF-controlled lighting, gates and locks, including those with rolling codes.\n\nThe system features three buttons, most often found on the driver-side visor or on the overhead console, which can be programmed via a training sequence to replace existing remote controls. It is compatible with most RF-controlled garage door openers, as well as home automation systems such as those based on the X10 protocol.\n", "Section::::Versions.:TCAS IV.\n\nTCAS IV uses additional information encoded by the target aircraft in the Mode S transponder reply (i.e. target encodes its own position into the transponder signal) to generate a horizontal resolution to an RA. In addition, some reliable source of position (such as Inertial Navigation System or GPS) is needed on the target aircraft in order for it to be encoded.\n" ]
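The HomeLink passage above notes that modern openers use rolling codes rather than a single fixed code. As a hedged illustration of the idea (a toy counter-plus-MAC scheme, not KeeLoq or any real implementation; the key, window size and names are all assumptions), each button press is only valid once, so a recorded signal cannot simply be replayed:

```python
import hashlib
import hmac

SECRET = b"shared-at-pairing-time"  # hypothetical key stored by both sides at sync

def code_for(counter):
    # Each press transmits a MAC over an ever-increasing press counter.
    return hmac.new(SECRET, counter.to_bytes(4, "big"), hashlib.sha256).hexdigest()

class RollingReceiver:
    def __init__(self, window=16):
        self.expected = 0     # next counter value the receiver will accept
        self.window = window  # tolerance for presses made out of radio range

    def on_signal(self, code):
        for c in range(self.expected, self.expected + self.window):
            if hmac.compare_digest(code, code_for(c)):
                self.expected = c + 1  # this and all older codes now expire
                return "open door"
        return "ignore"

rx = RollingReceiver()
press = code_for(0)
print(rx.on_signal(press))  # -> open door
print(rx.on_signal(press))  # -> ignore (replayed code is stale)
```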
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-09418
People still make profit out of a market crisis?
The most basic rule is buy low, sell high. A market crisis is the ultimate low. That mostly only works if you didn't lose all your money in the first place, though. There are also shorts, but is that what you mean by betting against it?
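On the "shorts" point: a short seller borrows shares, sells them at today's price, and buys them back later, so the trade profits when the price falls. A worked example with made-up numbers (a sketch, not trading advice; real shorts also involve margin requirements and borrow costs):

```python
def short_sale_pnl(sell_price, buyback_price, shares, borrow_fee=0.0):
    """Profit or loss on a short: sell borrowed shares high, buy them back low."""
    return (sell_price - buyback_price) * shares - borrow_fee

# Short 100 shares at $50 before a crash, buy them back at $20 after it:
print(short_sale_pnl(50, 20, 100, borrow_fee=75))  # -> 2925 (profit)
# If the market rises instead, the loss grows with the price:
print(short_sale_pnl(50, 90, 100))                 # -> -4000.0 (loss)
```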
[ "According to this narrative, in recent decades \"economic fundamentals\" were in a poor state; nothing much was done about that, except that workers were beaten down; instead, the economy was artificially pumped up with cheap credit and cheap imports, prompting a housing and spending boom; when the credit bubble popped, the economy sagged right back into its poor state. Such arguments may have some plausibility, but if they are examined in fine detail, it is clear that a whole series of different arguments are actually being made about the way that falling profitability is connected to economic slumps. In reality, therefore, it is still far from resolved what the role of profitability in crises actually is.\n", "Section::::Objections.:Ecological.\n\nIn ecological economics, the concept of externalities is considered a misnomer, since market agents are viewed as making their incomes and profits by systematically 'shifting' the social and ecological costs of their activities onto other agents, including future generations. Hence, externalities is a \"modus operandi\" of the market, not a failure: The market cannot exist without constantly 'failing'.\n", "These large bubbles and crashes in the absence of significant changes in valuation cast doubt on the assumption of efficient markets that incorporate all public information accurately. In his book, “Irrational Exuberance”, Robert Shiller discusses the excesses that have plagued markets, and concludes that stock prices move in excess of changes in valuation. This line of reasoning has also been confirmed in several studies (e.g., Jeffrey Pontiff ), of closed-end funds which trade like stocks, but have a precise valuation that is reported frequently. (See Seth Anderson and Jeffrey Born “Closed-end Fund Pricing” for review of papers relating to these\n", "(4) Relative overproduction (which keeps many workers employed in relatively backward industries, such as luxury goods, where the organic composition of capital is low);\n\n(5) Foreign trade (which offers cheaper commodities and more profitable channels of investment); and\n\n(6) The increase of \"stock capital\" (interest bearing capital, whose low rate of return is not averaged with others).\n", "BULLET::::- \"The tendency of the rate of profit to fall\". The accumulation of capital, the general advancement of techniques and scale of production, and the inexorable trend to oligopoly by the victors of capitalist market competition, all involve a general tendency for the degree of capital intensity, i.e., the \"organic composition of capital\" of production to rise. All else constant, this is claimed to lead to a fall in the rate of profit, which would slow down accumulation.\n", "In \"adaptive learning\" or \"adaptive expectations\" models, investors are assumed to be imperfectly rational, basing their reasoning only on recent experience. In such models, if the price of a given asset rises for some period of time, investors may begin to believe that its price always rises, which increases their tendency to buy and thus drives the price up further. Likewise, observing a few price decreases may give rise to a downward price spiral, so in models of this type large fluctuations in asset prices may occur. Agent-based models of financial markets often assume investors act on the basis of adaptive learning or adaptive expectations.\n", "Zerbe and McCurdy connected criticism of market failure paradigm to transaction costs. 
Market failure paradigm is defined as follows:\n\n\"A fundamental problem with the concept of market failure, as economists occasionally recognize, is that it describes a situation that exists everywhere.”\n", "In the case of Northern Rock, Bear Stearns or Lehman Brothers, all three wiped out by the subprime crisis, in 2008, if the trading room finally could not find counterparts on the money market to refinance itself, and therefore had to face a liquidity crisis, each of those defaults is due to the company's business model, not to a dysfunction of its trading room.\n\nOn the contrary, in the examples shown below, if the failure has always been precipitated by market adverse conditions, it also has an operational cause :\n", "Besides, a liquidity crisis may even result due to \"uncertainty\" associated with market activities. Typically, market participants jump on the financial innovation bandwagon, often before they can fully apprehend the risks associated with new financial assets. Unexpected behaviour of such new financial assets can lead to market participants disengaging from risks they don't understand and investing in more liquid or familiar assets. This can be described as the Information Amplification Mechanism. In the subprime mortagage crisis, rapid endorsement and later abandonment of complicated structured finance products such as collateralized debt obligations, mortgage-backed securities, etc. played a pivotal role in amplifying the effects of a drop in property prices.\n", "Classical economics and neoclassical economics posit that market clearing happens by the price adjusting—upwards if demand exceeds supply and downwards if supply exceeds demand. Therefore, it reaches equilibrium at a price that both buyers and sellers will accept, and, in the absence of outside interference (in a free market), this will happen.\n\nThis has not happened for many types of financial assets during the financial crisis that began in 2007, hence one speaks of \"the market breaking down\".\n", "BULLET::::- 1637: Bursting of tulip mania in the Netherlands – while tulip mania is popularly reported as an example of a financial crisis, and was a speculative bubble, modern scholarship holds that its broader economic impact was limited to negligible, and that it did not precipitate a financial crisis.\n", "Unfamiliarity with recent technical and financial innovations may help explain how investors sometimes grossly overestimate asset values. Also, if the first investors in a new class of assets (for example, stock in \"dot com\" companies) profit from rising asset values as other investors learn about the innovation (in our example, as others learn about the potential of the Internet), then still more others may follow their example, driving the price even higher as they rush to buy in hopes of similar profits. If such \"herd behaviour\" causes prices to spiral up far above the true value of the assets, a crash may become inevitable. If for any reason the price briefly falls, so that investors realize that further gains are not assured, then the spiral may go into reverse, with price decreases causing a rush of sales, reinforcing the decrease in prices.\n", "Section::::21st century Marxist controversies.:Pollution.:Clean technology.\n\nThirdly, the new technologies may be cleaner and cheaper, rather than more expensive and more polluting. 
For example, in 2014, the municipal authorities in the Dutch townships of Beverwijk, Heemskerk, Uitgeest and Velsen offered residents a subsidy of up to €1,050 (US$1,300) if they traded in their old petrol-driven scooter for an electric scooter.\n\nSection::::21st century Marxist controversies.:Pollution.:No consensus.\n", "An additional effect of sudden stops and third generation crises in emerging markets are related to financial institutions and sudden stops in short term capital inflows, in comparison to previous crises where the main features were related to fiscal imbalances or weakness in real activity. In this type of model, international financial markets play a key role, where small open economies face a problem of international illiquidity during the crisis episodes, associated with the collapse of the financial system.\n", "More than a third of the private credit markets thus became unavailable as a source of funds. In February 2009, Ben Bernanke stated that securitization markets remained effectively shut, with the exception of conforming mortgages, which could be sold to Fannie Mae and Freddie Mac.\n\n\"The Economist\" reported in March 2010: \"Bear Stearns and Lehman Brothers were non-banks that were crippled by a silent run among panicky overnight \"repo\" lenders, many of them money market funds uncertain about the quality of securitized collateral they were holding. Mass redemptions from these funds after Lehman's failure froze short-term funding for big firms.\"\n", "LTCM's strategies were compared (a contrast with the market efficiency aphorism that there are no $100 bills lying on the street, as someone else has already picked them up) to \"picking up nickels in front of a bulldozer\"—a likely small gain balanced against a small chance of a large loss, like the payouts from selling an out-of-the-money naked call option.\n\nSection::::Aftermath.\n\nIn 1998, the chairman of Union Bank of Switzerland resigned as a result of a $780 million loss incurred from the short put option on LTCM, which had become very significantly in the money due to its collapse.\n", "An example of asset-based loan usage was when the global securitization market shrank to an all-time low during the aftermath of the collapse of the investment bank Lehman Brothers Holdings Inc in 2008. Within Europe in 2008, over 710 billion euros worth of bonds were issued, backed largely by asset-based loans, such as home and auto loans.\n", "Deleveraging is frustrating and painful for private sector entities in distress: selling assets at a discount can itself lead to heavy losses. In addition, dysfunctional security and credit markets make it difficult to raise capital from public market. Private capital market is often no easier: equity holders usually have already incurred heavy losses themselves, bank/firm share prices have fallen substantially and are expected to fall further, and the market expects the crisis to last long. These factors can all contribute to hindering the sources of private capital and the effort of deleveraging.\n\nSection::::In macroeconomics.\n", "In 1929 the Communist Academy in Moscow published \"The Capitalist Cycle: An Essay on the Marxist Theory of the Cycle\", a 1927 report by Bolshevik theoretician Pavel Maksakovsky to the seminar on the theory of reproduction at the Institute of Red Professors of the Communist Academy. 
This work explains the connection between crises and regular business cycles based on the cyclical dynamic disequilibrium of the reproduction schemes in volume 2 of \"Capital\". This work rejects the various theories elaborated by \"Marxian\" academics. In particular it explains that the collapse in profits following a boom and crisis is not the result of any long term tendency but is rather a cyclical phenomenon. The recovery following a depression is based on replacement of labor-intensive techniques that have become uneconomic at the low prices and profit margins following the crash. This new investment in less labor-intensive technology takes market share from competitors by producing at lower cost while also lowering the average rate of profit and thus explains the actual mechanism for both economic growth with improved technology and a long run tendency for the rate of profit to fall. The recovery eventually leads to another boom because the lag for gestation of fixed capital investment results in prices that continue such investment until eventually the completed projects deliver overproduction and a crash.\n", "Wolstenholme decried the trend for clients to focus on short term economic issues rather than considering the built environment as a long term asset. He noted that clients were tending to move away from collaborative partnering contracts to traditional method to allow them to exploit the increased competition in the market following the recession.\n", "Some markets can fail due to the nature of the goods being exchanged. For instance, goods can display the attributes of public goods or common goods, wherein sellers are unable to exclude non-buyers from using a product, as in the development of inventions that may spread freely once revealed. This can cause underinvestment because developers cannot capture enough of the benefits from success to make the development effort worthwhile. This can also lead to resource depletion in the case of common-pool resources, where, because use of the resource is rival but non-excludable, there is no incentive for users to conserve the resource. An example of this is a lake with a natural supply of fish: if people catch the fish faster than they can reproduce, then the fish population will dwindle until there are no fish left for future generations.\n", "Two key assumptions limit the effectiveness of the credit market in the model. First, the knowledge of the \"farmers\" is an essential input to their own investment projects—that is, a project becomes worthless if the farmer who made the investment chooses to abandon it. Second, farmers cannot be forced to work, and therefore they cannot sell off their future labor to guarantee their debts. Together, these assumptions imply that even though farmers' investment projects are potentially very valuable, lenders have no way to confiscate this value if farmers choose not to pay back their debts.\n", "In Mackay's account, the panicked tulip speculators sought help from the government of the Netherlands, which responded by declaring that anyone who had bought contracts to purchase bulbs in the future could void their contract by payment of a 10 percent fee. Attempts were made to resolve the situation to the satisfaction of all parties, but these were unsuccessful. 
The mania finally ended, Mackay says, with individuals stuck with the bulbs they held at the end of the crash—no court would enforce payment of a contract, since judges regarded the debts as contracted through gambling, and thus not enforceable by law.\n", "As Pesaran and Pick (2007) observe, however, financial contagion is a difficult system to estimate econometrically. To disentangle contagion from interaction effects, county-specific variables have to be used to instrument foreign returns. Choosing the crisis period introduces sample selection bias, and it has to be assumed that crisis periods are sufficiently long to allow correlations to be reliably estimated. In consequence, there appears to be no strong consensus in the empirical literature as to whether contagion occurs between markets, or how strong it is.\n", "In an interview regarding the late-2000s recession, Soros referred to it as the most serious crisis since the 1930s. According to Soros, market fundamentalism with its assumption that markets will correct themselves with no need for government intervention in financial affairs has been \"some kind of an ideological excess.\" In Soros's view, the markets' moods—a \"mood\" of the markets being a prevailing bias or optimism/pessimism with which the markets look at reality—\"actually can reinforce themselves so that there are these initially self-reinforcing but eventually unsustainable and self-defeating boom/bust sequences or bubbles.\"\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-02466
Why are we told to breathe in through our nose and out of our mouth while doing sports, meditation etc?
The nose is actually a pretty awesome organ that helps make sure the air you breathe is prepared as well as possible for your lungs. Among other things it filters particles out of the air (those pesky nose hairs are actually good for something!), warms the air up when it is cold, and moisturizes it when it is dry. Clean, moist and warm air is easy on the lungs and keeps your breathing efficient. Additionally, breathing through your nose regulates your air intake, so you aren't prone to hyperventilating. That explains why breathing in through your nose is the best way to breathe in during sports, meditation and basically every other situation. But why is breathing out through your mouth then advised in sports? It's mostly about the speed of your oxygen intake, or, more precisely, about increasing your breathing frequency. As noted above, the flow through your nose is rather limited, and that works in both directions: if you breathe in and out as hard as you can, first through your nose and then through your mouth, you will notice you can move air a lot faster through your mouth. So breathing out through your mouth saves a little time, which means your overall oxygen intake frequency will be higher. TL;DR: Breathing in through your nose is easier on your lungs and more efficient; breathing out through your mouth has few drawbacks and is faster. Together this is the most efficient way to breathe when you need more oxygen.
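The efficiency claim can be checked with a little arithmetic (a back-of-the-envelope sketch; the 500 ml tidal volume and 150 ml dead space are the textbook resting figures that also appear in the passages below): only the air beyond the dead space reaches the alveoli, so slower, deeper breaths deliver more fresh air for the same total airflow.

```python
def alveolar_ventilation(tidal_volume_ml, breaths_per_min, dead_space_ml=150):
    """Fresh air actually reaching the alveoli each minute (ml/min)."""
    return (tidal_volume_ml - dead_space_ml) * breaths_per_min

# Both patterns move the same 5 L/min in total, but differ in useful ventilation:
print(alveolar_ventilation(500, 10))  # slow and deep:    3500 ml/min
print(alveolar_ventilation(250, 20))  # fast and shallow: 2000 ml/min
```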
[ "In T'ai chi, aerobic exercise is combined with specific breathing exercises to strengthen the diaphragm muscles, improve posture and make better use of the body's Qi, (energy). Different forms of meditation, and yoga advocate various breathing methods. A form of Buddhist meditation called anapanasati meaning mindfulness of breath was first introduced by Buddha. Breathing disciplines are incorporated into meditation, certain forms of yoga such as pranayama, and the Buteyko method as a treatment for asthma and other conditions.\n\nIn music, some wind instrument players use a technique called circular breathing. Singers also rely on breath control.\n", "To give an example, in the exercise called \"flying\" the practitioner extends their arms slowly out from the side up to stretching above the head and then slowly back down again. One cycle can take anywhere between 2 and 10 minutes. The practice of breathing in and out of both nose and mouth at the same time is recommended while doing the exercises. The key is to pay close attention to the subtleties of sensations and the quality of experience while doing the exercises, thus linking body and mind in the presence of awareness given to the sensations.\n", "The volume of air that moves in \"or\" out (at the nose or mouth) during a single breathing cycle is called the tidal volume. In a resting adult human it is about 500 ml per breath. At the end of exhalation the airways contain about 150 ml of alveolar air which is the first air that is breathed back into the alveoli during inhalation. This volume air that is breathed out of the alveoli and back in again is known as dead space ventilation, which has the consequence that of the 500 ml breathed into the alveoli with each breath only 350 ml (500 ml - 150 ml = 350 ml) is fresh warm and moistened air. Since this 350 ml of fresh air is thoroughly mixed and diluted by the air that remains in the alveoli after normal exhalation (i.e. the functional residual capacity of about 2.5–3.0 liters), it is clear that the composition of the alveolar air changes very little during the breathing cycle (see Fig. 9). The oxygen tension (or partial pressure) remains close to 13-14 kPa (about 100 mm Hg), and that of carbon dioxide very close to 5.3 kPa (or 40 mm Hg). This contrasts with composition of the dry outside air at sea level, where the partial pressure of oxygen is 21 kPa (or 160 mm Hg) and that of carbon dioxide 0.04 kPa (or 0.3 mmHg).\n", "Section::::Society and culture.:Breathing and physical exercise.\n\nDuring physical exercise, a deeper breathing pattern is adapted to facilitate greater oxygen absorption. An additional reason for the adoption of a deeper breathing pattern is to strengthen the body’s core. During the process of deep breathing, the thoracic diaphragm adopts a lower position in the core and this helps to generate intra-abdominal pressure which strengthens the lumbar spine. Typically, this allows for more powerful physical movements to be performed. As such, it is frequently recommended when lifting heavy weights to take a deep breath or adopt a deeper breathing pattern.\n\nSection::::Further reading.\n", "The most important function of breathing is the supplying of oxygen to the body and the removal of its waste product of carbon dioxide. Under most conditions, the partial pressure of carbon dioxide (PCO2), or concentration of carbon dioxide, controls the respiratory rate.\n", "Because of dead space, taking deep breaths more slowly (e.g. 
ten 500 ml breaths per minute) is more effective than taking shallow breaths quickly (e.g. twenty 250 ml breaths per minute). Although the amount of gas per minute is the same (5 L/min), a large proportion of the shallow breaths is dead space, and does not allow oxygen to get into the blood.\n\nSection::::Mechanical dead space.\n",
 "There appears to be a connection between pulmonary edema and increased pulmonary blood flow and pressure, which results in capillary engorgement. This may occur during higher intensity exercise while immersed or submersed.\n\nFacial immersion at the time of initiating breath-hold is a necessary factor for maximising the mammalian diving reflex in humans.\n\nSection::::Physiological response.:Thermal balance responses.\n",
 "It is only as a result of accurately maintaining the composition of the 3 liters of alveolar air that with each breath some carbon dioxide is discharged into the atmosphere and some oxygen is taken up from the outside air. If more carbon dioxide than usual has been lost by a short period of hyperventilation, respiration will be slowed down or halted until the alveolar partial pressure of carbon dioxide has returned to 5.3 kPa (40 mmHg). It is therefore, strictly speaking, untrue that the primary function of the respiratory system is to rid the body of carbon dioxide “waste”. The carbon dioxide that is breathed out with each breath could probably more correctly be seen as a byproduct of the body’s extracellular fluid carbon dioxide and pH homeostats.\n",
 "Deep breathing exercises are sometimes used as a form of relaxation that, when practiced regularly, may lead to the relief or prevention of symptoms commonly associated with stress, which may include high blood pressure, headaches, stomach conditions, depression, anxiety, and others.\n\nDue to the lung expansion being lower (inferior) on the body as opposed to higher up (superior), it is referred to as 'deep' and the higher lung expansion of rib cage breathing is referred to as 'shallow'. The actual volume of air taken into the lungs with either means varies.\n\nSection::::Relation to yoga and meditation.\n",
 "Carl Stough was a student of choral conducting at Westminster Choir College in New Jersey in the 1940s when he started to be fascinated with breathing. As a singer, he knew how important small and steady airflow was to the production of voice. He investigated the meaning of \"breath support\" that all singers are confronted with. He was trying to understand how to obtain optimal breath support for himself and the singers he was conducting in his choirs.\n",
 "Although many professional wind players find circular breathing highly useful, few pieces of European orchestral music composed before the 20th century actually require its use. However, the advent of circular breathing among professional wind players has allowed for the transcription of pieces originally composed for string instruments which would be unperformable on a wind instrument without the aid of circular breathing. A notable example of this phenomenon is \"Moto Perpetuo\", transcribed for trumpet by Rafael Méndez from the original work for violin by Paganini.\n",
 "For humans, the typical respiratory rate for a healthy adult at rest is 12–18 breaths per minute. The respiratory center sets the quiet respiratory rhythm at around two seconds for an inhalation and three seconds for an exhalation.
This five-second cycle gives the lower end of the average range: 60 s / 5 s = 12 breaths per minute.\n\nAverage resting respiratory rates by age are:\n\nBULLET::::- birth to 6 weeks: 30–40 breaths per minute\n\nBULLET::::- 6 months: 25–40 breaths per minute\n\nBULLET::::- 3 years: 20–30 breaths per minute\n\nBULLET::::- 6 years: 18–25 breaths per minute\n\nBULLET::::- 10 years: 17–23 breaths per minute\n\nBULLET::::- Adults: 12–18 breaths per minute\n",
 "Human infants are sometimes considered obligate nasal breathers, but generally speaking healthy humans may breathe through their nose, their mouth, or both. During rest, breathing through the nose is common for most individuals. Breathing through both nose and mouth during exercise is also normal, a behavioral adaptation to increase air intake and hence supply more oxygen to the muscles. Mouth breathing may be called abnormal when an individual breathes through the mouth even during rest. Some sources use the term \"mouth breathing habit\", but this incorrectly implies that the individual is fully capable of normal nasal breathing and is breathing through their mouth out of preference. However, in about 85% of cases, mouth breathing represents an involuntary, subconscious adaptation to reduced openness of the nasal airway, and mouth breathing is a requirement simply in order to get enough air. Chronic mouth breathing in children may affect dental and facial growth. It may also cause gingivitis (inflamed gums) and halitosis (bad breath), especially upon waking if mouth breathing occurs during sleeping.\n",
 "The alveoli are the dead-end terminals of the \"tree\", meaning that any air that enters them has to exit via the same route. A system such as this creates dead space, a volume of air (about 150 ml in the adult human) that fills the airways after exhalation and is breathed back into the alveoli before environmental air reaches them. At the end of inhalation the airways are filled with environmental air, which is exhaled without coming in contact with the gas exchanger.\n\nSection::::Mammals.:Ventilatory volumes.\n",
 "In humans, about a third of every resting breath has no change in O2 and CO2 levels. In adults, it is usually in the range of 150 mL.\n\nDead space can be increased (and better envisioned) by breathing through a long tube, such as a snorkel. Even though one end of the snorkel is open to the air, when the wearer breathes in, they inhale a significant quantity of air that remained in the snorkel from the previous exhalation. Thus, a snorkel increases the person's dead space by adding even more \"airway\" that doesn't participate in gas exchange.\n\nSection::::Components.\n",
 "In addition to the lung deterioration observed for the various etiologies and mechanisms of respiratory compromise, severe respiratory compromise can have a concomitant impact on non-pulmonary systems of the body.\n\nSection::::Diagnosis.\n\nCentral to implementing therapies to reverse or mitigate a state of respiratory compromise is an accurate diagnosis of the condition. Correctly diagnosing respiratory compromise requires a screening to determine the amount of gas in the patient's bloodstream. Two different tests are available for clinical diagnosis.\n\nTesting and monitoring blood gas levels requires one of the following diagnostic procedures:\n\nBULLET::::- Pulse oximetry\n",
 "Exercise increases the breathing rate due to the extra carbon dioxide produced by the enhanced metabolism of the exercising muscles.
In addition, passive movements of the limbs reflexively produce an increase in the breathing rate.\n\nInformation received from stretch receptors in the lungs limits tidal volume (the depth of inhalation and exhalation).\n\nSection::::Mammals.:Responses to low atmospheric pressures.\n",
 "Positive pressure ventilation, in which air is forced into the lungs, is needed when oxygenation is significantly impaired. Noninvasive positive pressure ventilation, including continuous positive airway pressure (CPAP) and bi-level positive airway pressure (BiPAP), may be used to improve oxygenation and treat atelectasis: air is blown into the airways at a prescribed pressure via a face mask. Noninvasive ventilation has advantages over invasive methods because it does not carry the risk of infection that intubation does, and it allows normal coughing, swallowing, and speech. However, the technique may cause complications; it may force air into the stomach or cause aspiration of stomach contents, especially when the level of consciousness is decreased.\n",
 "Mindfulness and Awareness Trainings use conscious breathing for training awareness and body consciousness.\n\nVipassana Meditation focuses on breathing in and around the nose to calm the mind (anapanasati).\n\nSection::::Applications.:Psychology and psycho-therapy.\n\nAccelerating and deepening the breathing can be used to access suppressed nonverbal memories.\n\nRebirthing uses conscious breathing to purge repressed birth memories and traumatic childhood memories.\n\nHolotropic Breathing was developed by Stanislav Grof and uses deepened breathing to allow access to non-ordinary states of consciousness.\n",
 "Levels of CO2 rise in the blood when the metabolic use of O2, and thus the production of CO2, is increased during, for example, exercise. The CO2 in the blood is transported largely as bicarbonate (HCO3-) ions, by conversion first to carbonic acid (H2CO3) by the enzyme carbonic anhydrase, and then by dissociation of this acid into H+ and HCO3-. Build-up of CO2 therefore causes an equivalent build-up of the dissociated hydrogen ions, which, by definition, decreases the pH of the blood. The pH sensors on the brain stem immediately sense this fall in pH, causing the respiratory center to increase the rate and depth of breathing. The consequence is that the partial pressure of CO2 (PCO2) does not change from rest going into exercise. During very short-term bouts of intense exercise the release of lactic acid into the blood by the exercising muscles causes a fall in the blood plasma pH, independently of the rise in the PCO2, and this will stimulate pulmonary ventilation sufficiently to keep the blood pH constant at the expense of a lowered PCO2.\n",
 "The primary purpose of breathing is to bring atmospheric air (in small doses) into the alveoli where gas exchange with the gases in the blood takes place. The equilibration of the partial pressures of the gases in the alveolar blood and the alveolar air occurs by diffusion. At the end of each exhalation, the adult human lungs still contain 2,500–3,000 mL of air, their functional residual capacity or FRC. With each breath (inhalation) only as little as about 350 mL of warm, moistened atmospheric air is added and mixed well with the FRC. Consequently, the gas composition of the FRC changes very little during the breathing cycle.
Since the pulmonary capillary blood equilibrates with this virtually unchanging mixture of air in the lungs (which has a substantially different composition from that of the ambient air), the partial pressures of the arterial blood gases also do not change with each breath. The tissues are therefore not exposed to swings in oxygen and carbon dioxide tensions in the blood during the breathing cycle, and the peripheral and central chemoreceptors do not need to \"choose\" the point in the breathing cycle at which the blood gases need to be measured, and responded to. Thus the homeostatic control of the breathing rate simply depends on the partial pressures of oxygen and carbon dioxide in the arterial blood. This then also maintains the constancy of the pH of the blood.\n", "From the respiratory center, the muscles of respiration, in particular the diaphragm, are activated to cause air to move in and out of the lungs.\n\nSection::::Control of respiratory rhythm.\n\nSection::::Control of respiratory rhythm.:Ventilatory pattern.\n", "Lung capacity can be expanded through flexibility exercises such as yoga, breathing exercises, and physical activity. A greater lung capacity is sought by people such as athletes, freedivers, singers, and wind-instrument players. A stronger and larger lung capacity allows more air to be inhaled into the lungs. In using lungs to play a wind instrument for example, exhaling an expanded volume of air will give greater control to the player and allow for a clearer and louder tone.\n\nSection::::See also.\n\nBULLET::::- Spirometry\n\nSection::::External links.\n\nBULLET::::- Lung function fundamentals (anaesthetist.com)\n\nBULLET::::- RT Corner (educational site for RT's and nurses)\n", "The lungs are normally protected against aspiration by a series of \"protective reflexes\" such as coughing and swallowing. Significant aspiration can only occur if the protective reflexes are absent or severely diminished (in neurological disease, coma, drug overdose, sedation or general anesthesia). In intensive care, sitting patients up reduces the risk of pulmonary aspiration and ventilator-associated pneumonia.\n", "The musician inhales fully and begins to exhale and blow. When the lungs are nearly empty, the last volume of air is blown into the mouth, and the cheeks are inflated with part of this air. Then, while still blowing this last bit of air out by squeezing the cheeks, the musician must very quickly fill the lungs by inhaling through the nose prior to running out of the air in the mouth. If done correctly, by the time the air in the mouth is nearly exhausted the musician can begin to exhale from the lungs once more, ready to repeat the process again. Essentially, circular breathing bridges the gap between exhalations with air stored in the cheeks, an extra air reserve to play with while sneaking in a breath through the nose.\n" ]
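The CO2-to-pH link described in these passages can be made concrete with the Henderson-Hasselbalch relation for the bicarbonate buffer. This is a minimal sketch, not taken from the quoted text: the constants (pKa 6.1, CO2 solubility 0.03 mmol/L per mmHg) and the normal bicarbonate level of 24 mmol/L are standard textbook values.

```python
import math

# Henderson-Hasselbalch for the blood bicarbonate buffer:
#   pH = pKa + log10([HCO3-] / (0.03 * PCO2))
# pKa = 6.1 and 0.03 mmol/L per mmHg are standard textbook constants.
PKA = 6.1
CO2_SOLUBILITY = 0.03  # mmol of dissolved CO2 per litre per mmHg

def blood_ph(pco2_mmhg, hco3_mmol_per_l=24.0):
    return PKA + math.log10(hco3_mmol_per_l / (CO2_SOLUBILITY * pco2_mmhg))

print(round(blood_ph(40), 2))  # normal PCO2 of ~40 mmHg -> 7.4
print(round(blood_ph(80), 2))  # CO2 build-up doubling PCO2 -> 7.1
```

Doubling PCO2 drops the computed pH from 7.40 to 7.10, which is exactly the fall the brain-stem pH sensors respond to by driving faster, deeper breathing.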
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-00923
Why do a lot of "gaming chairs" have a bucket seat design to them?
My guess is that it's simply because bucket seats take a car to the "next level", and the same terminology and logic get applied to gaming.
[ "A gaming chair is one designed specially for the comfort of video game players. The history of the gaming chair originated from racing games such as Need For Speed, FlatOut, Dirt, etc. The original idea was to replicate the feel you have when driving a sporty car. This is why almost all gaming chairs are designed to look like a car seat. They have very high backrests and flared out sides. The sides of the seat will typically have additional padding. The sides of the backrest will be slightly curved inward. Most Models will also have some type of cutouts in the backrest to help add to the sporty look.\n", "BULLET::::- Folding seat, a fixed seat on a bus, a tram or a passenger car\n\nBULLET::::- Friendship bench, a special place in a school playground where a child can go when he or she wants someone to talk to\n\nSection::::G.\n\nBULLET::::- Gaming chair, legless, curved/L-shaped, generally upholstered, and sometimes contains built-in electronic devices like loudspeakers and vibration to enhance the video game experience; the five main types of gaming chairs are bean bags, rockers, pedestals, racers, and cockpits\n\nBULLET::::- Garden Egg chair, designed by Peter Ghyczy and a modernist classic\n", "BULLET::::- Lawn chair, usually a light, folding chair for outdoor use on soft surfaces. The left and right legs are joined along the ground into a single foot to make a broader contact area with the ground. Individual feet would otherwise dig into soft grass.\n\nBULLET::::- Lifeguard chairs, enable a lifeguard to sit on a high perch at the beach to better look for swimmers in distress\n\nBULLET::::- Lift chair, a powered lifting mechanism that pushes the entire chair up from its base, allowing the user to easily move to a standing position\n", "BULLET::::- Dante chair, similar to the Savonarola chair with a more solid frame and a cushioned seat\n\nBULLET::::- Deckchair, a chair with a fabric or vinyl back and seat that folds flat by a scissors action round a transverse axis. The fabric extends from the sitter's feet to head. It may have an extended seat that is meant to be used as a leg rest and may have armrests. It was originally designed for passenger lounging while aboard ocean liners or ships.\n", "The saddle chair Håg Capisco was launched in the 1980s and inspired by the horseback rider's dynamic posture. The goal, however, was to create a sitting device or work chair that would invite the user to assume the greatest number of sitting postures possible. In 2010 this design classic was made accessible for a wider audience when the Capisco Puls was launched.\n", "BULLET::::- Fiddleback chair, a wooden chair of the Empire period, usually with an upholstered seat, in which the splat resembles a fiddle\n\nBULLET::::- Fighting chair is a chair on a boat used by anglers to catch large saltwater fish. The chair typically swivels and has a harness to keep the angler strapped in should the fish tug hard on the line.\n", "The \"Sparco Sprint Bucket Seat\" is the North American version of the LPSK-02002. Though, it replaces the \"Fighter\" bucket seat by a similar Sparco model called \"Sprint\". 
Although the Sparco / Gran Turismo plate is included, the Sparco seat is a regular one with no GT logo.\n\nSection::::Racing Cockpit Pro.:Racing Seat Pro Siena (SRCP-002).\n", "BULLET::::- Papasan chair, a large, rounded, bowl-shaped chair with an adjustable angle similar to that of a futon; the bowl rests in an upright frame made of sturdy wicker or wood originally from the Philippines\n\nBULLET::::- Parsons chair, curving wooden chair named for the Parsons School of Design in New York, where it was created and widely copied today\n\nBULLET::::- Patio chair, any outdoor chair meant for use on a hard surface (contrast with lawn chairs) designed so as to not collect water and dry quickly after rain\n", "BULLET::::- Metal, Metal mesh or wire woven to form seat\n\nSection::::Standards and specifications.\n\nDesign considerations for chairs have been codified into standards. ISO 9241, \"Ergonomic requirements for office work with visual display terminals (VDTs) – Part 5: Workstation layout and postural requirements\", is the most common one for modern chair design.\n", "In place of a built-in footrest, some chairs come with a matching ottoman. An ottoman is a short stool that is intended to be used as a footrest but can sometimes be used as a stool. If matched to a glider chair, the ottoman may be mounted on swing arms so that the ottoman rocks back and forth with the main glider.\n", "BULLET::::- High chair, a children's chair to raise them to the height of adults for feeding. They typically come with a detachable tray so that the child can sit apart from the main table. Booster chairs raise the height of children on regular chairs so they can eat at the main dining table. Some high chairs are clamped directly to the table and thus are more portable.\n\nBULLET::::- Hanging Egg Chair, designed by Danish furniture designer Nanna Ditzel in 1957\n\nSection::::I.\n", "According to a 2010 \"Bloomberg Businessweek\" article, the Aeron chair \"made a fetish of lumbar support\". Galen Kranz has commented that while the company is aware that a perching position (facilitated by the chair's rounded front rail) is preferable, it put in the lumbar support to conform to public expectations—\"because that's what people think is required for it to be a scientifically 'good' chair\".\n", "The chair is available with a number of different undercarriages — as a regular four-legged chair, an office chair with five wheels and as a bar stool. It comes with armrests, a writing-table attached, and different forms of upholstering. To some extent, these additions mar the simple aesthetics of the chair, while contributing practical elements.\n", "BULLET::::- Dining chair, designed to be used at a dining table; typically, dining chairs are part of a dining set, where the chairs and table feature similar or complementary designs. The oldest known depiction of dining chairs is a seventh-century B.C bas-relief of an Assyrian king and queen on very high chairs.\n", "BULLET::::- Card protectors: In games where all of a player's cards are facedown, some players use items like specialty chips or glass figures to place on top of their cards to protect them from being accidentally discarded.\n", "Bucket seat\n\nA bucket seat is a car seat contoured to hold one person, distinct from a flat bench seat designed to fit multiple people. 
In its simplest form it is a rounded seat for one person with high sides, but may have curved sides that partially enclose and support the body in high-performance automobiles.\n", "Some machines, such as \"Ms. Pac-Man\" and \"Joust\", are occasionally in smaller boxes with a flat, clear glass or acrylic glass top; the player sits at the machine playing it, looking down. This style of arcade game is known as a \"cocktail-style arcade game table\" or tabletop arcade machine, since they were first popularized in bars and pubs. For two player games on this type of machine, the players sit on opposite sides with the screen flipped upside down for each player. A few cocktail-style games had players sitting next to rather than across from one another. Both \"Joust\" and \"Gun Fight\" had these type of tables.\n", "BULLET::::- Car chair, a car seat in an automobile in which either the pilot or passenger sits, customarily in the forward direction. Many car chairs are adorned in leather or synthetic material designed for comfort or relief from the noted stress of being seated. Variants include a toddler's or infant's carseat, which are often placed atop an existing chair and secured by way of extant seat belts or other such securant articles.\n\nBULLET::::- Carver chair, similar to a Brewster chair and from the same region and period\n\nBULLET::::- Cathedra, a bishop's ceremonial chair\n", "In the modern era, cheaper plastic items often attempt to mimic more expensive wooden and metal products, though they are only skeuomorphic if new ornamentation references the original functionality, such as molded screw heads in molded plastic items. Another well-known skeuomorph is the plastic Adirondack chair. The physical \"arm\" lever on a \"one-armed bandit\" gambling machine is a skeuomorphic throwback feature on modern computerized slot machines, since it is no longer required to set physical mechanisms and gears into motion.\n", "BULLET::::- Shower chair, a chair which is not damaged by water, sometimes on wheels, and used as a disability aid in a shower, similar to a wheelchair but has no foot pads; is waterproof and dries quickly\n\nBULLET::::- Side chair, a chair with a seat and back but without armrests; often matched with a dining table or used as an occasional chair\n\nBULLET::::- Sit-stand chair, normally used with a height-adjustable desk, allows the person to lean against this device and be partially supported\n\nBULLET::::- Sling chair, a suspended, free-swinging chair hanging from a ceiling\n", "Games table desk\n\nA Games table desk is an antique desk form which combines the type of surface required for writing with a surface etched or veneered in the pattern of a given board game. It also provides sufficient storage space for writing implements and a separate space for storing game accessories such as counters. It is often called a \"games table\" or game table, which leads to confusion with pieces of furniture (antique or modern) which are built specifically for gaming only, with no intention or provision for use as a desk.\n\nSection::::History.\n", "The games table desk has a great variety of forms. Like most of the desks of that period it was built on commission to whatever new design, or modification of an old design, the customer might want. Most of them have in common a double-sided top, covered on one side with a gaming board and on the other side with tooled leather or some other material suitable for placing paper on it and writing with a quill. 
The top board is sometimes attached loosely and sometimes very securely to the main body of the desk, and it is sometimes hinged. Some desks have not one but several top boards, kept stacked on one another, each having a different board game design on it.\n", "Typical American card tables from the late colonial and early American periods feature simple, straight lines, an ovolo corner, and square-tapered legs. Furniture makers in New York often created card tables with a fifth leg (to support the opened top) hinged to the rear of the table, long reeded legs with swelled feet that end in cylinders, and veneered sides and crossbanded edges around the leaves and table.\n\nSection::::Modern poker tables.\n", "Individual bucket-style seats are also used in passenger vans and minivans, although they are not always referred to as such. Unlike those in cars, bucket seats in vans can be configured in different ways or even removed for more cargo storage. In the typical minivan configuration, the front and middle rows have two bucket seats each, while the third-row seat has a three-person bench, for a total of seven passengers. Honda Odyssey 2005-2010 models (except for the base trim) adds a stowable \"PlusOneSeat\" between the middle row bucket seats. The Australian Mazda MPV has three seats in the middle and two in the last row.\n", "The design of the chair has been changed several times since its introduction. Initially named \"Poem,\" it was renamed to \"Poäng\" in 1992, and the seat part was changed from tubular steel to wood, which allowed the chair to be flat-packed and led to a price reduction of 21%. The color, pattern, and material of the upholstery were also repeatedly changed to account for changing customer preferences. The Poäng's price has decreased markedly since its introduction. In the 1990s it sold for up to $350 in the U.S. (adjusted for inflation as of 2016) compared to a 2016 price of $79.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-23777
How do vinyl records store sound, and how does a needle reproduce that sound?
Records are pressed with a very tiny spiral groove in the plastic, and the walls of that groove wiggle in the shape of the recorded sound wave. The needle is very delicate, and as it rides along the groove the wiggles make it vibrate. The wiggles are placed and sized very precisely so the needle vibrates at the exact pitch of the recorded sound (like how a vibrating glass makes a specific sound). The vibrating needle alone is too faint to hear, so a magnet and coil in the cartridge turn the vibration into a small electrical signal, which is then amplified and played through a speaker. It's basically a very sophisticated music box.
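As a rough illustration of that answer, here is a toy model of playback. It assumes, as the cartridge passage quoted below describes, that the groove carries the waveform as a physical wiggle and that a magnetic cartridge produces a signal roughly proportional to stylus velocity; the 440 Hz tone, the 25 um excursion, and all names are made-up illustrative values, not measurements:

```python
import math

# Toy playback model: the groove wall carries the sound wave as a lateral
# wiggle; a magnetic cartridge outputs a signal roughly proportional to
# the stylus *velocity*. All numbers here are illustrative, not measured.
SAMPLE_RATE = 48_000   # simulation steps per second
FREQ_HZ = 440.0        # test tone "cut" into the groove
AMPLITUDE_UM = 25.0    # groove excursion in micrometres (assumed)

def groove_position_um(t):
    """Lateral groove displacement at time t, as cut by the lathe."""
    return AMPLITUDE_UM * math.sin(2 * math.pi * FREQ_HZ * t)

# The stylus tracks the groove; differencing successive positions gives
# its velocity, the quantity the moving magnet/coil turns into current.
dt = 1.0 / SAMPLE_RATE
velocity_um_s = [
    (groove_position_um((n + 1) * dt) - groove_position_um(n * dt)) / dt
    for n in range(SAMPLE_RATE // 100)  # 10 ms of audio
]
print(f"peak stylus velocity: {max(velocity_um_s) / 1e4:.1f} cm/s")
```

The computed peak of about 7 cm/s is in the ballpark of real cutting velocities, which is why the cartridge's induced current can trace the original waveform closely enough to be amplified into audible sound.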
[ "In some ways similar to the laser turntable is the IRENE scanning machine for disc records, which images with microphotography, invented by a team of physicists at Lawrence Berkeley Laboratories.\n\nAn offshoot of IRENE, the Confocal Microscope Cylinder Project, can capture a high-resolution three-dimensional image of the surface, down to 200 µm. In order to convert to a digital sound file, this is then played by a version of the same 'virtual stylus' program developed by the research team in real-time, converted to digital and, if desired, processed through sound-restoration programs.\n\nSection::::Formats.\n\nSection::::Formats.:Types of records.\n", "BULLET::::- A laser turntable is a phonograph that plays gramophone records using a laser beam as the pickup instead of a conventional diamond-tipped stylus. This playback system has the unique advantage of avoiding physical contact with the record during playback; instead, a focused beam of light traces the signal undulations in the vinyl, with zero friction, mass and record wear. The laser turntable was first conceived by Robert S. Reis, while working as a consultant of analog signal processing for the United States Air Force and the United States Department of Defense.\n\n1984 LCD projector\n", "A gramophone record, commonly known as a record, or a vinyl record, is an analog sound storage medium consisting of a flat disc with an inscribed, modulated spiral groove. The groove usually starts near the periphery and ends near the center of the disc. Ever since Thomas Edison invented the phonograph in 1877, it produced distorted sound because of gravity's pressure on the playing stylus. In response, Emile Berliner invented a new medium for recording and listening to sound in 1887 in the form of a horizontal disc, originally known as the \"platter\".\n\n1887 Slot machine\n", "Section::::History.\n", "BULLET::::- Scream 4 Distortion: a distortion module with ten different distortion models: Overdrive, Distortion, Fuzz, Tube, Tape, Feedback, Modulate, Warp, Digital and Scream\n\nBULLET::::- DDL-1 Digital Delay Line: a simple delay effect\n\nBULLET::::- CF-101 Chorus/Flanger: a chorus and flanger effect\n\nSection::::Additional Interface.\n", "Pressing the tab key on the computer keyboard reveals the Record racks back side, where you can access additional parameters for the rackmounted devices, including signal cables for audio and CV. This allows users to virtually route cables connecting the devices in the rack like in a traditional hardware based studio. For example, a device's output can be split into two signal chains for different processing, and the connected to different mixer channels. Users can choose where to draw the line between simplicity and precision, allowing Record to remain useful at various levels of knowledge and ambition on the user's part.\n", "BULLET::::- 1928 : Fritz Pfleumer patents a system for recording on paper coated with a magnetizable, powdered steel layer. 
A precursor to tape.\n\nBULLET::::- 1929 : Nikolay Obukhov commissioned Michel Billaudot and Pierre Duvalie to design the Sonorous Cross\n\nBULLET::::- 1929 : Peter Lertes and Bruno Helberger developed the Hellertion\n\nBULLET::::- 1930 : Robert Hitcock completes the Westinghouse Organ\n\nBULLET::::- 1931 : George Beauchamp, the general manager of the National Guitar Corporation, develops the first electric guitar\n\nBULLET::::- 1934 : Laurens Hammond created the first Hammond Organ\n\nBULLET::::- 1935 : Yamaha releases Magna Organ, an early electrostatic reed organ\n",
 "Section::::Magnetic recording.\n\nSection::::Magnetic recording.:Magnetic wire recording.\n\nWire recording or magnetic wire recording is an analog type of audio storage in which a magnetic recording is made on thin steel or stainless steel wire.\n",
 "Greg Beets of the Austin Chronicle praised the book, writing that \"Enjoy the Experience chronicles the golden age of private pressings with contagious aplomb.\" Andrew Frisicano of Time Out Magazine praised the book's concept and how it traces the stories behind the creation of those records and their creators, along with the study of the recordings themselves.\n",
 "BULLET::::- The German physicist Curt Stille (1873-1957) records magnetic sound for film on a perforated steel band. At first, this \"Magnettonverfahren\" (magnetic-sound process) has no success. Years later it is rediscovered for amateur films, providing easy dubbing. A \"Daylygraph\" or Magnettongerät had an amplifier and equalizer, and a mature Magnettondiktiergerät (magnetic dictation machine) was called the \"Textophon\".\n\nBULLET::::- Based on patents he had purchased from Stille, the Englishman E. Blattner brings the \"Blattnerphone\", the first magnetic sound recorder, to the market. It records on a thin steel band.\n",
 "For a sound to be recorded by the Phonograph, it has to go through three distinct steps. First, the sound enters a cone-shaped component of the device, called the microphone diaphragm. That sound causes the microphone diaphragm, which is connected to a small metal needle, to vibrate. The needle then vibrates in the same way, causing its sharp tip to etch a distinctive groove into a cylinder, which was made out of tinfoil.\n\nSection::::The phonograph.:Playback.\n",
 "BULLET::::- The company RCA Victor presents to the public the first real LP record; its 35 cm diameter and 33.33 RPM give sufficient playing time for an entire orchestral work. But the new turntables are initially so expensive that they only gain broad acceptance after the Second World War - then as the vinyl record.\n\nBULLET::::- The French physicist René Barthélemy demonstrates the first public television with sound in Paris. The BBC launches its first sound tests in the UK.\n",
 "BULLET::::- Lipman, Samuel, \"The House of Music: Art in an Era of Institutions\", 1984. See the chapter on \"Getting on Record\", pp.
62–75, about the early record industry and Fred Gaisberg and Walter Legge and FFRR (Full Frequency Range Recording).\n\nBULLET::::- Millard, Andre J., \"America on record : a history of recorded sound\", Cambridge ; New York : Cambridge University Press, 1995.\n\nBULLET::::- Millard, Andre J., \" From Edison to the iPod\", UAB Reporter, 2005, University of Alabama at Birmingham.\n", "In the late summer and early fall of 1929 Edison also briefly produced a high-quality series of thin electrically recorded lateral-cut \"Needle Type\" disc records for use on standard record players.\n\nSection::::Historical background.\n", "An alternative approach is to take a high-resolution photograph or scan of each side of the record and interpret the image of the grooves using computer software. An amateur attempt using a flatbed scanner lacked satisfactory fidelity. A professional system employed by the Library of Congress produces excellent quality.\n\nSection::::Stylus.\n\nA smooth-tipped \"stylus\" (in popular usage often called a \"needle\" due to the former use of steel needles for the purpose) is used to play the recorded groove. A special chisel-like stylus is used to engrave the groove into the \"master record\".\n", "In either type, the stylus itself, usually of diamond, is mounted on a tiny metal strut called a cantilever, which is suspended using a collar of highly compliant plastic. This gives the stylus the freedom to move in any direction. On the other end of the cantilever is mounted a tiny permanent magnet (moving magnet type) or a set of tiny wound coils (moving coil type). The magnet is close to a set of fixed pick-up coils, or the moving coils are held within a magnetic field generated by fixed permanent magnets. In either case, the movement of the stylus as it tracks the grooves of a record causes a fluctuating magnetic field, which causes a small electric current to be induced in the coils. This current closely follows the sound waveform cut into the record, and may be transmitted by wires to an electronic amplifier where it is processed and amplified in order to drive a loudspeaker. Depending upon the amplifier design, a phono-preamplifier may be necessary.\n", "Needle gun\n\nA needle gun is a firearm that has a needle-like firing pin, which can pass through the paper cartridge case to strike a percussion cap at the bullet base. A needle gun with a barrel that has a helical groove or pattern of grooves (\"rifling\") cut into the barrel walls is also called needle rifle.\n\nSection::::Types.\n\nSection::::Types.:Pauly.\n\nThe first experimental needle gun was designed by Jean Samuel Pauly, a Swiss gunsmith.\n", "The first 45 rpm record created for sale was \"PeeWee the Piccolo\" RCA 47-0147 pressed in yellow translucent vinyl at the Sherman Avenue plant, Indianapolis on December 7, 1948, by R. O. Price, plant manager.\n\nIn the 1970s, the government of Bhutan produced now-collectible postage stamps on playable vinyl mini-discs.\n\nSection::::Structure.\n", "BULLET::::- 1927: Remington Typewriter Company and Rand Kardex combine to form Remington Rand. Within a year, Remington Rand acquires the Powers Accounting Machine Company.\n", "Record press\n\nA record press is a machine for manufacturing vinyl records. It is essentially a hydraulic press fitted with thin nickel stampers which are negative impressions of a master disc. Labels and a pre-heated vinyl patty (or \"biscuit\") are placed in a heated mold cavity. Two stampers are used, one for each of side of the disc. 
The record press closes under a pressure of about 150 tons. The process of compression molding forces the hot vinyl to fill the grooves in the stampers, and take the form of the finished record.\n", "BULLET::::- A small practice organ in the home of Norman Johnston, 1964.\n", "Bob and Ted fall out and Doreen goes off with Ted. The movie ends with Bob sitting in a darkened room, listening to the aria from Madame Butterfly. He gets up and drags the phonograph needle across the record several times, placing the needle back on the record. As he sits in the dark crying the record skips repeatedly over the scratched aria.\n\nSection::::Cast.\n\nBULLET::::- Brian Bedford as Bob Handman\n\nBULLET::::- Julie Sommars as Doreen Marshall\n\nBULLET::::- James Farentino as Ted\n\nBULLET::::- Edy Williams as Lavinia\n\nBULLET::::- Nick Navarro as Beatnik\n\nBULLET::::- Pearl Shear as Fat Woman\n", "Section::::New York.\n", "BULLET::::- 1877: Thomas Edison (1847 - 1931) invents the first phonograph, using a tin foil cylinder. For the first time sounds could be recorded and played. A phonograph horn with membrane and needle was arranged in such a way that the needle had contact to the tinfoil.\n\nBULLET::::- 1880: the American physicist Charles Sumner Tainter discovers that many disadvantages of Edison's cylinders can be eliminated if the soundtrack is arranged in spiral form and engraved in a flat, round disk. Technical problems soon ended these experiments. Still, Tainter is regarded as the inventor of the gramophone record.\n", "BULLET::::1. \"A Project For Your Art Department\"Jim KeysorCentury Records7\" 45 RPM, promo, yellow translucent discRecorded in the early 1960sSide A: \"A Project For Your Art Department\"Side B: \"Helpful Hints\"B1: \"Producing Your Album Covers\"B2: \"How To Record\"B3: \"Tape Editing\"B4: \"Marketing Your Records\"\n\nBULLET::::2. \"The Sound of a Secure Future\"James B. KeysorCentury Records FV 13957Vinyl LPRecorded 1961Side A: Matrix ® XY 13957-1Side B: Matrix ® XY 13957-2This was a promo for the Century Records franchise associate programInterviewee – James B. KeysorInterviewer (uncredited) – Gabe Bartold\n\nSection::::External links.\n\nBULLET::::- Century discography, by Robert S. Plante (retrieved May 12, 2017)\n\nSection::::Selected publications.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-00346
How can large animals exist in environments filled with poisonous and venomous wild life?
Sure, some animals die from bites/stings/infections, etc., but so do humans. Animals generally know not to eat plants that will kill them, and they avoid venomous animals whenever possible. There aren't armies of venomous snakes, spiders and scorpions going around killing everything in sight just for the fun of it. Animals and plants are poisonous or venomous for a reason; it's either a defense mechanism or a way to capture food.
[ "Megafauna – in the sense of the largest mammals and birds – are generally K-strategists, with high longevity, slow population growth rates, low mortality rates, and (at least for the largest) few or no natural predators capable of killing adults. These characteristics, although not exclusive to such megafauna, make them vulnerable to human overexploitation, in part because of their slow population recovery rates.\n\nSection::::Evolution of large body size.\n", "The newly discovered diversity of squamate species producing venoms is a treasure trove for those seeking to develop new pharmaceutical drugs; many of these venoms lower blood pressure, for example. Previously known venomous squamates have already provided the basis for medications such as Ancrod, Captopril, Eptifibatide, Exenatide and Tirofiban.\n\nThe world's largest venomous lizard and the largest species of venomous land animal is the Komodo dragon.\n\nSection::::Venom.:Criticism.\n", "Many predators are powerfully built and can catch and kill animals larger than themselves; this applies as much to small predators such as ants and shrews as to big and visibly muscular carnivores like the cougar and lion. \n\nSection::::Specialization.:Diet and behaviour.\n", "The main diet includes mostly small mammals and birds, but also frogs, lizards and tarantulas. Larger prey is struck and released, after which it is tracked down via its scent trail.\n\nSection::::Reproduction.\n", "Larger specimens usually eat animals about the size of a house cat, but larger food items are not unknown: the diet of the common anaconda, \"Eunectes murinus\", is known to include subadult tapirs. Prey is swallowed whole, and may take several days or even weeks to fully digest. Despite their intimidating size and muscular power, they are generally not dangerous to humans.\n", "BULLET::::- Predator-prey relationships – either the predator is affected by the toxin resulting in a decline of predator population and thus increasing the prey population; or the prey population is affected by the toxin resulting in a decline in the prey population that, in essence, will cause a decline in the predator population due to lack of food resources\n", "Some reports suggest that this species produces a large amount of venom that is weak compared to some other vipers. Others, however, suggest that such conclusions are not accurate. These animals are badly affected by stress and rarely live long in captivity. This makes it difficult to obtain venom in useful quantities and good condition for study purposes. For example, Bolaños (1972) observed that venom yield from his specimens fell from 233 mg to 64 mg while they remained in his care. As the stress of being milked regularly has this effect on venom yield, it is reasoned that it may also affect venom toxicity. This may explain the disparity described by Hardy and Haad (1998) between the low laboratory toxicity of the venom and the high mortality rate of bite victims.\n", "Possible predators include rats, mice, moles, skunks, weasels, birds, frogs and toads, lizards, walking insects (e.g., some beetle and cricket varieties), some types of flies, centipedes, and even certain carnivorous snail species, such as \"Strangesta capillacea\".\n\nSection::::Population density.\n", "Predators including big cats, birds of prey, and ants share powerful jaws, sharp teeth, or claws which they use to seize and kill their prey. 
Some predators such as snakes and fish-eating birds like herons and cormorants swallow their prey whole; some snakes can unhinge their jaws to allow them to swallow large prey, while fish-eating birds have long spear-like beaks that they use to stab and grip fast-moving and slippery prey. Fish and other predators have developed the ability to crush or open the armoured shells of molluscs.\n", "BULLET::::- The largest sirenian at up to is the West Indian manatee (\"Trichechus manatus\"). Steller's sea cow (\"Hydrodamalis gigas\") was probably around five times as massive, but was exterminated by humans within 27 years of its discovery off the remote Commander Islands in 1741. In prehistoric times this sea cow also lived along the coasts of northeastern Asia and northwestern North America; it was apparently eliminated from these more accessible locations by aboriginal hunters.\n\nBULLET::::- Superorder Xenarthra\n\nBULLET::::- Order Cingulata\n", "Section::::Immunity.\n\nMany ophiophagous animals seem to be immune to the venom of the usual snakes they prey and feed upon. The phenomenon was studied in the mussurana by the Brazilian scientist Vital Brazil. They have antihemorrhagic and antineurotoxic antibodies in their blood. The Virginia opossum (\"Didelphis virginiana\") has been found to have the most resistance towards snake venom. This immunity is not acquired and has probably evolved as an adaptation to predation by venomous snakes in their habitat.\n", "What’s really special about Gorgonopsids is their patience and implacability. Once they have smelt blood they have a tendency to pursue their prey at all costs. In fact it was this keen sense of smell that originally tempted it into the cold present, lured by the smell of humans and waste from a supermarket. They then store their kills in trees like leopards.\n", "The main method of predation used on ungulates is the \"low flight with sustained grip attack\", which can take between a few seconds to at least 15 minutes to kill the prey. In studies, the average estimated weight of ungulate prey found in golden eagle nests varied between and , depending on the location and species involved. In each case, the weight of the ungulate prey is similar to the average newborn weight for that respective species, and most ungulates taken are about the same weight as the eagle. The taking of larger ungulates is exceptional but has been verified in several cases, and is most likely to happen in late winter or early spring, when other available prey is scarce and (in most of the range) eagles are not concerned with carrying prey to a nest. In Scotland, golden eagles have been confirmed to kill red deer calves up to in mass and have been captured on film attacking an adult red deer but not carrying through with the hunt. Both adult and kid chamois and ibex have been confirmed as prey and, in some cases, have been forced off cliff edges to fall to their deaths, after which they can be consumed. At a Mongolian nest, an apparently live-caught, dismembered adult Mongolian gazelle was observed. Adult pronghorns weighing have been successfully attacked and killed. Unsuccessful attacks on both adult mule and white-tailed deer (\"Odocoileus virginianus\") have been recently filmed but there is only a single account that mentions predation on an adult white-tailed deer. Adult roe deer, being relatively modestly sized, are possibly taken with some regularity unlike adults of other deer species. 
A handful of confirmed attacks on relatively large sheep, exceptionally including healthy adults, estimated to weigh around have occurred in Scotland. One study in Finland found reindeer calves, of an estimated average weight of , were routinely killed. Adult female reindeer weighing have been killed in three cases in central Norway. One golden eagle was captured on a remote wildlife camera in the Russian Far East killing an adult female sika deer, a singular event to capture on film since eagles hunts of even regular prey are difficult to photograph. Females of this race of weigh from Few records of predation on domestic cattle are known but a detailed examination of calf remains has shown that golden eagles in New Mexico, mainly wintering migrants, killed 12 and injured 61 weighing from from 1987 to 1989. No other living bird of prey has been verified to kill prey as heavy as this, although wedge-tailed, martial (\"Polemaetus bellicosus\") and crowned eagles (\"Stephanoaetus coronatus\") have been confirmed to kill prey estimated to weigh up to , and , respectively.\n", "Megafauna play a significant role in the lateral transport of mineral nutrients in an ecosystem, tending to translocate them from areas of high to those of lower abundance. They do so by their movement between the time they consume the nutrient and the time they release it through elimination (or, to a much lesser extent, through decomposition after death). In South America's Amazon Basin, it is estimated that such lateral diffusion was reduced over 98% following the megafaunal extinctions that occurred roughly 12,500 years ago. Given that phosphorus availability is thought to limit productivity in much of the region, the decrease in its transport from the western part of the basin and from floodplains (both of which derive their supply from the uplift of the Andes) to other areas is thought to have significantly impacted the region's ecology, and the effects may not yet have reached their limits.\n", "The largest animal killer is the blue whale, which is the largest animal on Earth. The blue whale mostly feeds on krill (\"euphausiacea\") which is a small, abundant crustacean. Blue whales are almost entirely killed by killer whales and by humans.\n\nChimpanzees wage war against rival groups, killing rival males and eating the baby chimps. Ants also wage warfare on other ants, even engaging in cannibalism.\n\nSection::::Killer plants.\n\nSection::::Killer plants.:Deadly if consumed.\n\nMany plant based items if eaten in sufficient quantities can cause seizures, spasms, tremors, gastroenteritis, cardiovascular collapse, coma, and then death.\n\nSection::::Killer plants.:Deadly if consumed.:Ornamental plants.\n", "Feeding is probably opportunistic, any arthropod that is small enough to consume, which is known to include spiders, ants and termites. Trails of winged ants have been observed as feeding opportunities, and some species are recorded seizing dead insects being carried by a line of worker ants. Juvenile fish are also known to be eaten.\n", "From the earliest of times, hunting has been an important human activity as a means of survival. There is a whole history of overexploitation in the form of overhunting. The overkill hypothesis (Quaternary extinction events) explains why the megafaunal extinctions occurred within a relatively short period of time. This can be traced with human migration. 
The most convincing evidence of this theory is that 80% of the North American large mammal species disappeared within 1000 years of the arrival of humans on the western hemisphere continents. The fastest ever recorded extinction of megafauna occurred in New Zealand, where by 1500 AD, just 200 years after settling the islands, ten species of the giant moa birds were hunted to extinction by the Māori. A second wave of extinctions occurred later with European settlement.\n", "Cuvier's dwarf caiman is considered to be a keystone species whose presence in the ecosystem maintains a healthy balance of organisms. In its absence, fish, such as piranhas, might dominate the environment. The eggs and newly hatched young are most at risk and are preyed on by birds, snakes, rats, raccoons, and other mammals. Adults are protected by the bony osteoderms under the scales and their main predators are jaguars, green anacondas (\"Eunectes murinus\"), and large boa constrictors (\"Boa constrictor\"). \n", "In the lowlands, \"C. basiliscus\" is primarily active during the rainy summer months, and most specimens are found crossing the roads at night. However, a few have been seen basking early in the morning. It has been reported to tame quickly in captivity.\n\nSection::::Feeding.\n\nKlauber reported that the stomachs of seven specimens of \"C. basiliscus\" contained mammal hair, probably belonging to rodents.\n\nSection::::Venom.\n\n\"C. basilicus\" is known to produce large amounts of highly toxic venom, and large specimens should be regarded as very dangerous.\n", "BULLET::::- Giant ibis (\"Pseudibis gigantea\")\n\nBULLET::::- White-rumped vulture (\"Gyps bengalensis\")\n\nBULLET::::- Indian vulture (\"Gyps indicus\")\n\nBULLET::::- Red-headed vulture (\"Sarcogyps calvus\")\n\nBULLET::::- Bengal florican (\"Houbaropsis bengalensis\")\n\nBULLET::::- Spoon-billed sandpiper (\"Eurynorhynchus pygmeus\")\n\nSection::::Fauna.:Molluscs.\n", "The appearance of numbers of Burmese pythons in North Key Largo was forewarning of a serious new threat to the survival, not just of the rare Key Largo Woodrat and Cotton Mouse, but also to another three federally endangered mammals found only in the Florida Keys: the Key Deer, Silver Rice Rat and Keys Marsh Rabbit are all found only in the Lower Florida Keys (on Big Pine Key and a few other islands further down the archipelago). Though pythons and other constrictors (especially boa constrictors) do take other prey, most have special adaptations for detecting and capturing warm-blooded prey (mammals and birds), even in total darkness. And the mammals and birds of south Florida and the Florida Keys have never been exposed to a predatory snake of this size, and may have a hard time adapting defensive strategies before their populations are wiped out. A study of python prey, in Everglades National Park shows an alarming declines among small mammal populations in the park.\n", "Section::::Natural threats.\n\nThe mulga snake (\"Pseudechis australis\") is immune to most Australian snake venom, and is known to also eat young inland taipans. The perentie (\"Varanus giganteus\") is a large monitor lizard that also shares the same habitat. As it grows large enough, it will readily tackle large venomous snakes for prey.\n\nSection::::Interaction with humans.\n\nMany reptile keepers consider it a placid snake to work with.\n", "Section::::Behavior.:Hunting and diet.\n\nPrey includes a wide variety of small to medium-sized mammals and birds. 
The bulk of their diet consists of rodents, but larger lizards and mammals as big as ocelots are also reported to have been consumed. Young boa constrictors eat small mice, birds, bats, lizards, and amphibians. The size of the prey item increases as they get older and larger.\n",
 "Section::::Specialization.:Venom.\n\nMany smaller predators such as the box jellyfish use venom to subdue their prey, and venom can also aid in digestion (as is the case for rattlesnakes and some spiders). The marbled sea snake that has adapted to egg predation has atrophied venom glands, and the gene for its three-finger toxin contains a mutation (the deletion of two nucleotides) that inactivates it. These changes are explained by the fact that its prey does not need to be subdued.\n\nSection::::Specialization.:Physiology.\n",
 "This final group is one of secondary effects. All wild populations of living things have many complex intertwining links with other living things around them. Large herbivorous animals such as the hippopotamus have populations of insectivorous birds that feed off the many parasitic insects that grow on the hippo. Should the hippo die out, so too will these groups of birds, leading to further destruction as other species dependent on the birds are affected. Also referred to as a domino effect, this series of chain reactions is by far the most destructive process that can occur in any ecological community.\n" ]
[ "Large animals shouldn't be able to exist in areas with venemous wildlife. " ]
[ "There are not a large amount of venemous plants in animals in said areas, therefore not enough exist to cause extinction to the non venemous animals in the area." ]
[ "false presupposition" ]
[ "Large animals shouldn't be able to exist in areas with venemous wildlife. ", "Large animals shouldn't be able to exist in areas with venemous wildlife. " ]
[ "normal", "false presupposition" ]
[ "There are not a large amount of venemous plants in animals in said areas, therefore not enough exist to cause extinction to the non venemous animals in the area.", "There are not a large amount of venemous plants in animals in said areas, therefore not enough exist to cause extinction to the non venemous animals in the area." ]
2018-02614
Do planets all orbit the sun on the same plane as models seem to suggest?
Mostly, yes. URL_0 The planets are all within 7 degrees of the ecliptic plane.
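A quick sanity check of that figure, using the planets' standard orbital inclinations to the ecliptic (values quoted from memory and rounded, so treat them as approximate):

```python
# Orbital inclinations to the ecliptic, in degrees (standard values,
# quoted from memory and rounded; treat them as approximate).
INCLINATION_DEG = {
    "Mercury": 7.0, "Venus": 3.4, "Earth": 0.0, "Mars": 1.9,
    "Jupiter": 1.3, "Saturn": 2.5, "Uranus": 0.8, "Neptune": 1.8,
}

worst = max(INCLINATION_DEG, key=INCLINATION_DEG.get)
print(f"largest tilt: {worst} at {INCLINATION_DEG[worst]} degrees")
assert all(tilt <= 7.0 for tilt in INCLINATION_DEG.values())
```

Mercury is the outlier at about 7 degrees; every other planet stays within roughly 3.5 degrees of the ecliptic, which is why solar system models can reasonably draw all the orbits on a single plane.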
[ "Several sets of geocaching caches have been laid out as solar system models.\n\nSection::::A model based on a classroom globe.\n", "BULLET::::- The Kepler Dichotomy among the M Dwarfs: Half of Systems Contain Five or More Coplanar Planets, Sarah Ballard, John Asher Johnson, October 15, 2014\n\nBULLET::::- Exoplanet Predictions Based on the Generalised Titius-Bode Relation, Timothy Bovaird, Charles H. Lineweaver, August 1, 2013\n\nBULLET::::- The Solar System and the Exoplanet Orbital Eccentricity - Multiplicity Relation, Mary Anne Limbach, Edwin L. Turner, April 9, 2014\n\nBULLET::::- The period ratio distribution of Kepler's candidate multiplanet systems, Jason H. Steffen, Jason A. Hwang, September 11, 2014\n", "Some Solar System models attempt to convey the relative scales involved in the Solar System on human terms. Some are small in scale (and may be mechanical—called orreries)—whereas others extend across cities or regional areas. The largest such scale model, the Sweden Solar System, uses the 110-metre (361 ft) Ericsson Globe in Stockholm as its substitute Sun, and, following the scale, Jupiter is a 7.5-metre (25-foot) sphere at Arlanda International Airport, 40 km (25 mi) away, whereas the farthest current object, Sedna, is a 10 cm (4 in) sphere in Luleå, 912 km (567 mi) away.\n", "With a few exceptions, the farther a planet or belt is from the Sun, the larger the distance between its orbit and the orbit of the next nearer object to the Sun. For example, Venus is approximately 0.33 AU farther out from the Sun than Mercury, whereas Saturn is 4.3 AU out from Jupiter, and Neptune lies 10.5 AU out from Uranus. Attempts have been made to determine a relationship between these orbital distances (for example, the Titius–Bode law), but no such theory has been accepted. The images at the beginning of this section show the orbits of the various constituents of the Solar System on different scales.\n", "BULLET::::- a planet might yet have some shred of sect dignity if it is in the hemisphere of the chart corresponding to its inherent sect--for example, if Jupiter is in the same hemisphere as the Sun, whether or not the Sun is above the horizon, or if Venus is in the hemisphere opposite the Sun, whether or not the Sun is below the horizon.\n\nPlanets satisfying all three of these sect conditions were said to be \"Hayz\", but it is not clear how Hayz strength compares to strength from essential dignities.\n\nSection::::Further reading.\n", "BULLET::::- Are Planetary Systems Filled to Capacity? A Study Based on Kepler Results, Julia Fang, Jean-Luc Margot, February 28, 2013\n\nSection::::System architectures.:Orbital configurations.:Planet capture.\n", "Section::::Solar System features.:Scattered disc and Oort cloud.\n", "Klemperer does indeed mention this configuration at the start of his article, but only as an already known set of equilibrium systems before introducing the actual rosettes.\n\nIn Larry Niven's novel \"Ringworld\", the Puppeteers' \"Fleet of Worlds\" is arranged in such a configuration (5 planets spaced at the points of a pentagon) which Niven calls a \"Kemplerer rosette\"; this (possibly intentional) misspelling (and misuse) is one possible source of this confusion. It is notable that these fictional planets were maintained in position by large engines in addition to gravitational force. \n", "In the 18th century the same possibility was mentioned by Isaac Newton in the \"General Scholium\" that concludes his \"Principia\". 
Making a comparison to the Sun's planets, he wrote \"And if the fixed stars are the centers of similar systems, they will all be constructed according to a similar design and subject to the dominion of \"One\".\"\n", "So the varying positions of the Laplace plane at varying distances from the primary planet can be pictured as putting together a warped or non-planar surface, which may be pictured as a series of concentric rings whose orientation in space is variable: the innermost rings are near the equatorial plane of rotation and oblateness of the planet, and the outermost rings near its solar orbital plane. Also, in some cases, larger satellites of a planet (such as Neptune's Triton) can affect the Laplace planes of smaller satellites orbiting the same planet.\n\nSection::::The work of Laplace.\n", "In hierarchical systems the planets are arranged so that the system can be gravitationally considered as a nested system of two bodies, e.g. in a star with a close-in hot jupiter and another gas giant much further out, the star and hot jupiter form a pair that appears as a single object to another planet that is far enough out.\n\nThe generalized regions of stability where planets can exist in binary and hierarchical triple star systems have been empirically mapped.\n\nBULLET::::- Stability of planets in triple star systems, F. Busetti, H. Beust, C. Harley, November 20, 2018\n", "Other, as yet unobserved, orbital possibilities include: double planets; various co-orbital planets such as quasi-satellites, trojans and exchange orbits; and interlocking orbits maintained by precessing orbital planes.\n\nBULLET::::- Extrasolar Binary Planets I: Formation by tidal capture during planet-planet scattering, H. Ochiai, M. Nagasawa, S. Ida, June 26, 2014\n\nBULLET::::- Disruption of co-orbital (1:1) planetary resonances during gas-driven orbital migration, Arnaud Pierens, Sean Raymond, May 19, 2014\n\nSection::::System architectures.:Orbital configurations.:Number of planets, relative parameters and spacings.\n\nBULLET::::- On The Relative Sizes of Planets Within Kepler Multiple Candidate Systems, David R. Ciardi et al. December 9, 2012\n", "Section::::Solar System features.:Outer-system satellites.\n", "Section::::Subsequent evolution.:Planetary migration.\n\nAccording to the nebular hypothesis, the outer two planets may be in the \"wrong place\". Uranus and Neptune (known as the \"ice giants\") exist in a region where the reduced density of the solar nebula and longer orbital times render their formation highly implausible. The two are instead thought to have formed in orbits near Jupiter and Saturn, where more material was available, and to have migrated outward to their current positions over hundreds of millions of years.\n", "BULLET::::- If, as an example, the fourth house cusp has 27 degrees Libra on it and the fifth house cusp has 8 degrees Sagittarius then the house will be intercepted. It will have Venus as its primary ruler, but in addition to Venus, Pluto, Mars, and Jupiter will be secondary rulers.\n", "Eventually, the giant planets reach their current orbital semi-major axes, and dynamical friction with the remaining planetesimal disc damps their eccentricities and makes the orbits of Uranus and Neptune circular again.\n\nIn some 50% of the initial models of Tsiganis and colleagues, Neptune and Uranus also exchange places. 
An exchange of Uranus and Neptune would be consistent with models of their formation in a disk that had a surface density that declined with distance from the Sun, which predicts that the masses of the planets should also decline with distance from the Sun.\n\nSection::::Solar System features.\n", "Section::::Potential issues and an alternative.\n\nA study using a numerical simulation that included gravitational interactions among all objects revealed that a dynamical instability occurred in less than 70 million years. Interactions between planetesimals dynamically heated the disk and led to earlier interactions between the planetesimals and giant planets. This study used a limited number of planetesimals due to computational constraints so it is as yet unknown whether this result would apply to a more complete disk.\n", "Most multiple-star systems are organized in what is called a \"hierarchical system\": the stars in the system can be divided into two smaller groups, each of which traverses a larger orbit around the system's center of mass. Each of these smaller groups must also be hierarchical, which means that they must be divided into smaller subgroups which themselves are hierarchical, and so on. Each level of the hierarchy can be treated as a \"two-body problem\" by considering close pairs as if they were a single star. In these systems there is little interaction between the orbits and the stars' motion will continue to approximate stable Keplerian orbits around the system's center of mass, unlike the unstable trapezia systems or the even more complex dynamics of the large number of stars in star clusters and galaxies.\n", "Some planetaria and related museums often use this type of scale model of the Solar System, with a planetarium dome representing the Sun. Examples of this can be seen in planetaria like the Adler Planetarium and Astronomy Museum, the Hayden Planetarium at the American Museum of Natural History, the Clark Planetarium, the Griffith Observatory, the Louisiana Arts and Sciences Museum, the Adventure Science Center, etc.\n\nSection::::See also.\n\nBULLET::::- Numerical model of the Solar System\n\nSection::::References.\n\nSection::::External links.\n\nBULLET::::- A list of websites related to Solar System models\n\nBULLET::::- Otford Solar System model\n", "If the smaller planets are to be easily visible to the naked eye, large outdoor spaces are generally necessary, as is some means for highlighting objects that might otherwise not be noticed from a distance. The objects in such models do not move. Traditional orreries often did move and some used clockworks to make the relative speeds of objects accurate. These can be thought of as being correctly scaled in time instead of distance.\n\nSection::::Scale models in various locations.\n\nMany towns and institutions have built outdoor scale models of the Solar System. Here is a table comparing these models.\n", "Although the Sun dominates the system by mass, it accounts for only about 2% of the angular momentum. 
The planets, dominated by Jupiter, account for most of the rest of the angular momentum due to the combination of their mass, orbit, and distance from the Sun, with a possibly significant contribution from comets.\n", "Ter Haar and Cameron distinguished between those theories that consider a closed system, which is a development of the Sun and possibly a solar envelope, that starts with a protosun rather than the Sun itself, and state that Belot calls these theories monistic; and those that consider an open system, which is where there is an interaction between the Sun and some foreign body that is supposed to have been the first step in the developments leading to the planetary system, and state that Belot calls these theories dualistic.\n", "In the eighteenth century the same possibility was mentioned by Isaac Newton in the \"General Scholium\" that concludes his \"Principia\". Making a comparison to the Sun's planets, he wrote \"And if the fixed stars are the centres of similar systems, they will all be constructed according to a similar design and subject to the dominion of \"One\".\"\n", "Ongoing analysis of the system has produced several models for the orbital arrangement of the system. There is no current consensus and 3-planet, 4-planet, 5-planet and 6-planet models have been proposed to address the available radial velocity data. Most of these models predict, however, that the inner planets are close in with circular orbits, while outer planets, particularly Gliese 581d, should it exist, are on more elliptical orbits.\n", "At present, few systems have been found to be analogous to the Solar System with terrestrial planets close to the parent star. More commonly, systems consisting of multiple Super-Earths have been detected.\n\nSection::::System architectures.:Components.\n\nSection::::System architectures.:Components.:Planets.\n" ]
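The roughly-2% figure quoted above can be sanity-checked with a back-of-the-envelope estimate. The sketch below compares Jupiter's orbital angular momentum with the Sun's spin angular momentum; the Sun's moment-of-inertia factor k is an assumed round number (the Sun rotates differentially, so this is order-of-magnitude only), and the other giant planets would push the Sun's share down further, consistent with the quoted figure:

```python
import math

G = 6.674e-11                          # gravitational constant, m^3 kg^-1 s^-2
M_sun, R_sun = 1.989e30, 6.957e8       # solar mass (kg) and radius (m)

# Jupiter's orbital angular momentum, L = m*v*r (circular-orbit approximation).
m_jup, a_jup = 1.898e27, 7.785e11      # kg, m
v_jup = math.sqrt(G * M_sun / a_jup)   # circular orbital speed, ~13.1 km/s
L_jup = m_jup * v_jup * a_jup          # ~1.9e43 kg m^2 / s

# Sun's spin angular momentum, L = I*omega, with I = k*M*R^2.
k = 0.07                               # assumed moment-of-inertia factor
omega = 2 * math.pi / (25.4 * 86400)   # ~25-day rotation period
L_sun = k * M_sun * R_sun**2 * omega   # ~2e41 kg m^2 / s

print(f"Sun's share vs. Jupiter alone: {L_sun / (L_sun + L_jup):.1%}")  # ~1%
```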
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-03812
How do small towns survive?
There is going to be some flow of money in through government programs, like teacher salaries, Medicare, Medicaid, Social Security, welfare, building grants, education grants, etc. Sometimes people also have jobs in neighboring places or remote jobs, in which case they are basically exporting their labor. However, it is totally possible for a small town to "dry up." Towns like that often had one main export, be it a factory, a mine, a few farms, etc., and if that employer goes under, the loss can ripple outward and slowly squeeze out every other job.
[ "Small Towns Initiative\n", "As of 2017, Alberta had 87 villages that had a cumulative population of 37,099 in the 2016 Census of Population. Alberta's largest and smallest villages are Stirling and Gadsby with populations of 1,215 and 40 respectively.\n\nWhen a village's population reaches or exceeds 1,000 people, the council may request a change to town status, but the change in incorporated status is not mandatory. Villages with populations less than 300, whether their populations have declined below 300 or they were incorporated as villages prior to the minimum 300 population requirement, are permitted to retain village status.\n", "There are many economic advantages in farming on a small-scale land. Local farmers generate a local economy in their rural communities. An American study showed that small farms with incomes of $100,000 or less spend almost 95 percent of their farm-related expenses within their local communities. The same study took in comparison the fact that farms with incomes greater than $900,000 spend less than 20 percent of their farm-related expenses in the local economy. Thus, small-scale agriculture supports local economy.\n", "A microtown is a municipality with less than 500 residents that is not part of the suburbia of a neighboring city. (Such towns might also be known as villages and hamlets.) Microtowns used to be prevalent in the West and Midwest United States in the 18th and 19th centuries as people moved for cheaper land and started their own municipality. Today, a microtown is usually a strong sign of a locality's decline, as people have moved away to seek more prosperous opportunities elsewhere, although many such places still exist today, always in rural America. The smallest documented microtown is in Maine with a population of exactly one (it is legally a town), although a town usually has some degree of independence. A population of zero is a ghost town.\n", "After the local mining jobs were lost, many in the community began providing goods and services, which included restaurants, food markets, bakery, bars, gas stations, hardware stores, drug stores and other commerce that supports the needs of the community.\n", "BULLET::::- Connecticut Council of Small Towns — website\n\nBULLET::::- Township Officials of Illinois — website\n\nBULLET::::- Indiana Township Association — website\n\nBULLET::::- Michigan Townships Association — website\n\nBULLET::::- Minnesota Association of Townships — website\n\nBULLET::::- Association of Towns of New York — website\n\nBULLET::::- North Dakota Township Officers Association — website\n\nBULLET::::- Ohio Township Association — web pages\n\nBULLET::::- Pennsylvania Association of Township Supervisors — website\n\nBULLET::::- South Dakota Association of Towns and Townships\n\nBULLET::::- Wisconsin Towns Association — website\n", "In the United States, it is common for commuter towns to create disparities in municipal tax rates. When a commuter town collects few business taxes, residents must pay the brunt of the public operating budget in higher property or income taxes. Such municipalities may scramble to encourage commercial growth once an established residential base has been reached.\n", "In Nevada, a town has a form of government, but is not considered to be incorporated. It generally provides a limited range of services, such as land use planning and recreation, while leaving most services to the county. 
Many communities have found this \"semi-incorporated\" status attractive; the state has only 20 incorporated cities, and towns as large as Paradise (186,020 in 2000 Census), home of the Las Vegas Strip. Most county seats are also towns, not cities.\n\nSection::::By country.:United States.:New England.\n", "Essentially, villages are formed from urban communities with populations of at least 300 people. When a village's population exceeds 1,000 people, its council may apply to change its status to that of a town, but the change in incorporated status is not mandatory.\n\nCommunities with shrinking populations are allowed to retain village status even if the number of residents falls below the 300 limit. Some of Alberta's villages have never reached a population of 300 people, but were incorporated as villages before there was a requirement to have a population of 300 or more.\n", "Section::::By country.:United States.:North Carolina.\n\nIn North Carolina, all cities, towns, and villages are incorporated as municipalities. According to the North Carolina League of Municipalities, there is no legal distinction among a city, town, or village—it is a matter of preference of the local government. Some North Carolina cities have populations as small as 1,000 residents, while some towns, such as Cary, have populations of greater than 100,000.\n\nSection::::By country.:United States.:Pennsylvania.\n", "Each individual of the community takes a part to work in these businesses or workshops each day in order to keep them sufficient, sustainable and continuous. It not only provides revenue to the village but also gives each resident an opportunity to work routinely. Residents of Sólheimar are all able to cooperate with each other to help improve each business/workshop and in turn, they learn how to be part of a sustainable community.\n", "Types of rural communities\n\nSociologists have identified a number of different types of rural communities, which have arisen as a result of changing economic trends within rural regions of industrial nations.\n\nThe basic trend seems to be one in which communities are required to become entrepreneurial. Those that lack the sort of characteristics mentioned below, are forced to either seek out their niche or accept eventual economic defeat. These towns focus on marketing and public relations whilst bidding for business and government operations, such as factories or off-site data processing.\n", "BULLET::::- A village should not have a regular agricultural market, although today such markets are uncommon even in settlements which clearly are towns.\n\nBULLET::::- A village does not have a town hall nor a mayor.\n", "Gordon is the author of 13 books. 
His most recent, The Economic Survival of America's Isolated Small Towns, from CRC Press (2015), \"provides a detailed discussion of the context of these towns, from the internal challenges that isolate them and force independent action to the extent to which they can rely on neighboring or other macro-level resources.\" The book is the fourth in a series penned by Gordon and published by CRC Press.\n", "Alberta currently has a total of 108 towns, with a combined population totalling 458,376 as of 2012.\n\nSection::::Municipalities.:Urban municipalities.:Villages.\n\nAccording to Section 80 of the Municipal Government Act (MGA), an area may incorporate as a village if:\n\nBULLET::::- it has a population of 300 people or more; and\n\nBULLET::::- the majority of its buildings are on parcels of land smaller than 1,850 m.\n", "BULLET::::1. To strengthen local economy: Studies have shown that buying from an independent, locally owned business, significantly raises the number of times your money is used to make purchases from other local businesses, service providers and farms—continuing to strengthen the economic base of the community.\n\nBULLET::::2. Increase jobs: Small local businesses are the largest employer nationally in the United States of America.\n\nBULLET::::3. Encourage local prosperity: A growing body of economic research shows that in an increasingly homogenized world, entrepreneurs and skilled workers are more likely to invest and settle in communities that preserve their one-of-a-kind businesses and distinctive character.\n", "Section::::Urban municipalities.:Summer villages.\n\nUnder previous legislation, a community could incorporate as a summer village if it had \"a minimum of 50 separate buildings occupied as dwellings at any time during a six-month period\". A community can no longer incorporate as a summer village under the MGA.\n", "The main character is in charge of running a shop inherited from their deceased grandmother. The player is capable of arranging and stocking the shelves of their shop to their preference and can eventually expand on the size of the shop. In a similar vein to the Harvest Moon series the player is able to interact and befriend various townspeople. The town that the player resides in will consist of ten people at first and eventually build its way to one hundred non-player characters that the player can meet depending on their actions over the course of the story.\n\nSection::::Development.\n", "BULLET::::7. Get better service: Local businesses often hire people with a better understanding of the products they are selling and take more time to get to know customers.\n\nBULLET::::8. Invest in community: Local businesses are owned by people who live in the community, are less likely to leave, and are more invested in the community's future.\n\nBULLET::::9. Put your taxes to good use: Local businesses in town centers require comparatively little infrastructure investment and make more efficient use of public services as compared to nationally owned stores entering the community.\n", "Essentially, towns are formed from urban communities with populations of at least 1,000 people. When a town's population exceeds 10,000 people, its council may apply to change its status to that of a city, but the change in incorporated status is not mandatory.\n\nCommunities with shrinking populations are allowed to retain town status even if the number of residents falls below the 1,000 limit. 
Some of Alberta's towns have never reached a population of 1,000 people, but were incorporated as towns before the current requirement to have a population of 1,000 or more.\n", "To survive longer, townspeople need to scavenge the land around the town, bring back resources and improve the defenses. However, players are not forced to do this, so how long a town can survive depends heavily on each player's experience and willingness to cooperate, and several individuals' leadership.\n\nSection::::Gameplay.:The town.\n\nEach town has the following basic facilities:\n\nBULLET::::- A Well, from which people can take water.\n\nBULLET::::- A Bank, where resources, weapons, food and other items are contributed by townspeople.\n\nBULLET::::- A Construction site, where resources in the Bank are used to build structures.\n", "(b) regeneration policy – the quality of life in towns falls when retailers close down, so there is support from residents for the empty sites to be reused. The properties are available at affordable prices.\n\nc) sustainable development policy: the shops are close to customers, and can be reached on foot, which is particularly valuable for people without cars. To capitalise on this advantage, the shops offer a delivery service.\n\nSection::::Success factors.:A generous financial framework.\n", "Typical businesses include farming, livestock, tailoring, and small retail businesses. Each business has three business owners, to diversify risk and pool skill sets, and supports an average of 20 people based on the size of families in the area. Approximately 80% of Village Enterprise business owners are women.\n", "The people of an organized hamlet may request that the hamlet be incorporated as a village or resort village. In order to qualify, the hamlet must have been an organized hamlet for at least 3 years, have a population of at least 100 in the most recent census, and contain at least 50 separate dwelling units or business premises.\n\nSaskatchewan has 260 villages.\n\nSection::::Municipalities.:Urban municipalities.:Resort villages.\n\nSaskatchewan has 40 resort villages.\n\nSection::::Municipalities.:Rural municipalities.\n", "There is little economic activity occurring in the village which leads people to migrate out of the community. Many persons hope that increase tourist arrivals will increase the economic activities of the community and decrease outward migration.\n\nSection::::Tourism.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-03987
Why are dead bugs always upside down when you find them?
Upside down is the more stable position, so bugs end up there and can't un-end-up there. Most of the time, a dead bug you find has been poisoned, infected with a parasite, or starved to death, and all of these cause a bug to lose coordination as it dies. Think of a bug like a car on stilts: it takes active coordination to stay upright, while lying upside down is much more energetically stable. A healthy bug that gets blown over by the wind or falls can right itself, but a bug that is dying, poisoned, or fighting a parasite can't, so once it lands on its back, it stays that way.
[ "In the southeastern, central, and southwestern part of the United States, the adult \"C. rufifacies\" is one of the first insects to arrive on a fresh corpse. The adults normally arrive within the first 10 minutes after death. The larvae also have a shorter developmental time than other species, but because of their predatory nature, they can also alter entomological-based\" post mortem\" interval estimation. In Texas and Florida, the species emerges from corpses that are in an advanced stage of decomposition.\n\nSection::::Distribution.\n", "Each group or species of insect will be attracted to decomposing remains at different stages of decomposition, as changes within the remains result in the availability of different resources. The predictable order in which the above described insect groups are attracted to and observed on remains is referred to as a succession pattern, and can be used in forensic investigations to estimate the post-mortem interval (PMI) or time since death. This method of PMI estimation is most useful in the later stages of decomposition.\n", "Decomposition is a continuous process that is commonly divided into stages for convenience of discussion. When studying decomposition from an entomological point of view and for the purpose of applying data to human death investigations, the domestic pig Sus scrofa (Linnaeus) is considered to be the preferred human analogs. In entomological studies, five stages of decomposition are commonly described: (1) Fresh, (2) Bloat, (3) Active Decay, (4) Advanced or Post-Decay, and (5) Dry Remains. While the pattern of arthropod colonization follows a reasonably predictable sequence, the limits of each stage of decomposition will not necessarily coincide with a major change in the faunal community. Therefore, the stages of decomposition are defined by the observable physical changes to the state of the carcass. A pattern of insect succession results as different carrion insects are attracted to the varying biological, chemical and physical changes a carcass undergoes throughout the process of decay.\n", "Adventive species may or may not play a significant role in the decomposition of remains. Arthropods in this ecological role are not necessarily attracted to decaying remains, but use it as an extension of their natural habitats. Adventive species originate within the vegetation and soils surrounding decomposing remains. These insects may visit remains from time to time, or use them for concealment, but their presence can only be accounted for by chance. They may also become predators of necrophagous species found at remains. Adventive species include springtails, centipedes and spiders.\n", "Feeding larvae of Calliphoridae flies are the dominant insect group at carcasses during the active decay stage. At the beginning of the stage larvae are concentrated in natural orifices, which offer the least resistance to feeding. Towards later stages, when flesh has been removed from the head and orifices, larvae become more concentrated in the thoracic and abdominal cavities.\n", "Section::::Types.\n\nThere are generally three types of fixation processes depending on the initial specimen:\n\nHeat fixation: After a smear has dried at room temperature, the slide is gripped by tongs or a clothespin and passed through the flame of a Bunsen burner several times to heat-kill and adhere the organism to the slide. Routinely used with bacteria and archaea. 
Heat fixation generally preserves overall morphology but not internal structures.\n", "Infection, past or present, is diagnosed by small round exit holes of 1 to 1.5 mm diameter. Active infections feature the appearance of new exit holes and fine wood dust around the holes.\n", "Scavengers and carnivores such as wolves, dogs, cats, beetles, and other insects feeding on the remains of a carcass can make determining the time of insect colonization much harder. This is because the decomposition process has been interrupted by factors that may speed up decomposition. Corpses with open wounds, whether pre or post mortem, tend to decompose faster due to easier insect access. The cause of death likewise can leave openings in the body that allow insects and bacteria access to the inside body cavities in earlier stages of decay. Flies oviposit eggs inside natural openings and wounds that may become exaggerated when the eggs hatch and the larvae begin feeding.\n", "Big Bad then tests out his next plan, to signal his nephew, so Big Bad's nephew will fling open a closet door, rigged to close an iron maiden on Bugs. Big Bad beckons Bugs for his club picture, with the iron maiden as a backdrop. Bugs pulls all sorts of poses, so Big Bad comes up to demonstrate the right pose. Bugs immediately says \"I get it now,\" which signals the nephew to close the iron maiden—but instead closes it on his uncle. As Bugs steps out, the nephew peeks into the casket, and then closes it again, cringing.\n", "This specific behavior has also been documented in Dominican amber fossils dating back , with queens of the fossil species \"Acropyga glaesaria\" being found preserved with species of the extinct mealybug genus \"Electromyrmococcus\". Older trophobiotic associations have been suggested for the Eocene fossil ant species \"Ctenobethylus goepperti\" based on a Baltic amber fossil entombing thirteen \"C. goepperti\" workers intermingled with a number of aphids. \n", "\"L. sericata\" is an important species to forensic entomologists. Like most calliphorids, \"L. sericata\" has been heavily studied and its lifecycle and habits are well documented. Accordingly, the stage of its development on a corpse is used to calculate a minimum \"post mortem\" interval, so that it can be used to aid in determining the time of death of the victim. The presence or absence of \"L. sericata\" can provide information about the conditions of the corpse. If the insects seem to be on the path of their normal development, the corpse likely has been undisturbed. If, however, the insect shows signs of a disturbed lifecycle, or is absent from a decaying body, this suggests \"post mortem\" tampering with the body. Because \"L. sericata\" is one of the first insects to colonize a corpse, it is preferred to many other species in determining an approximate time of colonization. Developmental progress is determined with relative accuracy by measuring the length and weight of larval lifecycles.\n", "Hanged bodies can be expected to show their own quantity and variety of flies. Also, the amount of time flies will stay on a hanged body will vary in comparison to one found on the ground. A hanged body is more exposed to air and thus will dry out faster leaving less food source for the maggots.\n", "Understanding how a corpse decomposes and the factors that may alter the rate of decay is extremely important for evidence in death investigations. 
Campobasso, Vella, and Introna consider the factors that may inhibit or favor the colonization of insects to be vitally important when determining the time of insect colonization.\n\nSection::::Factors affecting decomposition.:Temperature and climate.\n", "Wraps, garments, and clothing have been shown to affect the rate of decomposition because the corpse is covered by some type of barrier. Wraps, such as tight-fitting tarps, can advance the stages of decay during warm weather when the body is outside. However, loose fitting coverings that are open on the ends may aid colonization of certain insect species and keep the insects protected from the outside environment. This boost in colonization can lead to faster decomposition. Clothing also provides a protective barrier between the body and insects that can delay stages of decomposition. For instance, if a corpse is wearing a heavy jacket, this can slow down decomposition in that particular area and insects will colonize elsewhere. Bodies that are covered in pesticides or in an area surrounded in pesticides may be slow to have insect colonization. The absence of insects feeding on the body would slow down the rate of decomposition.\n", "Vacant niches can best be demonstrated by considering the spatial component of niches in simple habitats. For example, Lawton and collaborators compared the insect fauna of the bracken \"Pteridium aquilinum\", a widely distributed species, in different habitats and geographical regions and found vastly differing numbers of insect species. They concluded that many niches remain vacant (e.g., Lawton 1984).\n", "Section::::Effect of environmental conditions.:Burial.\n\nBurial retards the rate of decomposition, in part because even a few inches of soil covering the corpse will prevent blowflies from laying their eggs on the corpse. The depth of burial, the nature of the soil, and the temperature and moisture content of the soil all affect decay.\n\nSection::::Effect of environmental conditions.:Wet Environments.\n", "Section::::Forensic importance.\n\nThe use of Trogidae in forensic entomology is unknown at this time. Though they typically arrive last in the order of succession, they can be the first in succession on burned and charred bodies. After the burned skin is eaten away by the trogids, the corpse (with now-exposed, \"fresher\" surfaces) allows for viable colonization by other forensically important insects that help determine accurate \"post mortem\" interval estimates.\n", "The removal of corpses carrying infectious disease is crucial to the health of a colony. Efforts to eliminate colonies of fire ants, for instance, include introducing pathogens into the population, but this has limited efficacy where the infected insects are quickly separated from the population. However, certain infections have been shown to delay the removal of dead bodies or alter where they are placed. Although placing corpses farther away reduces the risk of infection, it also requires more energy. Burial and cannibalism are other recorded methods of corpse disposal among social insects. 
Termites have been shown to use burial when they cannot afford to devote workers to necrophoresis, especially when forming a new colony.\n", "Sometimes the spots on the elytra are almost unnoticeable, as they seem to blend in with the rest of the body, this can be seen in the picture on the bottom left of \"Pachnoda marginata peregrina\" in the terrarium under \"As pets\".\n\nThe larvae of the pachnoda can sometimes make a low snore-like noise when making their cocoons.\n\nSection::::Life cycle.\n", "Blowflies and flesh flies are the first carrion insects to arrive, and they seek a suitable oviposition site.\n\nSection::::Animal decomposition.:Stages of decomposition.:Bloat.\n", "Black and protruding sclerotia of \"C. purpurea\" are well known. However, many tropical ergots have brown or greyish sclerotia, mimicking the shape of the host seed. For this reason, the infection is often overlooked.\n\nInsects, including flies and moths, carry conidia of \"Claviceps\" species, but it is unknown whether insects play a role in spreading the fungus from infected to healthy plants.\n\nSection::::Evolution.\n", "At one point, as Bugs is behind a door and the monster is trying to break through, Bugs desperately cries for a doctor (\"Is there a doctor in the house?\") A silhouette from the theater audience stands up and offers, \"I'm a doctor.\" Bugs suddenly relaxes, grins, starts munching a carrot, and asks, \"What's up, Doc?\", just before the monster breaks through and the chase resumes.\n", "George and Elizabeth Peckham were American ethologists and entomologists and described in 1898 how they watched a female \"A. urnaria\" wasp provisioning her burrow. She ran along the ground among purslane plants until she found a small green caterpillar which she paralysed with a sting. She carried this heavy burden through their garden and out into an adjoining field of maize. Although all the cornstalks look similar to the Peckhams, the wasp quickly located the right spot and laid the caterpillar down. She then moved two fragments of soil which had been concealing the entrance to a hole. Picking up the caterpillar, she reversed into the burrow dragging the caterpillar behind her and disappeared from view. They could not see what happened next but knew she was laying an egg beside the caterpillar.\n", "Understanding the stages of decomposition, the colonization of insects, and factors that may affect decomposition and colonization are key in determining forensically important information about the body. Different insects colonize the body throughout the stage of decomposition. In entomological studies these stages are commonly described as fresh, bloat, active decay, advanced decay and dry decay. Studies have shown that each stage is characterized by particular insect species, the succession of which depends on chemical and physical properties of remains, rate of decomposition and environmental factors. Insects associated with decomposing remains may be useful in determining post-mortem interval, manner of death, and the association of suspects. Insect species and their times of colonization will vary according to the geographic region, and therefore may help determine if remains have been moved.\n", "A second method of PMI determination, in early stages of decomposition, by insect evidence utilizes the development rate of colonizing arthropods. This method is usually applied to necrophagous blowflies, as they are often the first to colonize and are associated with remains for the longest period. 
Development rates are only useful in forensic investigations until the first new generation has completed development and left the remains.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-20359
Is blood an irritant to most of our insides when it's not inside the veins and arteries, etc.?
Yes, blood in non-vascular spaces causes problems. Blood is a tissue. It flows and gets replaced, but it is a tissue all the same. If it stops moving, it clots. If it doesn't circulate, it dies. Putting blood into a space that can't handle it is very dangerous. Bleeding into your brain, for instance. Red blood cells will die in those spaces, releasing extra potassium and cellular breakdown products onto the cells around them, which damages those cells and repeats the process. A little bit usually isn't such a problem, though. Think of how nasty it looks when you get a bruise. Now imagine that on the inside of the body where you can't see it.
[ "About 23% of patients have a high level of eosinophils in the blood.\n\nSection::::Diagnosis.:Urinary findings.\n\nUrinary findings include:\n\nBULLET::::- Eosinophiluria: Original studies with Methicillin-induced AIN showed sensitivity of 67% and specificity of 83%. The sensitivity is higher in patients with interstitial nephritis induced by methicillin or when the Hansel's stain is used. However, a 2013 study showed that the sensitivity and specificity of urine eosinophil testing are 35.6% and 68% respectively.\n\nBULLET::::- Isosthenuria\n\nBULLET::::- Blood in the urine and occasional RBC casts\n\nBULLET::::- Sterile pyuria: white blood cells and no bacteria\n", "Although some cases present with black, tarry stool (melena), the blood loss can be subtle, with the anemia symptoms predominating. Fecal occult blood testing is positive when bleeding is active. If bleeding is intermittent the test may be negative at times.\n\nSection::::Pathophysiology.\n", "Blood agent\n\nA blood agent is a toxic chemical agent that affects the body by being absorbed into the blood. Blood agents are fast-acting, potentially lethal poisons that typically manifest at room temperature as volatile colorless gases with a faint odor. They are either cyanide- or arsenic-based.\n\nSection::::Exposure.\n\nBlood agents work through inhalation or ingestion. As chemical weapons, blood agents are typically disseminated as aerosols and take effect through inhalation. Due to their volatility, they are more toxic in confined areas than in open areas.\n", "Cyanide-based blood agents irritate the eyes and the respiratory tract, while arsine is nonirritating. Hydrogen cyanide has a faint, bitter, almond odor that only about half of all people can smell. Arsine has a very faint garlic odor detectable only at greater than fatal concentrations.\n", "In healthy people about 0.5 to 1.5ml of blood escapes blood vessels into the stool each day. Significant amounts of blood can be lost without producing visible blood in the stool, estimated as 200ml in the stomach, 100ml in the duodenum, and lesser amounts in the lower intestine. Tests for occult blood identify lesser blood loss.\n\nSection::::Test performance.:Clinical sensitivity and specificity.\n", "The blood of people killed by blood agents is bright red, because the agents inhibit the use of the oxygen in it by the body's cells. Cyanide poisoning can be detected by the presence of thiocyanate or cyanide in the blood, a smell of bitter almonds, or respiratory tract inflammations and congestions in the case of cyanogen chloride poisoning. There is no specific test for arsine poisoning, but it may leave a garlic smell on the victim's breath.\n\nSection::::Effects.\n", "Tubular secretion occurs simultaneously during reabsorption of Filtrate. Substances, generally produced by body or the by-products of cell metabolism that can become toxic in high concentration, and some drugs (if taken). These all are secreted into the lumen of renal tubule. Tubular secretion can be either active or passive or co-transport.\n\nSubstances mainly secreted into renal tubule are; H+, K+, NH3, urea, creatinine, histamine and drugs like penicillin. \n", "Hyperventilation syndrome – caused by shallow breathing and a reduction of carbon dioxide level in the blood which leads to an increased pH in blood. Patient can feel tingling sensation in the hands and feet, and sometimes experience chest pressure and light-headedness. 
Prevention can be achieved by reassuring the patient and dictating the rhythm of breathing.\n\nToxicity – usually caused by overdose or intravascular injection, which causes a short-lived toxic concentration in the blood circulation. Preventing toxicity requires calculating the maximum dosage for the individual and using a self-aspirating syringe to avoid intravascular injection.\n", "BULLET::::- Multiple effects on the immune system. The sympathetic nervous system is the primary path of interaction between the immune system and the brain, and several components receive sympathetic inputs, including the thymus, spleen, and lymph nodes. However, the effects are complex, with some immune processes activated while others are inhibited.\n\nBULLET::::- In the arteries, constriction of blood vessels, causing an increase in blood pressure.\n\nBULLET::::- In the kidneys, release of renin and retention of sodium in the bloodstream.\n", "BULLET::::- Diaphoresis — sweating without a precipitating factor (e.g. increased ambient temperature)\n\nBULLET::::- Hypotension — low blood pressure\n\nBULLET::::- Hypertension — high blood pressure\n\nBULLET::::- Venous thromboembolism (probably \"rare\")\n\nSection::::Overdose.\n", "Serum albumin is commonly measured by recording the change in absorbance upon binding to a dye such as bromocresol green or bromocresol purple.\n\nSection::::Reference ranges.\n\nSerum albumin concentration is typically 35–50 g/L (3.5–5.0 g/dL).\n\nSection::::Pathology.\n\nSection::::Pathology.:Hypoalbuminemia.\n\nHypoalbuminemia means low blood albumin levels. This can be caused by:\n\nBULLET::::- Liver disease; cirrhosis of the liver is most common\n\nBULLET::::- Excess excretion by the kidneys (as in nephrotic syndrome)\n\nBULLET::::- Excess loss in bowel (protein-losing enteropathy, e.g., Ménétrier's disease)\n\nBULLET::::- Burns (plasma loss in the absence of skin barrier)\n\nBULLET::::- Redistribution (hemodilution [as in pregnancy], increased vascular permeability or decreased lymphatic clearance)\n", "Blood vessels do not actively engage in the transport of blood (they have no appreciable peristalsis). Blood is propelled through arteries and arterioles through pressure generated by the heartbeat.\n\nBlood vessels also transport red blood cells which contain the oxygen necessary for daily activities. The amount of red blood cells present in your vessels has an effect on your health. Hematocrit tests can be performed to calculate the proportion of red blood cells in your blood. Higher proportions result in conditions such as dehydration or heart disease while lower proportions could lead to anemia and long-term blood loss.\n", "The ascitic white blood cell count can help determine if the ascites is infected. A count of 250 WBC per cubic millimeter or higher is considered diagnostic for spontaneous bacterial peritonitis. Cultures of the fluid can be taken, but the yield is approximately 40% (72-90% if blood culture bottles are used).\n\nSection::::Contraindications.\n\nMild hematologic abnormalities do not increase the risk of bleeding. 
The risk of bleeding may be increased if:\n\nBULLET::::- prothrombin time is greater than 21 seconds\n\nBULLET::::- international normalized ratio is greater than 1.6\n\nBULLET::::- platelet count is less than 50,000 per cubic millimeter.\n\nAn absolute contraindication is an acute abdomen that requires surgery.\n\nRelative contraindications are:\n", "Studies suggest that the presence of various inflammatory factors during episodes of SCLS may explain the temporarily abnormal permeability of the endothelial cells lining the inner surface of the capillaries. These include transient spikes in monocyte- and macrophage-associated inflammatory mediators and temporary increases in the proteins vascular endothelial growth factors (VEGF) and angiopoietin-2. The impairment of endothelial cells in laboratory conditions provoked by serum taken from patients who were having episodes of SCLS is also suggestive of biochemical factors at work.\n", "Blood vessels also transport red blood cells which contain the oxygen necessary for daily activities. The amount of red blood cells present in your vessels has an effect on your health. Hematocrit tests can be performed to calculate the proportion of red blood cells in your blood. Higher proportions result in conditions such as dehydration or heart disease while lower proportions could lead to anemia and long-term blood loss.\n", "The main intravascular fluid in mammals is blood, a complex mixture with elements of a suspension (blood cells), colloid (globulins), and solutes (glucose and ions). The blood represents both the intracellular compartment (the fluid inside the blood cells) and the extracellular compartment (the blood plasma). The other intravascular fluid is lymph. It too represents both the intracellular compartment (the fluid inside its lymphocytes) and the extracellular compartment (the noncellular matrix of the lymph, which is roughly equivalent to serum). The average volume of plasma in the average (70 kg) male is approximately 3.5 liters. The volume of the intravascular compartment is regulated in part by hydrostatic pressure gradients, and by reabsorption by the kidneys.\n", "Anginal equivalent\n\nAn anginal equivalent is a symptom such as shortness of breath (dyspnea), diaphoresis (sweating), extreme fatigue, or pain at a site other than the chest, occurring in a patient at high cardiac risk. Anginal equivalents are considered to be symptoms of myocardial ischemia. Anginal equivalents are considered to have the same importance as angina pectoris in patients presenting with elevation of cardiac enzymes or certain EKG changes which are diagnostic of myocardial ischemia.\n", "Presently, AIDS presents a problem. Although it is difficult to contract it by a single puncture incident (the overall personal risk has been estimated to be 0.11%), at least one case has been reported among pathologists.\n\nThe continuous respiratory exposure to formaldehyde, used to preserve cadavers, is also an occupational risk of prosectors as well as medical students, anatomists and pathologists. Inhaled formaldehyde can irritate the eyes and mucous membranes, resulting in watery eyes, headache, a burning sensation in the throat, and difficulty breathing. Formaldehyde is listed as a potential human carcinogen.\n\nSection::::Famous prosectors.\n\nBULLET::::- Jean Zuléma Amussat\n", "At sufficient concentrations, blood agents can quickly saturate the blood and cause death in a matter of minutes or seconds. They cause powerful gasping for breath, violent convulsions and a painful death that can take several minutes. 
The immediate cause of death is usually respiratory failure.\n", "Damage to the valves and endocardium can be caused by:\n\nBULLET::::- Altered, turbulent blood flow. The areas that fibrose, clot, or roughen as a result of this altered flow are known as jet lesions. Altered blood flow is more likely in high pressure areas, so ventricular septal defects or patent ductus arteriosus can create more susceptibility than atrial septal defects.\n\nBULLET::::- Catheters, electrodes, and other intracardiac prosthetic devices.\n\nBULLET::::- Solid particles from repeated intravenous injections.\n\nBULLET::::- Chronic inflammation. Examples include auto-immune mechanisms and degenerative valvular lesions.\n", "On rare occasions, blood products are contaminated with bacteria. This can result in a life-threatening infection known as transfusion-transmitted bacterial infection. The risk of severe bacterial infection is estimated at about 1 in 50,000 platelet transfusions, and 1 in 500,000 red blood cell transfusions. Blood product contamination, while rare, is still more common than actual infection. The reason platelets are more often contaminated than other blood products is that they are stored at room temperature for short periods of time. Contamination is also more common with longer duration of storage, especially if that means more than 5 days. Sources of contaminants include the donor's blood, donor's skin, phlebotomist's skin, and containers. Contaminating organisms vary greatly, and include skin flora, gut flora, and environmental organisms. There are many strategies in place at blood donation centers and laboratories to reduce the risk of contamination. A definite diagnosis of transfusion-transmitted bacterial infection includes the identification of a positive culture in the recipient (without an alternative diagnosis) as well as the identification of the same organism in the donor blood.\n", "The use of tests for fecal occult blood in disorders of the mouth, nasal passages, esophagus, lungs and stomach, while analogous to fecal testing, is often discouraged, due to technical considerations including poorly characterized test performance characteristics such as sensitivity, specificity, and analytical interference. However, chemical confirmation that coloration is due to blood rather than coffee, beets, medications, or food additives can be of significant clinical assistance.\n\nSection::::Marathon runners.\n", "Section::::Factors influencing absorption.\n\nAlong with inhalation, ingestion and injection, dermal absorption is a route of exposure for bioactive substances including medications. Absorption of substances through the skin depends on a number of factors:\n\nBULLET::::- Concentration\n\nBULLET::::- Molecular weight of the molecule\n\nBULLET::::- Duration of contact\n\nBULLET::::- Solubility of medication\n\nBULLET::::- Physical condition of the skin\n\nBULLET::::- Part of the body exposed including the amount of hair on the skin\n", "The interstitial fluid is a reservoir and transportation system for nutrients and solutes distributing among organs, cells, and capillaries, for signaling molecules communicating between cells, and for antigens and cytokines participating in immune regulation. 
The composition and chemical properties of the interstitial fluid vary among organs and undergo changes in chemical composition during normal function, as well as during body growth, conditions of inflammation, and development of diseases, as in heart failure and chronic kidney disease.\n", "BULLET::::- Menstrual disorders (dysmenorrhea), cystitis, urethritis and mucosal irritation of female genitalia\n\nBULLET::::- Water retention (edema), bone marrow edema (BME), joint pain\n\nBULLET::::- Fatigue, seasickness, tiredness, sleep disorders\n\nBULLET::::- Confusion, nervousness, depressive moods\n\nSection::::Metabolism.\n" ]
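The numeric cutoffs quoted in the paracentesis passages above are simple enough to express as a checklist. The functions below are purely an illustration of those stated thresholds, not clinical software:

```python
def sbp_suggested(ascitic_wbc_per_mm3: float) -> bool:
    # The quoted diagnostic threshold for spontaneous bacterial
    # peritonitis: 250 white blood cells per cubic millimetre or more.
    return ascitic_wbc_per_mm3 >= 250

def elevated_bleeding_risk(pt_seconds: float, inr: float,
                           platelets_per_mm3: float) -> bool:
    # The quoted paracentesis cutoffs: prothrombin time above 21 s,
    # INR above 1.6, or platelets below 50,000 per cubic millimetre.
    return pt_seconds > 21 or inr > 1.6 or platelets_per_mm3 < 50_000

print(sbp_suggested(300))                          # True
print(elevated_bleeding_risk(18.0, 1.2, 80_000))   # False
```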
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-11940
How is time affected by gravity?
> However, I still don't know if we have found any proof / observed it, or if it is just a very good theory. It is a very good theory **because** we have observed it countless times. A theory is a consistent set of laws and predictions that have been verified over and over again. Time runs slower close to large masses. This has been observed with: * GPS satellites and a couple of other satellites * Various spacecraft traveling through the solar system * Light leaving stars * Radio signals passing close to the Sun * Light moving up/down a few floors in a building * Clocks on mountains vs. clocks at lower altitude * Clocks on airplanes vs. clocks on the ground * Clocks on the same floor at different heights (quite recent, requires excellent clocks) * And a couple of other measurements I forgot
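For a sense of scale, the GPS case in the list above can be estimated in a few lines using the standard weak-field approximations (gravitational rate shift ~ delta-phi/c^2, velocity time dilation ~ v^2/2c^2). This is a sketch with textbook constants, and it reproduces the roughly +38 microseconds per day correction usually quoted for GPS clocks:

```python
import math

G, c = 6.674e-11, 2.998e8        # SI units
M, R = 5.972e24, 6.371e6         # Earth's mass and mean radius
r_gps = 2.6561e7                 # GPS orbital radius, ~20,200 km altitude

# Gravitational effect: the satellite clock runs FAST relative to the ground.
grav = G * M / c**2 * (1 / R - 1 / r_gps)   # fractional rate difference

# Velocity effect: the moving satellite clock runs SLOW (special relativity).
v = math.sqrt(G * M / r_gps)                # circular orbital speed
vel = v**2 / (2 * c**2)

day = 86400
print(f"gravity:  +{grav * day * 1e6:.1f} microseconds/day")   # ~ +45.7
print(f"velocity: -{vel * day * 1e6:.1f} microseconds/day")    # ~  -7.2
print(f"net:      +{(grav - vel) * day * 1e6:.1f} microseconds/day")  # ~ +38
```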
[ "A second, similar type of time travel is permitted by general relativity. In this type a distant observer sees time passing more slowly for a clock at the bottom of a deep gravity well, and a clock lowered into a deep gravity well and pulled back up will indicate that less time has passed compared to a stationary clock that stayed with the distant observer.\n", "To illustrate then, without accounting for the effects of rotation, proximity to Earth's gravitational well will cause a clock on the planet's surface to accumulate around 0.0219 fewer seconds over a period of one year than would a distant observer's clock. In comparison, a clock on the surface of the sun will accumulate around 66.4 fewer seconds in one year.\n\nSection::::Circular orbits.\n", "BULLET::::- 1=\"m\" is the geometrized mass of the Earth, \"m\" = \"GM\"/\"c\",\n\nBULLET::::- \"M\" is the mass of the Earth,\n\nBULLET::::- \"G\" is the gravitational constant.\n\nTo demonstrate the use of the proper time relationship, several sub-examples involving the Earth will be used here.\n", "As described above, a time coordinate can to a limited extent be illustrated by the proper time of a clock that is notionally infinitely far away from the objects of interest and at rest with respect to the chosen reference frame. This notional clock, because it is outside all gravity wells, is not influenced by gravitational time dilation. The proper time of objects within a gravity well will pass more slowly than the coordinate time even when they are at rest with respect to the coordinate reference frame. Gravitational as well as motional time dilation must be considered for each object of interest, and the effects are functions of the velocity relative to the reference frame and of the gravitational potential as indicated in ().\n", "Galileo's experimental setup to measure the literal \"flow of time\", in order to describe the motion of a ball, preceded Isaac Newton's statement in his Principia:\n\nThe Galilean transformations assume that time is the same for all reference frames.\n\nSection::::Conceptions of time.:Newton's physics: linear time.\n\nIn or around 1665, when Isaac Newton (1643–1727) derived the motion of objects falling under gravity, the first clear formulation for mathematical physics of a treatment of time began: linear time, conceived as a \"universal clock\".\n", "Section::::Outside a non-rotating sphere.\n\nA common equation used to determine gravitational time dilation is derived from the Schwarzschild metric, which describes space-time in the vicinity of a non-rotating massive spherically symmetric object. The equation is\n\nwhere\n\nBULLET::::- formula_18 is the proper time between events A and B for a slow-ticking observer within the gravitational field,\n", "Gravitational time dilation\n\nGravitational time dilation is a form of time dilation, an actual difference of elapsed time between two events as measured by observers situated at varying distances from a gravitating mass. The higher the gravitational potential (the farther the clock is from the source of gravitation), the faster time passes. Albert Einstein originally predicted this effect in his theory of relativity and it has since been confirmed by tests of general relativity.\n", "This has been demonstrated by noting that atomic clocks at differing altitudes (and thus different gravitational potential) will eventually show different times. 
The effects detected in such Earth-bound experiments are extremely small, with differences being measured in nanoseconds. Relative to Earth's age in billions of years, Earth's core is effectively 2.5 years younger than its surface. Demonstrating larger effects would require greater distances from the Earth or a larger gravitational source.\n", "Section::::Conceptions of time.:Galileo: the flow of time.\n\nIn 1583, Galileo Galilei (1564–1642) discovered that a pendulum's harmonic motion has a constant period, which he learned by using his pulse to time the motion of a swaying lamp during Mass at the cathedral of Pisa.\n\nIn his \"Two New Sciences\" (1638), Galileo used a water clock to measure the time taken for a bronze ball to roll a known distance down an inclined plane; this clock was \n", "Subsequently, Einstein worked on a general theory of relativity, which is a theory of how gravity interacts with spacetime. Instead of viewing gravity as a force field acting in spacetime, Einstein suggested that it modifies the geometric structure of spacetime itself. According to the general theory, time goes more slowly at places with lower gravitational potentials and rays of light bend in the presence of a gravitational field. Scientists have studied the behaviour of binary pulsars, confirming the predictions of Einstein's theories, and non-Euclidean geometry is usually used to describe spacetime.\n\nSection::::Mathematics.\n", "Clocks that are far from massive bodies (or at higher gravitational potentials) run more quickly, and clocks close to massive bodies (or at lower gravitational potentials) run more slowly. For example, considered over the total time-span of Earth (4.6 billion years), a clock set at the peak of Mount Everest would be about 39 hours ahead of a clock set at sea level. This is because gravitational time dilation is manifested in accelerated frames of reference or, by virtue of the equivalence principle, in the gravitational field of massive objects.\n", "Section::::Gravitational time dilation.\n\nGravitational time dilation is experienced by an observer who, at a certain altitude within a gravitational potential well, finds that his local clocks measure less elapsed time than identical clocks situated at higher altitude (and which are therefore at higher gravitational potential).\n", "Section::::The natural world.\n", "Today, it is known that the opposite is true: the Earth goes around the Sun. The Aristotelian and Ptolemaic ideas about the position of the stars and Sun were disproved in 1609. The first person to present a detailed argument that the Earth revolves around the Sun was the Polish priest Nicholas Copernicus, in 1514. Nearly a century later, Galileo Galilei, an Italian scientist, and Johannes Kepler, a German scientist, studied how the moons of some planets moved in the sky, and used their observations to validate Copernicus's thinking. \n", "That is, the stronger the gravitational field (and, thus, the larger the acceleration), the more slowly time runs. The predictions of time dilation are confirmed by particle acceleration experiments and cosmic ray evidence, where moving particles decay more slowly than their less energetic counterparts. Gravitational time dilation gives rise to the phenomenon of gravitational redshift and Shapiro signal travel time delays near massive objects such as the sun. 
The Global Positioning System must also adjust signals to account for this effect.\n", "BULLET::::- Time dilation in a gravitational field is equal to time dilation in far space, due to a speed that is needed to escape that gravitational field. Here is the proof.\n\nSection::::Experimental confirmation.\n\nGravitational time dilation has been experimentally measured using atomic clocks on airplanes. The clocks aboard the airplanes were slightly faster than clocks on the ground. The effect is significant enough that the Global Positioning System's artificial satellites need to have their clocks corrected.\n\nAdditionally, time dilations due to height differences of less than one metre have been experimentally verified in the laboratory.\n", "Both Newton and Galileo,\n\nas well as most people up until the 20th century, thought that time was the same for everyone everywhere.\n", "Time dilation refers to the expansion or contraction in the rate at which time passes, and was the subject of the Gravity Probe A experiment. Under Einstein's theory of general relativity, matter distorts the surrounding spacetime, so that space gets bent similarly to the way a sheet of fabric would bend if a bowling ball were dropped in the middle of the sheet. But the distortion manifests itself in the time direction as well: time would appear for a distant observer to flow more slowly in the vicinity of a massive object. For example, the metric, surrounding a spherically symmetric gravitating body, has a smaller coefficient at \"dt\"^2 closer to the body, which means slower rate of time flow there. \n", "Special relativity has modified the notion of time. But from a fixed Lorentz observer's viewpoint time remains a distinguished, absolute, external, global parameter. The Newtonian notion of time essentially carries over to special relativistic systems, hidden in the spacetime structure.\n\nSection::::Overturning of absolute time in general relativity.\n\nThough classically spacetime appears to be an absolute background, general relativity reveals that spacetime is actually dynamical; gravity is a manifestation of spacetime geometry. Matter reacts with spacetime: \n\nAlso, spacetime can interact with itself (e.g. gravitational waves). The dynamical nature of spacetime has a vast array of consequences.\n", "Section::::Specifics.:Speed of gravity.\n\nIn December 2012, a research team in China announced that it had produced measurements of the phase lag of Earth tides during full and new moons which seem to prove that the speed of gravity is equal to the speed of light. This means that if the Sun suddenly disappeared, the Earth would keep orbiting it normally for 8 minutes, which is the time light takes to travel that distance. The team's findings were released in the Chinese Science Bulletin in February 2013.\n", "BULLET::::- According to the general theory of relativity, gravitational time dilation is copresent with the existence of an accelerated reference frame. Additionally, all physical phenomena in similar circumstances undergo time dilation equally according to the equivalence principle used in the general theory of relativity.\n", "The General Theory of Relativity explains how the path of a ray of light is affected by 'gravity', which according to Einstein is a mere illusion in contrast to Newton's views. It is spacetime curvature, where light moves in a straight path in 4D but which is seen as a curve in 3D. These straight line paths are Geodesics. 
The Twin paradox is a thought experiment in Special relativity involving identical twins; it considers that two twins can age differently if they move at different speeds relative to each other, or even if they stay at different places where spacetime curvature is different. Special relativity is based upon a fixed arena of space and time where events take place, whereas general relativity is dynamic: matter and energy can change spacetime curvature, which gives rise to the expanding Universe. Hawking and Roger Penrose worked upon this and later proved using general relativity that if the Universe had a beginning then it also must have an end.\n", "The time constant \"T\" depends only on the product \"G\"ρ, so if we expand that we get \"T\" = √(3π/(\"G\"ρ)), which depends only on the gravitational constant \"G\" and ρ, the density of the planet. The size of the planet is immaterial; the journey time is the same if the density is the same.\n\nSection::::In fiction.\n\nThe 1914 book \"Tik-Tok of Oz\" has a tube that passed from Oz, through the center of the earth, emerging in the country of the Great Jinjin, Tittiti-Hoochoo.\n", "General relativity has enjoyed much success because of the way its predictions of phenomena which are not called for by the older theory of gravity have been regularly confirmed. For example:\n\nBULLET::::- General relativity accounts for the anomalous perihelion precession of the planet Mercury.\n\nBULLET::::- The prediction that time runs slower at lower potentials has been confirmed by the Pound–Rebka experiment, the Hafele–Keating experiment, and the GPS.\n", "On the other hand, the crew on board the spaceship also perceives the observer as slowed down and flattened along the spaceship's direction of travel, because both are moving at very nearly the speed of light relative to each other. Because the outside universe appears flattened to the spaceship, the crew perceives themselves as quickly traveling between regions of space that (to the stationary observer) are many light years apart. This is reconciled by the fact that the crew's perception of time is different from the stationary observer's; what seems like seconds to the crew might be hundreds of years to the stationary observer. In either case, however, causality remains unchanged: the past is the set of events that can send light signals to an entity and the future is the set of events to which an entity can send light signals.\n" ]
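The figures quoted in the passages above (a clock losing about 0.0219 seconds per year at Earth's surface, and about 66.4 seconds per year at the Sun's surface) can be sanity-checked with the weak-field approximation Δτ/τ ≈ GM/(rc^2). A minimal sketch, assuming standard textbook constants rather than anything given in the passages:

```python
# Sanity check of the quoted clock-lag figures using the weak-field
# approximation delta_tau/tau ~ GM/(r*c^2). Constants are standard
# textbook values, assumed here rather than taken from the passages.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
YEAR = 365.25 * 86400  # seconds in one Julian year

def seconds_lost_per_year(mass_kg, radius_m):
    """Seconds a surface clock falls behind a distant observer's clock per year."""
    return G * mass_kg / (radius_m * c**2) * YEAR

print(seconds_lost_per_year(5.972e24, 6.371e6))  # Earth: ~0.022 s
print(seconds_lost_per_year(1.989e30, 6.957e8))  # Sun:   ~67 s
```

The Earth figure lands on ~0.022 s, matching the passage almost exactly; the Sun comes out near 67 s, with the small gap from the quoted 66.4 s attributable to the choice of constants.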
[]
[]
[ "normal" ]
[ "Time is affected by gravity." ]
[ "false presupposition", "normal" ]
[ "Time being affected by gravity is a theory which has been observed countless times." ]
2018-17379
How is it possible for steel wool to burn?
Burning is combining with oxygen to reach a lower energy state, releasing energy. Hydrogen burns, combining with oxygen to get H2O (water). Steel is mostly iron. Iron combines with oxygen to make iron oxide (rust). This is happening slowly all the time as, say, an old junker car rusts away. Steel wool is very very very thin: lots of surface area and a low volume, so there's lots of surface to react with oxygen and turn into iron oxide. Once you get it hot enough (ignition temperature) it starts to burn, and because it's thin it reacts quickly (compared to a rusty spoon) and burns up. A BIG piece of steel will burn too, but you need a LOT of heat to get the whole thing up to ignition temperature.
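The "lots of surface, little volume" point is easy to quantify: for a long cylinder, the surface-area-to-volume ratio scales as 2/r. A rough sketch, with the filament and rod radii as assumed ballpark values rather than measurements:

```python
# Why thin filaments ignite and burn readily: for a long cylinder the
# surface-area-to-volume ratio is (2*pi*r*L)/(pi*r^2*L) = 2/r, so it
# grows as the radius shrinks. Radii below are assumed ballpark values.
def surface_to_volume(radius_m):
    return 2.0 / radius_m  # m^2 of surface per m^3 of metal

fiber = surface_to_volume(25e-6)  # ~25 micron steel-wool filament
rod = surface_to_volume(5e-3)     # ~5 mm bar, e.g. a spoon handle
print(f"filament: {fiber:,.0f} m^2 per m^3")
print(f"rod:      {rod:,.0f} m^2 per m^3")
print(f"the filament has {fiber / rod:.0f}x more reacting surface per unit metal")
```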
[ "When steel wool is heated or allowed to rust, it increases in mass due to the combination of oxygen with iron.\n\nThe fine cross-section of steel wool makes it combustible in air. \n\nLight painting, where many sparks are released, is one application.\n\nVery fine steel wool can also be used as tinder in emergency situations, as it burns even when wet and can be ignited by fire, a spark, or by connecting a battery to produce joule heating.\n\nSection::::Grades.\n", "BULLET::::- PCW wools are not classified; self-classification led to the conclusion that PCW are not hazardous\n", "As soon as steam enters, the yarns quantity of moisture rises at once, caused by the heating of the yarn and by steam condensation. According to Speakmann the following phenomena can be seen in the stretched woolen fiber: The cystine side chains are subjected to a hydrolysis at the sulphur bridge, where cystine is dissolved into cysteine and a not yet isolated sulphonic acid.\n", "Boiled wool is a type of felted wool, and is similar to non-woven wool felt. These processes date at least as far back as the Middle Ages. The word \"felt\" itself comes from West Germanic \"feltaz\". Boiled/felted wool is characteristic of the traditional textiles of South America and Tyrolean Austria. It is produced industrially around the world.\n\nSection::::Process.\n", "Boiled wool\n\nBoiled wool is a type of fabric primarily used in creating berets, scarves, vests, cardigans, coats, and jackets. To create this fabric, knit wool or wool-blend fabrics are agitated with hot water in a process called fulling. This process shrinks the fabric and results in a dense felted fabric that resists fraying and further shrinkage.\n\nSection::::Origins.\n", "Section::::Safety of material.:Crystalline silica.\n\nAmorphous high-temperature mineral wool (AES and ASW) are produced from a molten glass stream which is aerosolised by a jet of high-pressure air or by letting the stream impinge onto spinning wheels. The droplets are drawn into fibres; the mass of both fibres and remaining droplets cool very rapidly so that no crystalline phases may form.\n", "Steel wool is commonly used by woodworkers, metal craftsmen, and jewelers to clean and smooth working surfaces and give them shine.\n\nHowever, when used on oak, remaining traces of iron may react with tannins in the wood to produce blue or black iron stain, and when used on aluminum, brass, or other non-ferrous metal surfaces may cause after-rust which will dull and discolor the surface. Bronze wool and stainless steel wool will not cause these undesirable effects.\n", "Section::::The chemical process.\n\nThere are completely different behaviors depending on the kind of yarn material. Much is known about the steaming of woolen yarns but more research is needed on the steaming behaviour of artificial fibers and cotton.\n\nSection::::The chemical process.:Wool.\n", "The old mill now owned by the Bishops had been built in 1893 and had been a wool scouring plant where raw wool is scrubbed and packed before shipping out to the textile mills. In 1895 the mill was enlarged and converted into a textile mill and in 1896 began making Native-American trade blankets—geometric patterned robes (unfringed blankets) for Native-American men and shawls (fringed blankets) for Native-American women in the area—the Umatilla, Cayuse and Walla Walla tribes. That business eventually failed and the plant stood idle until the Bishop family purchased it. 
When the Bishops assumed ownership, they built a new mill with the help of the town of Pendleton, which issued bonds for the mill's construction.\n", "Slag wool was first made in 1840 in Wales by Edward Parry, \"but no effort appears to have been made to confine the wool after production; consequently it floated about the works with the slightest breeze, and became so injurious to the men that the process had to be abandoned\". A method of making mineral wool was patented in the United States in 1870 by John Player and first produced commercially in 1871 at Georgsmarienhütte in Osnabrück Germany. The process involved blowing a strong stream of air across a falling flow of liquid iron slag which was similar to the natural occurrence of fine strands of volcanic slag from Kilauea called Pele's hair created by strong winds blowing apart the slag during an eruption. \n", "In after-use high-temperature mineral wool crystalline silica crystals are embedded in a matrix composed of other crystals and glasses. Experimental results on the biological activity of after-use high-temperature mineral wool have not demonstrated any hazardous activity that could be related to any form of silica they may contain.\n\nSection::::See also.\n\nBULLET::::- Risk and Safety Statements\n\nBULLET::::- Basalt fiber, a mineral fiber having high tensile strength\n\nBULLET::::- Asbestos, a mineral that is naturally fibrous\n\nBULLET::::- Pele's hair\n\nBULLET::::- Glass wool\n\nSection::::External links.\n\nBULLET::::- Statistics Canada documents on shipments of mineral wool in Canada\n", "What makes wool a unique product is that it is essentially flame resistant. The combination of sulfur and nitrogen provides a blend of materials that provide a flame-resistant product. The reason wool is used for many types of heat protection is that it has a high ignition temperature. This is defined as the temperature at which an item will produce a flame. Wool can be heated to over 1,000 degrees Fahrenheit before this fabric ignites. Even when the fabric comes in contact with flames it does not disseminate the flame. This provides an even greater protective quality with the wool's high ignition temperature and the inability for flames to spread throughout the fiber. When wool is heated to a certain degree, it begins to \"char\" on the outside. This can provide a protective outer layer on the outside fabric. When the char on the outside of the fabric is consistent with the original properties of the wool, it can produce a safer version of the product. When the heat is directly applied to the fabric, the \"char\" forms a semi-liquid state which can be wiped off the fabric providing no evidence of the heat contact.\n", "Natural sand and recycled glass are mixed and heated to 1,450 °C, to produce glass. The fiberglass is usually produced by \n", "Glass wool is an insulating material made from fibres of glass arranged using a binder into a texture similar to wool. The process traps many small pockets of air between the glass, and these small air pockets result in high thermal insulation properties. Glass wool is produced in rolls or in slabs, with different thermal and mechanical properties. It may also be produced as a material that can be sprayed or applied in place, on the surface to be insulated. The modern method for producing glass wool is the invention of Games Slayter working at the Owens-Illinois Glass Co. (Toledo, Ohio). 
He first applied for a patent for a new process to make glass wool in 1933.\n", "Section::::Shroud of Turin.:Hypothesis on image origin (Maillard reaction).\n\nThe Maillard reaction is a form of non-enzymatic browning involving an amino acid and a reducing sugar. The cellulose fibers of the shroud are coated with a thin carbohydrate layer of starch fractions, various sugars, and other impurities.\n", "Section::::Properties.\n", "Steel wool\n\nSteel wool, also known as iron wool, wire wool, steel wire or wire sponge, is a bundle of very fine and flexible sharp-edged steel filaments. It was described as a new product in 1896. It is used as an abrasive in finishing and repair work for polishing wood or metal objects, cleaning household cookware, cleaning windows, and sanding surfaces.\n\nSteel wool is made from low-carbon steel in a process similar to broaching, where a heavy steel wire is pulled through a toothed die that removes thin, sharp, wire shavings.\n\nSection::::Uses.\n", "Though the individual fibers conduct heat very well, when pressed into rolls and sheets, their ability to partition air makes them excellent insulators and sound absorbers. Though not immune to the effects of a sufficiently hot fire, the fire resistance of fiberglass, stone wool, and ceramic fibers makes them common building materials when passive fire protection is required, being used as spray fireproofing, in stud cavities in drywall assemblies and as packing materials in firestops.\n", "At various times in its history, the mill produced broadcloth, satinet, cashmere, doeskin, kersey and cloaking. The mill was used to spin and card wool until 1906, when the owners turned to buying yarn instead.\n", "The use of high-temperature mineral wool enables a more lightweight construction of industrial furnaces and other technical equipment as compared to other methods such as fire bricks, due to its high heat resistance capabilities per weight, but has the disadvantage of being more expensive than other methods.\n\nSection::::Safety of material.\n\nThe International Agency for Research on Cancer (IARC) has reviewed the carcinogenicity of man-made mineral fibres in October 2002.\n", "Classification temperature is the temperature at which a certain amount of linear shrinkage (usually two to four percent) is not exceeded after a 24‑hour heat treatment in an electrically heated laboratory oven in a neutral atmosphere. Depending on the type of product, the value may not exceed two percent for boards and shaped products and four percent for mats and papers.\n", "The colours and methods employed are the same as for wool, except that in the case of silk no preparation of the material is required before printing, and ordinary dry steaming is preferable to damp steaming.\n", "Wool (disambiguation)\n\nWool is the fibre commonly produced from sheep\n\nWool (the fiber) refers to one of the following:\n\nBULLET::::- Alpaca wool, derived from fur of alpacas\n\nBULLET::::- Angora wool, derived from fur of rabbits\n\nBULLET::::- Cashmere wool, derived from fur of goats\n\nBULLET::::- Llama wool, derived from fur of llamas\n\nBULLET::::- Wool, the commonly used term in the UK for yarn\n\nBULLET::::- Cotton wool, the UK term for cotton linters\n\nBULLET::::- Steel wool, an abrasive derived from steel\n\nBULLET::::- Bronze wool, an abrasive derived from bronze\n\nBULLET::::- Glass wool, an insulating material derived from fiberglass\n", "He was a Catholic priest of St. 
Stanislaus Kostka Roman Catholic Church in Chicago, then the largest Polish church in the country, with 40,000 in the parish. In his early twenties he began experimenting with the cloth, using steel shavings, moss, hair. In his research, he came upon the work of Dr. George E. Goodfellow, who had written about the bullet resistive properties of silk. \n", "Boiled wool fabric is created commercially by first knitting wool yarns to create a fabric of uniform thickness. The yarns and fabric may either be dyed or left natural, and the fabric may include designs or embellishments. After knitting, the fabric is fulled by boiling and agitating in hot water and an alkaline solution like soap. The agitation causes the scaly surface of wool fibers to stick together, producing a felted fabric. The result is a tighter and more dense material that is up to 50% smaller in all directions compared to the pre-felted fabric. Boiled wool is warm, durable, and resistant to water and wind.\n" ]
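The first passage above notes that steel wool gains mass as it burns or rusts because oxygen combines with the iron. A quick stoichiometric check of that mass gain, assuming the product is Fe2O3 (other iron oxides give similar but slightly different factors) and using standard atomic masses:

```python
# Mass-gain check for iron burning/rusting to Fe2O3 (4 Fe + 3 O2 -> 2 Fe2O3),
# using standard atomic masses in g/mol. The product choice (Fe2O3) is an
# assumption for illustration.
FE, O = 55.845, 15.999

iron_mass = 2 * FE            # iron content of one Fe2O3 formula unit
oxide_mass = 2 * FE + 3 * O   # the full formula unit, oxygen included
print(f"mass gain factor: {oxide_mass / iron_mass:.3f}")  # ~1.43x the metal's mass
```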
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-00445
You often hear, when deleting data, that the erased space isn't really empty but is marked for overwriting. Why isn't it just deleted straight away?
Takes too much time. Instead, you erase some pointer values that identify the segments where the data is stored, and overwrite just those. You are throwing away the map to the data, which means you cannot find it, which effectively means it’s gone. You have achieved that by overwriting a few small data fields rather than erasing all the data, which would take ages. Unnecessary in most cases.
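A toy model of the "throw away the map" idea: deletion drops the filename-to-blocks pointer and marks the blocks reusable, while the bytes themselves stay put until something new claims them. All names here are invented for illustration; real filesystems are far more involved:

```python
# Toy model of fast deletion: the filesystem forgets the map (name -> blocks)
# and marks the blocks free, but the bytes stay on "disk" until reused.
blocks = {}     # block number -> bytes physically on disk
directory = {}  # filename -> list of block numbers
free_list = []  # block numbers available for new writes

def write_file(name, data, at):
    blocks[at] = data
    directory[name] = [at]

def delete_file(name):
    # Cheap: drop the pointer and recycle the block numbers. Nothing is wiped.
    free_list.extend(directory.pop(name))

write_file("secret.txt", b"hunter2", at=7)
delete_file("secret.txt")
print("secret.txt" in directory)  # False -- the file looks gone
print(blocks[7])                  # b'hunter2' -- the bytes are still there
```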
[ "When data is deleted from storage devices, the references to the data are removed from the directory structure. The space can then be used, or overwritten, with data from other files or computer functions. The deleted data itself is not immediately removed from the physical drive and often exists as a number of disconnected fragments. This data, so long as it is not overwritten, can be recovered.\n", "One challenge with an overwrite is that some areas of the disk may be inaccessible, due to media degradation or other errors. Software overwrite may also be problematic in high-security environments which require stronger controls on data commingling than can be provided by the software in use. The use of advanced storage technologies may also make file-based overwrite ineffective (see the discussion below under \"Complications\").\n", "BULLET::::- Error correction and loss of information: The most challenging problem within data cleansing remains the correction of values to remove duplicates and invalid entries. In many cases, the available information on such anomalies is limited and insufficient to determine the necessary transformations or corrections, leaving the deletion of such entries as a primary solution. The deletion of data, though, leads to loss of information; this loss can be particularly costly if there is a large amount of deleted data.\n", "Section::::Differentiators.:Full disk overwriting.\n\nWhile there are many overwriting programs, only those capable of complete data erasure offer full security by destroying the data on all areas of a hard drive. Disk overwriting programs that cannot access the entire hard drive, including hidden/locked areas like the host protected area (HPA), device configuration overlay (DCO), and remapped sectors, perform an incomplete erasure, leaving some of the data intact. By accessing the entire hard drive, data erasure eliminates the risk of data remanence.\n", "Storage media may have areas which become inaccessible by normal means. For example, magnetic disks may develop new bad sectors after data has been written, and tapes require inter-record gaps. Modern hard disks often feature reallocation of marginal sectors or tracks, automated in a way that the operating system would not need to work with it. The problem is especially significant in solid state drives (SSDs) that rely on relatively large relocated bad block tables. Attempts to counter data remanence by overwriting may not be successful in such situations, as data remnants may persist in such nominally inaccessible areas.\n", "Even when an explicit deleted file retention facility is not provided or when the user does not use it, operating systems do not actually remove the contents of a file when it is deleted unless they are aware that explicit erasure commands are required, like on a solid-state drive. (In such cases, the operating system will issue the Serial ATA TRIM command or the SCSI UNMAP command to let the drive know to no longer maintain the deleted data.) Instead, they simply remove the file's entry from the file system directory, because this requires less work and is therefore faster, and the contents of the file—the actual data—remain on the storage medium. The data will remain there until the operating system reuses the space for new data. In some systems, enough filesystem metadata are also left behind to enable easy undeletion by commonly available utility software. 
Even when undelete has become impossible, the data, until it has been overwritten, can be read by software that reads disk sectors directly. Computer forensics often employs such software.\n", "Discarded computers, disk drives and media are also a potential source of plaintexts. Most operating systems do not actually erase anything—they simply mark the disk space occupied by a deleted file as 'available for use', and remove its entry from the file system directory. The information in a file deleted in this way remains fully present until overwritten at some later time when the operating system reuses the disk space. With even low-end computers commonly sold with many gigabytes of disk space and rising monthly, this 'later time' may be months later, or never. Even overwriting the portion of a disk surface occupied by a deleted file is insufficient in many cases. Peter Gutmann of the University of Auckland wrote a celebrated 1996 paper on the recovery of overwritten information from magnetic disks; areal storage densities have gotten much higher since then, so this sort of recovery is likely to be more difficult than it was when Gutmann wrote. \n", "The 1995 book \"Computer Contradictionary\" reports EWOM, or Erasable Write-Only Memory (an analogy of EPROM), a memory copyrighted by IBM (Irish Business Machines), which allows data to be written into it and then erased, for memory re-use.\n\nWith the explosive growth of the amount of video data available both online and in private use, there emerged a common joke that video tapes and other video media are \"write only memory\", because without efficient means of search and retrieval for video data archives, very little is viewed after recording.\n\nSection::::Other members of the family.\n", "The workings of undeletion depend on the file system on which the deleted file was stored. Some file systems, such as HFS, cannot provide an undeletion feature because no information about the deleted file is retained (except by additional software, which is not usually present). Some file systems, however, do not erase all traces of a deleted file, including FAT file systems:\n\nSection::::Mechanics.:FAT file systems.\n", "A process called undeleting allows the recreation of links to data that are no longer associated with a name. However, this process is not available on all systems and is often not reliable. When a file is deleted, it is added to a free space map for re-use. If a portion of the deleted file space is claimed by new data, undeletion will be unsuccessful, because some or all of the previous data will have been overwritten, and may result in cross-linking with the new data, leading to filesystem corruption. 
Additionally, deleted files on solid state drives may be erased at any time by the storage device for reclamation as free space.\n", "When a file is deleted, the meta-information about this file (filename, date/time, size, location of the first data block/cluster, etc.) is lost; e.g., in an ext3/ext4 filesystem, the names of deleted files are still present, but the location of the first data block is removed. This means the data is still present on the filesystem, but only until some or all of it is overwritten by new file data.\n", "Wear leveling can also defeat data erasure, by relocating blocks between the time when they are originally written and the time when they are overwritten. For this reason, some security protocols tailored to operating systems or other software featuring automatic wear leveling recommend conducting a free-space wipe of a given drive and then copying many small, easily identifiable \"junk\" files or files containing other nonsensitive data to fill as much of that drive as possible, leaving only the amount of free space necessary for satisfactory operation of system hardware and software. As storage and/or system demands grow, the \"junk data\" files can be deleted as necessary to free up space; even if the deletion of \"junk data\" files is not secure, their initial nonsensitivity reduces to near zero the consequences of recovery of data remanent from them.\n", "Whenever a member is deleted, the space it occupied is unusable for storing other data. Likewise, if a member is re-written, it is stored in a new spot at the back of the PDS and leaves wasted “dead” space in the middle. The only way to recover “dead” space is to perform frequent file compression. Compression, which is done using the IEBCOPY utility, \n", "Section::::Importance.\n\nInformation technology assets commonly hold large volumes of confidential data. Social security numbers, credit card numbers, bank details, medical history and classified information are often stored on computer hard drives or servers. These can inadvertently or intentionally make their way onto other media such as printers, USB, flash, Zip, Jaz, and REV drives.\n\nSection::::Importance.:Data breach.\n", "In a third scenario, files have been accidentally \"deleted\" from a storage medium by the users. Typically, the contents of deleted files are not removed immediately from the physical drive; instead, references to them in the directory structure are removed, and thereafter space the deleted data occupy is made available for later data overwriting. In the mind of end users, deleted files cannot be discoverable through a standard file manager, but the deleted data still technically exists on the physical drive. In the meantime, the original file contents remain, often in a number of disconnected fragments, and may be recoverable if not overwritten by other data files.\n", "For example, journaling file systems increase the integrity of data by recording write operations in multiple locations, and applying transaction-like semantics; on such systems, data remnants may exist in locations \"outside\" the nominal file storage location. Some file systems also implement copy-on-write or built-in revision control, with the intent that writing to a file never overwrites data in-place. 
Furthermore, technologies such as RAID and anti-fragmentation techniques may result in file data being written to multiple locations, either by design (for fault tolerance), or as data remnants.\n", "Another approach is offered by programs such as \"Norton GoBack\" (formerly \"Roxio GoBack\"): a portion of the hard disk space is set aside for file modification operations to be recorded in such a way that they may later be undone. This process is usually much safer in aiding recovery of deleted files than the undeletion operation as described below.\n\nSimilarly, file systems that support \"snapshots\" (like ZFS or btrfs), can be used to make snapshots of the whole file system at regular intervals (e.g. every hour), thus allowing recovery of files from an earlier snapshot.\n\nSection::::Limitations.\n", "The TRIM feature in many SSD devices, if properly implemented, will eventually erase data after it is deleted, but the process can take some time, typically several minutes. Many older operating systems do not support this feature, and not all combinations of drives and operating systems work.\n\nSection::::Complications.:Data in RAM.\n\nData remanence has been observed in static random-access memory (SRAM), which is typically considered volatile (\"i.e.\", the contents degrade with loss of external power). In one study, data retention was observed even at room temperature.\n", "A common method used to counter data remanence is to overwrite the storage media with new data. This is often called wiping or shredding a file or disk, by analogy to common methods of destroying print media, although the mechanism bears no similarity to these. Because such a method can often be implemented in software alone, and may be able to selectively target only part of the media, it is a popular, low-cost option for some applications. Overwriting is generally an acceptable method of clearing, as long as the media is writable and not damaged.\n", "Section::::Differentiators.:Standards.\n\nMany government and industry standards exist for software-based overwriting that removes the data. A key factor in meeting these standards is the number of times the data is overwritten. Also, some standards require a method to verify that all the data have been removed from the entire hard drive and to view the overwrite pattern. Complete data erasure should account for hidden areas, typically DCO, HPA and remapped sectors.\n", "Research from the Center for Magnetic Recording and Research, University of California, San Diego has uncovered problems inherent in erasing data stored on solid-state drives (SSDs). Researchers discovered three problems with file storage on SSDs:\n", "Recovery of fragmented files (after the first fragment) is therefore not normally possible by automatic processes, only by manual examination of each (unused) block of the disk. This requires detailed knowledge of the file system, as well as the binary format of the file type being recovered, and is therefore only done by recovery specialists or forensics professionals.\n\nSection::::Mechanics.:NTFS file systems.\n", "While a virtual tape library is very fast, the disk storage within is not designed to be removable, and does not usually involve physically removable external disk drives to be used for data archiving in place of tape. 
Since the disk storage is always connected to power and data sources and is never physically electrically isolated, it is vulnerable to potential damage and corruption due to nearby building or power grid lightning strikes.\n\nSection::::History.\n" ]
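Several of the passages above describe the overwriting ("wiping") countermeasure to data remanence. A minimal single-pass sketch of that idea; note that, for exactly the reasons the passages give (wear leveling, journaling, copy-on-write, remapped sectors), this does not guarantee the old bytes are destroyed on modern drives, so treat it as an illustration rather than a security tool:

```python
# Single-pass overwrite ("wiping") sketch: replace a file's bytes in place,
# flush to the device, then unlink. Wear leveling, journaling, and
# copy-on-write filesystems can leave the old blocks intact regardless.
import os

def shred_once(path):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))  # overwrite the whole file with random bytes
        f.flush()
        os.fsync(f.fileno())       # ask the OS to push the bytes to the device
    os.remove(path)                # only then drop the directory entry
```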
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-10552
How do chest compressions get the heart to start beating if stopped?
Chest compressions don't, generally speaking, get the heart to start beating. Neither do defibrillators, despite what you see on TV. If the heart has stopped beating, there is some serious defect which *needs to be corrected,* like extreme blood loss. The purpose of chest compressions is to maintain some degree of blood flow (you're manually doing the squeezing the heart is meant to do) until better medical procedures can be put in place, to try to prolong the life of tissues like those in the brain.
[ "The first person saved with this technique was recalled by Jude:\n\n\"She was rather an obese female who … went into cardiac arrest as a result of flurothane anesthetic. This woman had no blood pressure, no pulse, and ordinarily we would have opened up her chest. Instead, since we weren’t in the operating room, we applied external cardiac massage. Her blood pressure and pulse came back at once. We didn’t have to open her chest. They went ahead and did the operation on her, and she recovered completely.\"\n", "Section::::Mechanism of CPR blood flow.\n\nThe load-distributing band system, employing thoracic compressions, produces higher blood flow compared to CPR consisting of sternal compressions only. The potential to produce blood flow for a sudden cardiac arrest victim is in large part determined by the peak power of the compression. Factors determining the power of the compression are the force of the compression, the depth of the compression, and the duration that the compression is held at maximum dep\n\nSection::::Research.\n", "BULLET::::- The pressure on the chest is released, allowing the pulmonary vessels and the aorta to re-expand causing a further initial slight fall in stroke volume (20 to 23 seconds) due to decreased left atrial return and increased aortic volume, respectively. Venous blood can once more enter the chest and the heart, cardiac output begins to increase.\n\nBULLET::::4. Return of cardiac output\n", "Acute cardiac unloading is any maneuver, therapy, or intervention that decreases the power expenditure of the ventricle and limits the hemodynamic forces that lead to ventricular remodeling after insult or injury to the heart. This technique is being investigated as a therapeutic to aid after damage has occurred to the heart, such as after a heart attack. The theory behind this approach is that by simultaneously limiting the oxygen demand and maximizing oxygen delivery to the heart after damage has occurred, the heart is more fully able to recover. This is primarily achieved by using temporary minimally invasive mechanical circulatory support to supplant the pumping of blood by the heart. Using mechanical support decreases the workload of the heart, or unloads it.\n", "The formalised system of chest compression was really an accidental discovery made in 1958 by William Bennett Kouwenhoven, Guy Knickerbocker, and James Jude at Johns Hopkins University. They were studying defibrillation in dogs when they noticed that by forcefully applying the paddles to the chest of the dog, they could achieve a pulse in the femoral artery. Further meticulous experimentation involving dogs answered such basic questions as how fast to press, where to press, and how deep to press. This information gave them the belief that they were ready for human trials.\n", "Recovery from open-heart surgery begins with about 48 hours in an intensive care unit, where heart rate, blood pressure, and oxygen levels are closely monitored. Chest tubes are inserted to drain blood around the heart and lungs. After discharge from the hospital, compression socks may be recommended in order to regulate blood flow.\n\nSection::::Risks.\n\nThe advancement of cardiac surgery and cardiopulmonary bypass techniques has greatly reduced the mortality rates of these procedures. 
For instance, repairs of congenital heart defects are currently estimated to have 4–6% mortality rates.\n", "In modern protocols for lay persons, this step is omitted as it has been proven that lay rescuers may have difficulty in accurately determining the presence or absence of a pulse, and that, in any case, there is less risk of harm by performing chest compressions on a beating heart than failing to perform them when the heart is not beating. For this reason, lay rescuers proceed directly to cardiopulmonary resuscitation, starting with chest compressions, which is effectively artificial circulation. In order to simplify the teaching of this to some groups, especially at a basic first aid level, the C for Circulation is changed for meaning CPR or Compressions.\n", "In the 19th century, Doctor H. R. Silvester described a method (The Silvester Method) of artificial ventilation in which the patient is laid on their back, and their arms are raised above their head to aid inhalation and then pressed against their chest to aid exhalation. The procedure is repeated sixteen times per minute. This type of artificial ventilation is occasionally seen in films made in the early 20th century.\n", "When done by trained responders, 30 compressions interrupted by two breaths appears to have a slightly better result than continuous chest compressions with breaths being delivered while compressions are ongoing.\n\nThere is a higher proportion of patients who achieve spontaneous circulation (ROSC), where their heart starts beating on its own again, than ultimately survive to be discharged from hospital (see table above). \n\n59% of CPR survivors lived over a year; 44% lived over 3 years, based on a study of CPR done in 2000-2008.\n\nSection::::Consequences.\n", "While the heart is asystolic, there is no blood flow to the brain unless CPR or internal cardiac massage (when the chest is opened and the heart is manually compressed) is performed, and even then it is a small amount. After many emergency treatments have been applied but the heart is still unresponsive, it is time to consider pronouncing the patient dead. Even in the rare case that a rhythm reappears, if asystole has persisted for fifteen minutes or more, the brain will have been deprived of oxygen long enough to cause brain death.\n\nSection::::See also.\n\nBULLET::::- Agonal heart rhythm\n", "BULLET::::- A message is also sent via the vagus nerve to the main pacemaker of the heart to decrease the rate and volume of the heartbeat, typically by a third. In some cases there is evidence that this may escalate into asystole, a form of cardiac arrest that is difficult to treat. There is a dissenting view on the full extent how and when a person reaches a stage of permanent injury, but it is agreed that pressure on the vagus nerve causes changes to pulse rate and blood pressure and is dangerous in cases of carotid sinus hypersensitivity.\n", "\"Thirty-one physicians and medical students, and one nurse volunteered . . . Consent was very informed. All volunteers had to observe me ventilate anaesthetised and curarized patients without a tracheal tube. I sedated the volunteers and paralysed them for several hours each. Blood O2 and CO were analysed. I demonstrated the method to over 100 lay persons who were then asked to perform the method on the curarized volunteers.\"\n", "Once the procedure on the heart vessels (coronary artery bypass grafting) or inside the heart such as valve replacement or correction of congenital heart defect, etc. 
is over, the cross-clamp is removed and the isolation of the heart is terminated, so normal blood supply to the heart is restored and the heart starts beating again.\n", "Tension pneumothorax is usually treated with urgent needle decompression. This may be required before transport to the hospital, and can be performed by an emergency medical technician or other trained professional. The needle or cannula is left in place until a chest tube can be inserted. If tension pneumothorax leads to cardiac arrest, needle decompression is performed as part of resuscitation as it may restore cardiac output.\n\nSection::::Treatment.:Conservative.\n", "Intermittent pneumatic compression\n\nIntermittent pneumatic compression is a therapeutic technique used in medical devices that include an air pump and inflatable auxiliary sleeves, gloves or boots in a system designed to improve venous circulation in the limbs of patients who suffer edema or the risk of deep vein thrombosis (DVT) or pulmonary embolism (PE).\n", "The use of a self-expanding device that attaches to the external surface of the left ventricle has been suggested, yet still awaits FDA approval. When the heart muscle squeezes, energy is loaded into the device, which absorbs the energy and releases it to the left ventricle in the diastolic phase. This helps retain muscle elasticity.\n\nSection::::Prognosis.\n", "Respiratory pump - Intrapleural pressure decreases during inspiration and abdominal pressure increases, squeezing local abdominal veins, allowing thoracic veins to expand and increase blood flow towards the right atrium.\n\nSkeletal muscle pump - In the deep veins of the legs, surrounding muscles squeeze veins and pump blood back towards the heart. This occurs most notably in the legs. Once blood flows past valves it cannot flow backwards and therefore blood is “milked” towards the heart.\n\nSection::::See also.\n\nBULLET::::- Afterload\n\nBULLET::::- Cardiac output\n\nBULLET::::- Frank–Starling law of the heart\n\nBULLET::::- Passive leg raising test\n\nBULLET::::- Volume overload\n\nSection::::External links.\n", "Section::::Additional devices.\n\nWhile several adjunctive devices are available, none other than defibrillation, as of 2010, have consistently been found to be better than standard CPR for out-of-hospital cardiac arrest. These devices can be split into three broad groups: timing devices; devices that assist the rescuer in achieving the correct technique, especially depth and speed of compressions; and devices that take over the process completely.\n\nSection::::Additional devices.:Timing devices.\n", "In the event that the patient is not breathing normally, the current international guidelines (set by the International Liaison Committee on Resuscitation or ILCOR) indicate that chest compressions should be started.\n", "Section::::Concept.\n\nSimilar to the concept of elective cardiopulmonary bypass, used in open heart surgery, oxygenation and perfusion can be maintained with an ECMO device in patients undergoing cardiovascular collapse. In the setting of cardiac arrest, ECPR involves percutaneous cannulation of a femoral vein and artery, followed by the activation of the device, which subsequently maintains circulation until an appropriate recovery is made.\n", "Some argue that when pressure is applied to the carotid artery, the baroreceptors send a signal to the brain via the glossopharyngeal nerve and the heart via the vagus nerve. 
This signal tells the heart to reduce volume of blood per heartbeat, typically up to one-third, in order to further relieve high pressure. There is a slight chance of the rate dropping to zero, or flatline (asystole). However, several studies showed that choking out will result in a flat-line ECG for at least a few seconds in half of the subjects. This might suggest that choking out or syncope is not as safe as it was assumed to be previously.\n", "To achieve this, the patient is first placed on cardiopulmonary bypass. This device, otherwise known as the heart-lung machine, takes over the functions of gas exchange by the lung and blood circulation by the heart. Subsequently, the heart is isolated from the rest of the blood circulation by means of an occlusive cross-clamp placed on the ascending aorta proximal to the innominate artery. During this period of heart isolation, the heart is not receiving any blood flow, thus no oxygen for metabolism. As the cardioplegia solution distributes to the entire myocardium, the ECG will change and eventually asystole will ensue. Cardioplegia lowers the metabolic rate of the heart muscle, thereby preventing cell death during the ischemic period of time.\n", "Section::::Termination of contraction.\n", "The AutoPulse measures chest size and resistance before it delivers the unique combination of thoracic and cardiac chest compressions. The compression depth and force varies per patient. The chest displacement equals a 20% reduction in the anterior-posterior chest depth. The physiological duty cycle is 50%, and it runs in a 30:2, 15:2 or continuous compression mode, which is user-selectable, at a rate of 80 compressions-per-minute.\n\nSection::::Device operation.\n", "The technique involves inserting a small balloon directly into the patient’s aorta and inflating it. The balloon blocks the artery and temporarily stops the blood flow giving doctors time to operate. It maintains blood circulation in the brain and heart. However, the parts of the body below the balloon are cut off from the normal blood flow and this may result in short- or longer-term problems.\n" ]
[ "Chest compressions get the heart to start beating again.", "Chest compressions get the heart to keep beating if it's stopped. " ]
[ "Chest compressions don't usually get the heart to start beating, they maintain blood flow until better medical treatment is available.", "Chest compressions don't make the heart start beating again, they only manually allowing the affected body to maintain blood pressure. " ]
[ "false presupposition" ]
[ "Chest compressions get the heart to start beating again.", "Chest compressions get the heart to keep beating if it's stopped. " ]
[ "false presupposition", "false presupposition" ]
[ "Chest compressions don't usually get the heart to start beating, they maintain blood flow until better medical treatment is available.", "Chest compressions don't make the heart start beating again, they only manually allowing the affected body to maintain blood pressure. " ]
2018-17526
how do birds survive typhoons as for example Mangkhut?
A lot of them don’t survive. But a lot of them are more attuned to pressure fluctuations, and likely leave the area ahead of the arrival of the storm.
[ "BULLET::::- Typhoon Wanda (1962) – Strongest typhoon recorded in Hong Kong\n\nBULLET::::- Typhoon Hope (Ising; 1979) – One of the strongest typhoons that made its final landfall near Hong Kong.\n\nBULLET::::- Typhoon Ellen (Herming; 1983) – A powerful typhoon that took a similar track through the Philippines in September 1983, and one of the strongest typhoon in Hong Kong\n\nBULLET::::- Typhoon Zeb (Iliang; 1998) – An extremely powerful typhoon that made landfall in the same province of the Philippines\n", "The deadliest typhoon to impact the Philippines was Typhoon Haiyan, locally known as Yolanda, in November 2013, in which more than 6,300 lives were lost from its storm surges and powerful winds. Over 1,000 went missing and nearly 20,000 were injured. Winds reached in one–minute sustained and may have been the strongest storm in history in terms of wind speeds as wind speeds before the 1970s were too high to record.\n\nSection::::Strongest typhoons.:Typhoon Angela (Rosing).\n", "So far, a total of eleven people were killed while some were missing or injured in Japan by the typhoon.\n", "BULLET::::- Typhoon Megi (Juan; 2010) – Another powerful typhoon that made landfall in nearby Isabela province and affected South China and Taiwan\n\nBULLET::::- Typhoon Kalmaegi (Luis; 2014) – A weaker typhoon which made landfall in the same provinces that Mangkhut did, around the same time in 2014\n\nBULLET::::- Typhoon Haima (Lawin; 2016) – Similarly powerful typhoon which also made landfall in Cagayan\n\nBULLET::::- Typhoon Hato (Isang; 2017) – Most recent typhoon to affect Hong Kong and Macau prior to Mangkhut\n\nSection::::External links.\n\nBULLET::::- EMSR312: Super Typhoon Mangkhut over the Northern Philippines (damage assessment maps) – Copernicus Emergency Management Service\n", "Binh Bridge, a mayor bridge of Hai Phong was hit by three ships which were set loose by the typhoon. One ship, the Vinashin Orient, was stuck under the deck, damage it. The bridge was closed, await damage assessment.\n\nAfter moving inland, the remnants of Conson brought heavy rainfall to parts of northern Laos.\n\nSection::::Aftermath.\n\nSection::::Aftermath.:Philippines.\n", "A number of wild boar may have escaped from captivity during the storm, after enclosures were damaged by falling trees. These boar have since bred and established populations in woods across southern England.\n\nA more positive aspect could be found among some British gardeners; as Heather Angel wrote in the Royal Horticultural Society's \"Journal\":\n", "\"M. gallisepticum\" causes respiratory disease and weakens the immune system which makes the bird vulnerable to any disease that they come into contact with. Small bubbles will appear in the corners of the eyes and sinuses will swell up. Once infected, they are carriers for the disease for life. Some birds have good resistance to the disease while others may die; some become ill and recover and others may not show any symptoms at all. There is currently no risk to humans. For domestic animals, there is a high concern and there should be a prevention of any interaction between wild birds and domestic poultry. Wild bird species affected by the disease are infectious and are often found in close contact with domestic species.\n", "Guiuan in Eastern Samar was the point of Haiyan's first landfall, and was severely affected due to the typhoon's impacts. Nearly all structures in the township suffered at least partial damage, many of which were completely flattened. 
For several days following Haiyan's first landfall, the damage situation in the fishing town remained unclear due to lack of communication. However, the damage could finally be assessed after Philippine Air Force staff arrived in Guiuan on November 10.\n", "BULLET::::- 2000 North Indian Ocean cyclone season\n\nBULLET::::- South-West Indian Ocean cyclone seasons: 1999–2000, 2000–01\n\nBULLET::::- Australian region cyclone seasons: 1999–2000, 2000–01\n\nBULLET::::- South Pacific cyclone seasons: 1999–2000, 2000–01\n\nSection::::External links.\n\nBULLET::::- Japan Meteorological Agency\n\nBULLET::::- Satellite movie of 2000 Pacific typhoon season\n\nBULLET::::- China Meteorological Agency\n\nBULLET::::- National Weather Service Guam\n\nBULLET::::- Hong Kong Observatory\n\nBULLET::::- Macau Meteorological Geophysical Services\n\nBULLET::::- Korea Meteorological Agency\n\nBULLET::::- Philippine Atmospheric, Geophysical and Astronomical Services Administration\n\nBULLET::::- Taiwan Central Weather Bureau\n\nBULLET::::- Joint Typhoon Warning Center\n\nBULLET::::- Digital Typhoon - Typhoon Images and Information\n\nBULLET::::- Typhoon2000 Philippine typhoon website\n", "Section::::Storm Hawks.:Junko.\n", "Lastly, the typhoon passed about 320 km (200 mi) north of Palau, producing gusty winds but no damage. There was an indirect death on the island after a person was crushed by a tree; he had been helping a friend cut down the tree out of fear it could cause damage during the storm.\n\nSection::::Aftermath.\n", "There are many different typhoon shelters in Hong Kong. The Yau Ma Tei Typhoon Shelter was established in 1915 after a serious typhoon that hit on 18 September 1906. Around 3,000 fishing boats sank because of the typhoon, prompting the Hong Kong Government to build a typhoon shelter for those boat people who relied on fishery in Yau Ma Tei to make a living.\n", "According to news reports in China, emergency crews were out and repairing dikes damaged or destroyed by the typhoon as early as August 20. Within a few days of the storm's passage, food supplies and transportation had returned to normal in most of the affected areas. Roughly of failed dikes in Zhenjiang were considered to be the main reason why the storm was unusually deadly and destructive. By August 23, residents began repairing their homes in parts of the province after having evacuated ahead of the storm.\n\nSection::::Records.\n", "Section::::Meteorological history.\n", "Kinmen was struck by Nanmadol and was covered by the storm-level wind radius for an extended period because of the slow motion of the typhoon. Another Fujianese county, the Matsu Islands, was also affected by Nanmadol.\n", "Late on December 3, 2012, Typhoon Bopha, locally known as Pablo, made landfall on Eastern Mindanao; damage was over US$1.04 billion, with winds of 280 km/h (175 mph) one-minute sustained. Typhoon Bopha was the most powerful typhoon ever to hit Mindanao, killing 1,067 people and leaving 834 missing. Most of the damage was caused by rushing storm surges and screaming winds.\n\nSection::::Strongest typhoons.:Typhoon Megi (Juan).\n\nIn terms of central pressure, Typhoon Megi (2010) reached 885 mb. This was the strongest storm ever to make landfall in terms of pressure.\n", "Typhoons can hit the Philippines any time of year, with the months of June to September being most active, with August being the most active individual month and May the least active. Typhoons move east to west across the country, heading north as they go. 
Storms most frequently make landfall on the islands of Eastern Visayas, Bicol region, and northern Luzon whereas the southern island and region of Mindanao is largely free of typhoons. Climate change is likely to worsen the situation, with extreme weather events, including typhoons, posing various risks and threats to the Philippines.\n", "Section::::See also.\n\nBULLET::::- Typhoon Chanchu\n\nBULLET::::- Typhoon Hagupit (Ruby, 2014)\n\nBULLET::::- Other notable tropical cyclones that struck the Philippines\n\nBULLET::::- Typhoon Bopha (Pablo, 2012)\n\nBULLET::::- Typhoon Parma (Pepeng, 2009)\n\nBULLET::::- Typhoon Fengshen (Frank, 2008)\n\nBULLET::::- Typhoon Angela (Rosing, 1995)\n\nBULLET::::- Typhoon Nesat (Pedring, 2011)\n\nBULLET::::- Typhoon Haima (Lawin, 2016)\n\nBULLET::::- Typhoon Mangkhut (Ompong, 2018)\n\nBULLET::::- Typhoon Yutu (2018)\n\nBULLET::::- Other strong tropical cyclones\n\nBULLET::::- Typhoon Tip\n\nBULLET::::- Hurricane Patricia\n\nBULLET::::- Typhoon Haiyan\n\nBULLET::::- Typhoon Meranti\n\nBULLET::::- Hurricane Wilma\n\nSection::::External links.\n\nBULLET::::- RSMC Tokyo – Typhoon Center\n\nBULLET::::- Best Track Data of Typhoon Megi (1013)\n", "Section::::Seasonal summary.\n\nThe season was unusual in the number of super typhoons that occurred in the basin, with eleven typhoons reaching winds of at least 135 knots. They were Isa, Nestor, Rosie, Winnie, Bing, Oliwa (from Central Pacific), Ginger, Ivan, Joan, Keith, and Paka (from Central Pacific). This was due to the strong El Niño of 1997-1998, which contributed to the record amounts of not only super typhoons but also tropical storms in the Western and Eastern Pacific. Fortunately, most of the stronger systems remained at sea.\n\nSection::::Seasonal summary.:Records.\n", "Rinjiin (voiced by Scott McNeil) is the last of the Dragon Knights, a group of Sky Knights who protect dragons. Until now, it was thought that all dragons had died out. Rinjiin discovered a clutch of unhatched eggs and took it as a sign that he was destined to lead the dragons. He built a giant metal dragon that he steers from inside and he uses it to fly around and destroy any enemy ships that come too close to the nest.\n\nSection::::Other characters.:Wren.\n", "Kirogi is a Korean word for a type of migrating bird that lives in North Korea from autumn to spring.\n\nSection::::Systems.:Typhoon Kai-tak.\n", "Section::::Impact.:Japan.\n", "BULLET::::- TCWS #3 - Tropical cyclone winds of to are expected within the next 18 hours.\n\nBULLET::::- TCWS #4 - Tropical cyclone winds of to are expected within 12 hours.\n\nBULLET::::- TCWS #5 - Tropical cyclone winds greater than are expected within 12 hours.\n", "Section::::Impact.\n\nSection::::Impact.:Philippines.\n\nThe typhoon enhanced the monsoon across the northern Philippines, and caused rainfall to areas already deluged by prior floods. 
Manila received 210 mm (8.38 in) of rain on July 26; the rain triggered mudslides in the valleys near Mount Pinatubo while widespread flooding resulted in 16 deaths and the evacuation of more than 20,000 people.\n\nSection::::Impact.:Japan.\n", "BULLET::::- Timeline of the 2010 Pacific hurricane season\n\nBULLET::::- Timeline of the 2010 North Indian Ocean cyclone season\n\nBULLET::::- Timelines of the South-West Indian Ocean cyclone seasons: 2009–10, 2010–11\n\nBULLET::::- Timelines of the Australian region cyclone seasons: 2009–10, 2010–11\n\nBULLET::::- Timelines of the South Pacific cyclone seasons: 2009–10, 2010–11\n\nSection::::External links.\n\nBULLET::::- Japan Meteorological Agency\n\nBULLET::::- China Meteorological Agency\n\nBULLET::::- National Weather Service Guam\n\nBULLET::::- Hong Kong Observatory\n\nBULLET::::- Korea Meteorological Administration\n\nBULLET::::- Philippine Atmospheric, Geophysical and Astronomical Services Administration\n\nBULLET::::- Taiwan Central Weather Bureau\n\nBULLET::::- TCWC Jakarta\n\nBULLET::::- Thai Meteorological Department\n\nBULLET::::- Vietnam's National Hydro-Meteorological Service\n" ]
[ "Birds survive typhoons.", "Birds survive typhoons." ]
[ "Many birds don't survive. Those that do are attuned to pressure fluctuations and leave the area ahead of the storm.", "Many birds don't survive typhoons." ]
[ "false presupposition" ]
[ "Birds survive typhoons.", "Birds survive typhoons." ]
[ "false presupposition", "false presupposition" ]
[ "Many birds don't survive. Those that do are attuned to pressure fluctuations and leave the area ahead of the storm.", "Many birds don't survive typhoons." ]
2018-14222
If wind is caused by pressure differences, why is there no constant updraft due to low pressure the higher you go?
The vertical difference in pressure is only enough to *equal* the downward force of gravity. They're balanced out.
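A minimal numeric sketch of that balance (hydrostatic equilibrium), added for illustration: the extra pressure pushing up on the bottom of an air parcel exactly offsets the parcel's weight, so the net vertical force is zero. The density and gravity values are assumed standard figures, not taken from the passages below.

```python
# Hydrostatic balance: the upward pressure-gradient force on an air parcel
# equals its weight, so there is no net vertical acceleration (no constant updraft).
RHO = 1.225  # air density near sea level, kg/m^3 (assumed standard value)
G = 9.81     # gravitational acceleration, m/s^2

def net_vertical_force(parcel_height_m: float, area_m2: float = 1.0) -> float:
    """Net upward force on a parcel whose bottom and top pressures differ
    by the hydrostatic amount dp = rho * g * dz."""
    dp = RHO * G * parcel_height_m                # pressure excess at the bottom, Pa
    pressure_force = dp * area_m2                 # upward push from the higher pressure below, N
    weight = RHO * parcel_height_m * area_m2 * G  # downward weight of the parcel, N
    return pressure_force - weight

print(net_vertical_force(10.0))  # -> 0.0: the two forces cancel exactly
```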
[ "A widespread misconception in the world of soaring is that the updrafts associated with an incoming thunderstorm are almost always very strong and turbulent,\n\n\"which is most of the time incorrect\". If one believes this myth, then he would consider it safe (from thunderstorms, at least) to fly in an area with plentiful weak to moderate updrafts, since the updrafts associated with thunderstorms are always supposed to be strong and turbulent.\n", "the atmosphere is unstable to vertical motions, and convection is likely. Since convection acts to quickly mix the atmosphere and return to a stably stratified state, observations of decreasing potential temperature with height are uncommon, except while vigorous convection is underway or during periods of strong insolation. Situations in which the equivalent potential temperature decreases with height, indicating instability in saturated air, are much more common.\n", "The first term is the effect of perturbation pressure gradients on vertical motion. In some storms this term has a large effect on updrafts (Rotunno and Klemp, 1982) but there is not much reason to believe it has much of an impact on downdrafts (at least to a first approximation) and therefore will be ignored.\n", "So cool air lying on top of warm air can be stable, as long as the temperature decrease with height is less than the adiabatic lapse rate; the dynamically important quantity is not the temperature, but the potential temperature—the temperature the air would have if it were brought adiabatically to a reference pressure. The air around the mountain is stable because the air at the top, due to its lower pressure, has a higher potential temperature than the warmer air below.\n\nSection::::Its use in estimating atmospheric stability.:Effects of water condensation: equivalent potential temperature.\n", "A rising parcel of air containing water vapor, if it rises far enough, reaches its lifted condensation level: it becomes saturated with water vapor (see \"Clausius–Clapeyron relation\"). If the parcel of air continues to rise, water vapor condenses and releases its latent heat to the surrounding air, partially offsetting the adiabatic cooling. A saturated parcel of air therefore cools less than a dry one would as it rises (its temperature changes with height at the moist adiabatic lapse rate, which is smaller than the dry adiabatic lapse rate). Such a saturated parcel of air can achieve buoyancy, and thus accelerate further upward, a runaway condition (instability) even if potential temperature increases with height. The sufficient condition for an air column to be absolutely stable, even with respect to saturated convective motions, is that the \"equivalent potential temperature must increase monotonically with height.\"\n", "where \"S\" is the entropy. The above equation states that the entropy of the atmosphere does not change with height. The rate at which temperature decreases with height under such conditions is called the adiabatic lapse rate.\n\nFor \"dry\" air, which is approximately an ideal gas, we can proceed further. The adiabatic equation for an ideal gas is \n\nwhere formula_9 is the heat capacity ratio (formula_9=7/5, for air). Combining with the equation for the pressure, one arrives at the dry adiabatic lapse rate,\n", "Section::::Concerns regarding severe deep moist convection.\n\nBuoyancy is key to thunderstorm growth and is necessary for any of the severe threats within a thunderstorm. 
There are other processes, not necessarily thermodynamic, that can increase updraft strength. These include updraft rotation, low level convergence, and evacuation of mass out of the top of the updraft via strong upper level winds and the jet stream.\n\nSection::::Concerns regarding severe deep moist convection.:Hail.\n", "The winds in a tropical cyclone are the result of evaporation and condensation of moisture which results in updrafts. The updrafts in turn increase the height of the storm which causes more condensation.\n\nSection::::Winds in tropical cyclones.:Location of the winds.\n", "In the mesoscale, equivalent potential temperature is also a useful measure of the static stability of the unsaturated atmosphere. Under normal, stably stratified conditions, the potential temperature increases with height,\n\nand vertical motions are suppressed. If the equivalent potential temperature decreases with height,\n\nthe atmosphere is unstable to vertical motions, and convection is likely. Situations in which the equivalent potential temperature decreases with height, indicating instability in saturated air, are quite common.\n\nSection::::See also.\n\nBULLET::::- Meteorology\n\nBULLET::::- Moist static energy\n\nBULLET::::- Potential temperature\n\nBULLET::::- Weather forecasting\n\nSection::::Bibliography.\n", "At low altitudes above sea level, the pressure decreases by about for every 100 metres. For higher altitudes within the troposphere, the following equation (the barometric formula) relates atmospheric pressure \"p\" to altitude \"h\":\n\nformula_1\n\nwhere the constant parameters are as described below:\n\nSection::::Local variation.\n\nAtmospheric pressure varies widely on Earth, and these changes are important in studying weather and climate. See pressure system for the effects of air pressure variations on weather.\n", "This means that unstable air is now stable when it reaches the equilibrium level and convection stops. This level is often near the tropopause and is often marked by the anvil of a thunderstorm, because that is where the thunderstorm updraft is finally cut off, except in the case of overshooting tops where it continues rising to the maximum parcel level (MPL) due to momentum. More precisely, the cumulonimbus will stop rising around a few kilometres prior to reaching the level of neutral buoyancy and on average anvil glaciation occurs at a higher altitude over land than over sea (despite little difference in LNB from land to sea).\n", "Moreover, the diameters of the updraught columns vary between 2 km (air mass thunderstorm) and 10 km (supercell thunderstorm). The height of the cumulonimbus base is extremely variable. It varies from a few tens of metres above the ground to 4000 m above the ground. In the latter case, the updraughts can originate either from the ground (if the air is very dry - typical of deserts) or from aloft (when altocumulus castellanus degenerates into cumulonimbus). When the updraught originates from aloft, this is considered \"elevated convection\".\n\nSection::::Dangers pertaining to downbursts.\n\n\"Detailed article: Downburst\n", "The intermediate layers of the troposphere are the regions with less influence of human activity. This region is far enough from the surface not to be affected by surface emissions. Additionally, commercial and military flights only cross this region during ascending or descending maneuvers. 
Moreover, in this region there exist two types of clouds with a large horizontal extension: \"Nimbostratus\" and \"Altostratus\", which cannot originate from human activity. Consequently, it is assumed that there are no anthropic clouds of these two \"genera\". However, what can occur is enhancing existing \"Nimbostratus\" or \"Altostratus\" due to the additional water vapor or condensation nuclei emitted by a thermal power plant, for instance. \n", "BULLET::::- Humidity as a result of plant operation may be an issue for nearby communities. A 400 meter diameter powerplant producing wind velocity of 22 meters per second, must add about 15 grams of water per kilogram of air processed. This is equal to 41 tons of water per second. In terms of humid air, this is 10 cubic kilometers of very humid air each hour. Thus, a community even 100 kilometers away may be unpleasantly affected.\n", "Atmospheric lift will also generally produce cloud cover through adiabatic cooling once the air becomes saturated as it rises, although the low-pressure area typically brings cloudy skies, which act to minimize diurnal temperature extremes. Since clouds reflect sunlight, incoming shortwave solar radiation decreases, which causes lower temperatures during the day. At night the absorptive effect of clouds on outgoing longwave radiation, such as heat energy from the surface, allows for warmer diurnal low temperatures in all seasons. The stronger the area of low pressure, the stronger the winds experienced in its vicinity. Globally, low-pressure systems are most frequently located over the Tibetan Plateau and in the lee of the Rocky mountains. In Europe (particularly in the British Isles and Netherlands), recurring low-pressure weather systems are typically known as \"depressions\".\n", "Vertical wind shear above the jet stream (i.e., in the stratosphere) is sharper when it is moving upwards, because wind speed decreases with height in the stratosphere. This is the reason CAT can be generated above the tropopause, despite the stratosphere otherwise being a region which is vertically stable. On the other hand, vertical wind shear moving downwards within the stratosphere is more moderate (i.e., because downwards wind shear within the stratosphere is effectively moving against the manner in which wind speed changes within the stratosphere) and CAT is never produced in the stratosphere. Similar considerations apply to the troposphere but in reverse.\n", "The barotropic vorticity equation assumes the atmosphere is nearly barotropic, which means that the direction and speed of the geostrophic wind are independent of height. In other words, there is no vertical wind shear of the geostrophic wind. It also implies that thickness contours (a proxy for temperature) are parallel to upper level height contours. In this type of atmosphere, high and low pressure areas are centers of warm and cold temperature anomalies. Warm-core highs (such as the subtropical ridge and the Bermuda-Azores high) and cold-core lows have strengthening winds with height, with the reverse true for cold-core highs (shallow Arctic highs) and warm-core lows (such as tropical cyclones).\n", "Under almost all circumstances, potential temperature increases upwards in the atmosphere, unlike actual temperature which may increase or decrease. 
Potential temperature is conserved for all dry adiabatic processes, and as such is an important quantity in the planetary boundary layer (which is often very close to being dry adiabatic).\n\nPotential temperature is a useful measure of the static stability of the unsaturated atmosphere. Under normal, stably stratified conditions, the potential temperature increases with height,\n\nand vertical motions are suppressed. If the potential temperature decreases with height,\n", "The wind speed in cyclonic circulation grows from zero as the radius increases and is always less than the goestrophic estimate.\n\nIn the anticyclonic-circulation example, there is no wind within the distance of 260 km (point R*) – this is the area of no/low winds around a pressure high.\n\nAt that distance the first anticyclonic wind has the same speed as the cyclostrophic winds (point Q), and half of that of the inertial wind (point P).\n\nFarther away from point R*, the anticyclonic wind slows down and approaches the geostrophic value with decreasingly larger speeds.\n", "In fact, the turbulence zone is located in and at the vicinity of the downdraft. The updrafts under the flanking line are smooth. The refutation of this myth is poetically expressed by Dominique Musto who says the following:\n\nThe author means that when flying under a flanking line, the updrafts will be widespread and smooth. Since the cells making the flanking line will fuse with the main cell, the soaring conditions will \"improve\", the updrafts will become stronger and stronger and the cloud base will become darker and darker.\n", "To understand this, consider dry convection in the atmosphere, where the vertical variation in pressure is substantial and adiabatic temperature change is important: As a parcel of air moves upward, the ambient pressure drops, causing the parcel to expand. Some of the internal energy of the parcel is used up in doing the work required to expand against the atmospheric pressure, so the temperature of the parcel drops, even though it has not lost any heat. Conversely, a sinking parcel is compressed and becomes warmer even though no heat is added.\n", "Also, by Bernoulli's theorem, the measured pressure is not exactly the weight of the air column, should significant vertical motion of air occur.\n\nThus, the pressure force acting on individual parcels of air at different heights is not really known through the measured values.\n\nWhen using information from a surface-pressure chart in balanced-flow formulations, the forces are best viewed as applied to the entire air column.\n\nOne difference of air speed in every air column invariably occurs, however, near the ground/sea, also if the air density is the same anywhere and no vertical motion occurs.\n", "BULLET::::- Ahead of embedded shortwave troughs, which have smaller wavelengths.\n\nDiverging winds aloft ahead of these troughs cause atmospheric lift within the troposphere below, which lowers surface pressures as upward motion partially counteracts the force of gravity.\n", "Put more simply, air density depends on air pressure. Given that air pressure also depends on air density, it would be easy to get the impression that this was circular definition, but it is simply interdependency of different variables. 
This then yields a more accurate formula, of the form\n\np = p_0 * (1 - L*h/T_0)^(g*M/(R*L))\n\nwhere p_0 is the sea-level pressure, T_0 is the sea-level temperature, L is the temperature lapse rate, g is the gravitational acceleration, M is the molar mass of air, and R is the universal gas constant.\n\nTherefore, instead of pressure being a linear function of height as one might expect from the more simple formula given in the \"basic formula\" section, it is more accurately represented as an exponential function of height.\n", "One example is thermal columns extending above the top of the equilibrium level (EL) in thunderstorms: unstable air rising from (or near) the surface normally stops rising at the EL (near the tropopause) and spreads out as an anvil cloud; but in the event of a strong updraft, unstable air is carried past the EL as an \"overshooting top\" or \"dome\". A parcel of air will stop ascending at the maximum parcel level (MPL). This overshoot is responsible for most of the turbulence experienced in the cruise phase of commercial air flights.\n\nSection::::Stellar convection.\n" ]
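To make the linear-versus-exponential contrast in the passages concrete, here is a short sketch comparing the reconstructed lapse-rate formula with a naive linear extrapolation of the sea-level pressure gradient. All constants are standard-atmosphere values assumed for illustration only.

```python
# Pressure vs. altitude: lapse-rate formula vs. a naive linear profile.
P0 = 101_325.0  # sea-level pressure, Pa (assumed standard value)
T0 = 288.15     # sea-level temperature, K
L = 0.0065      # temperature lapse rate, K/m
G = 9.80665     # gravitational acceleration, m/s^2
M = 0.0289644   # molar mass of dry air, kg/mol
R = 8.31447     # universal gas constant, J/(mol*K)

def pressure_lapse(h: float) -> float:
    """p = p0 * (1 - L*h/T0)^(g*M/(R*L)), valid within the troposphere."""
    return P0 * (1 - L * h / T0) ** (G * M / (R * L))

def pressure_linear(h: float) -> float:
    """Linear extrapolation of the sea-level gradient dp/dz = -rho0*g."""
    rho0 = P0 * M / (R * T0)  # sea-level density from the ideal gas law
    return P0 - rho0 * G * h

for h in (0, 1000, 5000, 10000):
    print(h, round(pressure_lapse(h)), round(pressure_linear(h)))
# The profiles diverge with height; the linear one even goes negative near 10 km,
# while the real pressure decays roughly exponentially.
```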
[ "Difference in pressure as you go higher will cause a constant updraft.", "Difference in pressure as you go higher will cause a constant updraft." ]
[ "The updraft is balanced out by the force of gravity.", "The updraft is balanced out by the force of gravity." ]
[ "false presupposition" ]
[ "Difference in pressure as you go higher will cause a constant updraft.", "Difference in pressure as you go higher will cause a constant updraft.", "Difference in pressure as you go higher will cause a constant updraft." ]
[ "normal", "false presupposition" ]
[ "The updraft is balanced out by the force of gravity.", "The updraft is balanced out by the force of gravity.", "The updraft is balanced out by the force of gravity." ]
2018-04772
Why do objects accelerate while falling?
Because a force (gravity) acts on them. Whenever a net force acts on a body with mass, the body accelerates. If I have a ball and hold it in my hand, it is at rest: the Earth is pulling on it with gravity, and my hand must exert an equal force to keep the ball from moving towards the Earth. When I let go, the ball is still at rest for an instant. However, now nothing opposes the Earth's gravitational pull, and the ball starts to drop. It must accelerate, because otherwise it wouldn't move at all.
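A tiny sketch of the force bookkeeping in that answer, using Newton's second law a = F_net / m. The ball's mass and the forces are invented illustration values, not figures from the passages.

```python
# Net force and acceleration on a held vs. released ball (up is positive).
M_BALL = 0.5  # kg, assumed mass of the ball
G = 9.81      # m/s^2

def acceleration(hand_force_up: float) -> float:
    """Acceleration of the ball given the upward force supplied by the hand."""
    weight_down = M_BALL * G
    net_force = hand_force_up - weight_down
    return net_force / M_BALL

print(acceleration(hand_force_up=M_BALL * G))  # held: net force is zero, so a = 0.0
print(acceleration(hand_force_up=0.0))         # released: a = -9.81 m/s^2, straight down
```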
[ "When someone wants to jump, he or she exerts additional downward force on the ground ('action'). Simultaneously, the ground exerts upward force on the person ('reaction'). If this upward force is greater than the person's weight, this will result in upward acceleration. When these forces are perpendicular to the ground, they are also called a normal force.\n", "The reason why the object does not fall down when subjected to only downward forces is a simple one. Think about what keeps an object up after it is thrown. Once an object is thrown into the air, there is only the downward force of earth's gravity that acts on the object. That does not mean that once an object is thrown in the air, it will fall instantly. What keeps that object up in the air is its velocity. The first of Newton's laws of motion states that an object's inertia keeps it in motion, and since the object in the air has a velocity, it will tend to keep moving in that direction.\n", "History of classical mechanics\n\nThis article deals with the history of classical mechanics.\n\nSection::::Antiquity.\n\nThe ancient Greek philosophers, Aristotle in particular, were among the first to propose that abstract principles govern nature. Aristotle argued, in \"On the Heavens\", that terrestrial bodies rise or fall to their \"natural place\" and stated as a law the correct approximation that an object's speed of fall is proportional to its weight and inversely proportional to the density of the fluid it is falling through.\n", "Assuming the standardized value for g and ignoring air resistance, this means that an object falling freely near the Earth's surface increases its velocity by 9.80665 m/s (32.1740 ft/s or 22 mph) for each second of its descent. Thus, an object starting from rest will attain a velocity of 9.80665 m/s (32.1740 ft/s) after one second, approximately 19.62 m/s (64.4 ft/s) after two seconds, and so on, adding 9.80665 m/s (32.1740 ft/s) to each resulting velocity. Also, again ignoring air resistance, any and all objects, when dropped from the same height, will hit the ground at the same time.\n", "BULLET::::- Objects are falling to the floor because the room is aboard a rocket in space, which is accelerating at 9.81 m/s and is far from any source of gravity. The objects are being pulled towards the floor by the same \"inertial force\" that presses the driver of an accelerating car into the back of his seat.\n\nConversely, any effect observed in an accelerated reference frame should also be observed in a gravitational field of corresponding strength. This principle allowed Einstein to predict several novel effects of gravity in 1907, as explained in the next section.\n", "Technically, an object is in free fall even when moving upwards or instantaneously at rest at the top of its motion. If gravity is the only influence acting, then the acceleration is always downward and has the same magnitude for all bodies, commonly denoted formula_1.\n\nSince all objects fall at the same rate in the absence of other forces, objects and people will experience weightlessness in these situations.\n\nExamples of objects not in free fall:\n\nBULLET::::- Flying in an aircraft: there is also an additional force of lift.\n", "The object's speed versus time can be integrated over time to find the vertical position as a function of time:\n", "This is the \"textbook\" case of the vertical motion of an object falling a small distance close to the surface of a planet. 
It is a good approximation in air as long as the force of gravity on the object is much greater than the force of air resistance, or equivalently the object's velocity is always much less than the terminal velocity (see below).\n\nwhere\n\nSection::::Free fall in Newtonian mechanics.:Uniform gravitational field with air resistance.\n", "Linear acceleration, even at a low level, can provide sufficient g-force to provide useful benefits. A spacecraft under constant acceleration in a straight line would give the appearance of a gravitational pull in the direction opposite of the acceleration. This \"pull\" that would cause a loose object to \"fall\" towards the hull of the spacecraft is actually a manifestation of the inertia of the objects inside the spacecraft, in accordance with Newton's first law. \n", "That is, being on the surface of the Earth is equivalent to being inside a spaceship (far from any sources of gravity) that is being accelerated by its engines. The direction or vector of acceleration equivalence on the surface of the earth is \"up\" or directly opposite the center of the planet while the vector of acceleration in a spaceship is directly opposite from the mass ejected by its thrusters. From this principle, Einstein deduced that free-fall is inertial motion. Objects in free-fall do not experience being accelerated downward (e.g. toward the earth or other massive body) but rather weightlessness and no acceleration. In an inertial frame of reference bodies (and photons, or light) obey Newton's first law, moving at constant velocity in straight lines. Analogously, in a curved spacetime the world line of an inertial particle or pulse of light is \"as straight as possible\" (in space \"and\" time). Such a world line is called a geodesic and from the point of view of the inertial frame is a straight line. This is why an accelerometer in free-fall doesn't register any acceleration; there isn't any.\n", "What we now call gravity was not identified as a universal force until the work of Isaac Newton. Before Newton, the tendency for objects to fall towards the Earth was not understood to be related to the motions of celestial objects. Galileo was instrumental in describing the characteristics of falling objects by determining that the acceleration of every object in free-fall was constant and independent of the mass of the object. Today, this acceleration due to gravity towards the surface of the Earth is usually designated as formula_37 and has a magnitude of about 9.81 meters per second squared (this measurement is taken from sea level and may vary depending on location), and points toward the center of the Earth. This observation means that the force of gravity on an object at the Earth's surface is directly proportional to the object's mass. Thus an object that has a mass of formula_21 will experience a force:\n", "In the absence of other forces, gravity results in a constant downward acceleration of every freely moving object. Near Earth's surface the acceleration due to gravity is and the gravitational force on an object of mass \"m\" is . It is convenient to imagine this gravitational force concentrated at the center of mass of the object.\n\nIf an object is displaced upwards or downwards a vertical distance , the work \"W\" done on the object by its weight mg is:\n", "According to Aristotle, weight was the direct cause of the falling motion of an object, the speed of the falling object was supposed to be directly proportionate to the weight of the object. 
As medieval scholars discovered that in practice the speed of a falling object increased with time, this prompted a change to the concept of weight to maintain this cause effect relationship. Weight was split into a \"still weight\" or \"pondus\", which remained constant, and the actual gravity or \"gravitas\", which changed as the object fell. The concept of \"gravitas\" was eventually replaced by Jean Buridan's impetus, a precursor to momentum.\n", "Most effects of gravity vanish in free fall, but effects that seem the same as those of gravity can be \"produced\" by an accelerated frame of reference. An observer in a closed room cannot tell which of the following is true:\n\nBULLET::::- Objects are falling to the floor because the room is resting on the surface of the Earth and the objects are being pulled down by gravity.\n", "BULLET::::- Psychic forces are sufficient in most bodies for a shock to propel them directly away from the surface. A spooky noise or an adversary's signature sound will introduce motion upward, usually to the cradle of a chandelier, a treetop or the crest of a flagpole. The feet of a running character or the wheels of a speeding auto need never touch the ground, ergo fleeing turns to flight.\n\nBULLET::::- As speed increases, objects can be in several places at once.\n", "Centripetal force causes the acceleration measured on the rotating surface of the Earth to differ from the acceleration that is measured for a free-falling body: the apparent acceleration in the rotating frame of reference is the total gravity vector minus a small vector toward the north-south axis of the Earth, corresponding to staying stationary in that frame of reference.\n\nSection::::See also.\n\nBULLET::::- \"De Motu Antiquiora\" and \"Two New Sciences\" (the earliest modern investigations of the motion of falling bodies)\n\nBULLET::::- Equations of motion\n\nBULLET::::- Free fall\n\nBULLET::::- Gravitation\n\nBULLET::::- Mean speed theorem, the foundation of the law of falling bodies\n", "A more basic manifestation of the same effect involves two bodies that are falling side by side towards the Earth. In a reference frame that is in free fall alongside these bodies, they appear to hover weightlessly – but not exactly so. These bodies are not falling in precisely the same direction, but towards a single point in space: namely, the Earth's center of gravity. Consequently, there is a component of each body's motion towards the other (see the figure). In a small environment such as a freely falling lift, this relative acceleration is minuscule, while for skydivers on opposite sides of the Earth, the effect is large. Such differences in force are also responsible for the tides in the Earth's oceans, so the term \"tidal effect\" is used for this phenomenon.\n", "Proper acceleration, the acceleration of a body relative to a free-fall condition, is measured by an instrument called an accelerometer.\n\nIn classical mechanics, for a body with constant mass, the (vector) acceleration of the body's center of mass is proportional to the net force vector (i.e. sum of all forces) acting on it (Newton's second law):\n\nwhere F is the net force acting on the body, \"m\" is the mass of the body, and a is the center-of-mass acceleration. As speeds approach the speed of light, relativistic effects become increasingly large.\n\nSection::::Tangential and centripetal acceleration.\n", "At a distance relatively close to Earth (less than 3000 km), gravity is only slightly reduced. 
As an object orbits a body such as the Earth, gravity is still attracting objects towards the Earth and the object is accelerated downward at almost 1g. Because the objects are typically moving laterally with respect to the surface at such immense speeds, the object will not lose altitude because of the curvature of the Earth. When viewed from an orbiting observer, other close objects in space appear to be floating because everything is being pulled towards Earth at the same speed, but also moving forward as the Earth's surface \"falls\" away below. All these objects are in free fall, not zero gravity.\n", "An object in the technical sense of the term \"free fall\" may not necessarily be falling down in the usual sense of the term. An object moving upwards would not normally be considered to be falling, but if it is subject to the force of gravity only, it is said to be in free fall. The moon is thus in free fall.\n", "For other situations, such as when objects are subjected to mechanical accelerations from forces other than the resistance of a planetary surface, the weight force is proportional to the mass of an object multiplied by the total acceleration away from free fall, which is called the proper acceleration. Through such mechanisms, objects in elevators, vehicles, centrifuges, and the like, may experience weight forces many times those caused by resistance to the effects of gravity on objects, resulting from planetary surfaces. In such cases, the generalized equation for weight \"W\" of an object is related to its mass \"m\" by the equation , where \"a\" is the proper acceleration of the object caused by all influences other than gravity. (Again, if gravity is the only influence, such as occurs when an object falls freely, its weight will be zero).\n", "In the Western world prior to the 16th century, it was generally assumed that the speed of a falling body would be proportional to its weight—that is, a 10 kg object was expected to fall ten times faster than an otherwise identical 1 kg object through the same medium. The ancient Greek philosopher Aristotle (384–322 BC) discussed falling objects in \"Physics\" (Book VII) which was perhaps the first book on mechanics (see Aristotelian physics).\n", "In 1690, Pierre Varignon assumed that all bodies are exposed to pushes by aether particles from all directions, and that there is some sort of limitation at a certain distance from the Earth's surface which cannot be passed by the particles. He assumed that if a body is closer to the Earth than to the limitation boundary, then the body would experience a greater push from above than from below, causing it to fall toward the Earth.\n", "\"Uniform\" or \"constant\" acceleration is a type of motion in which the velocity of an object changes by an equal amount in every equal time period.\n\nA frequently cited example of uniform acceleration is that of an object in free fall in a uniform gravitational field. The acceleration of a falling body in the absence of resistances to motion is dependent only on the gravitational field strength \"g\" (also called \"acceleration due to gravity\"). 
By Newton's Second Law the force formula_11 acting on a body is given by:\n", "Section::::Scenarios.:Height of lower-velocity trajectories.\n\nIgnoring all factors other than the gravitational force between the body and the object, an object projected vertically at speed formula_16 from the surface of a spherical body with escape velocity formula_6 and radius formula_18 will attain a maximum height formula_19 satisfying the equation\n\nwhich, solving for \"h\" results in\n\nwhere formula_22 is the ratio of the original speed formula_16 to the escape velocity formula_24\n\nUnlike escape velocity, the direction (vertically up) is important to achieve maximum height.\n\nSection::::Trajectory.\n" ]
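The velocity figures quoted in the passages follow directly from the constant-acceleration relations v = g*t and d = g*t^2/2. A quick numeric check, ignoring air resistance as the passages do:

```python
# Free fall from rest with no air resistance.
G = 9.80665  # standard gravity, m/s^2 (the value quoted in the passages)

def speed(t: float) -> float:
    return G * t            # v = g*t

def distance_fallen(t: float) -> float:
    return 0.5 * G * t**2   # d = g*t^2/2

for t in (1, 2, 3):
    print(f"t={t} s: v={speed(t):.4f} m/s, d={distance_fallen(t):.2f} m")
# Reproduces the passage's numbers: 9.8067 m/s after one second and
# about 19.61 m/s (~19.62) after two seconds.
```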
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-05045
Why should I not be terrified when a plane encounters moderate turbulence?
Planes are designed with a factor of safety of 2, meaning they are built to handle twice the stress they are ever expected to encounter. Pilots also undergo rigorous training to handle all sorts of issues. Even if you hit turbulence heavy enough to shake off an engine, the plane will be able to fly for 15 miles before the remaining engine fails, and the pilot can still put the plane into a high angle of attack and take the plane down safely.
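A toy rendering of the factor-of-safety arithmetic in that answer: a design factor of 2 means the structure should fail only at twice the largest stress it is expected to see. The factor is the answer's own claim, and the stress numbers are invented for illustration.

```python
# Factor of safety = failure stress / maximum expected stress.
DESIGN_FACTOR = 2.0  # the factor claimed in the answer above; real limits vary by aircraft

def meets_design_factor(expected_max_stress: float, failure_stress: float) -> bool:
    """True if the structural margin satisfies the design factor."""
    return failure_stress / expected_max_stress >= DESIGN_FACTOR

print(meets_design_factor(expected_max_stress=100e6, failure_stress=250e6))  # True, margin 2.5
print(meets_design_factor(expected_max_stress=100e6, failure_stress=150e6))  # False, margin 1.5
```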
[ "Because aircraft move so quickly, they can experience sudden unexpected accelerations or 'bumps' from turbulence, including CAT - as the aircraft rapidly cross invisible bodies of air which are moving vertically at many different speeds. Although the vast majority of cases of turbulence are harmless, in rare cases cabin crew and passengers on aircraft have been injured when tossed around inside an aircraft cabin during extreme turbulence (and in a small number of cases, killed, as on United Airlines Flight 826 on December 28, 1997). BOAC Flight 911 broke up in flight in 1966 after experiencing severe lee-wave turbulence just downwind of Mount Fuji, Japan.\n", "You damned well shouldn't do.br\n\nI have smacked the tiny sparrow,br\n\nBluebird, robin, and the rest.br\n\nDragged vorticies through branchesbr\n\nThrowing eggs out of their nests.\n\nI've hurled through total darknessbr\n\nJust as blind as I could be,br\n\nAnd spent the night in terrorbr\n\nOf things I could not see.br\n\nI've turned my eyes to heavenbr\n\nSweating bullets through the flight,br\n\nReached out my hand and pressed-to-testbr\n\n—the Master Caution light.\n\nBULLET::::- C-130 Version\n\n\"Low Flight\" \n\nOh, I have slipped throughbr\n\nswirling clouds of dust, br\n\nA few feet from the dirt,br\n\nI've flown the C-130 low enough,br\n", "The outcome of an ingestion event and whether it causes an accident, be it on a small fast plane, such as military jet fighters, or a large transport, depends on the number and weight of birds and where they strike the fan blade span or the nose cone. Core damage usually results with impacts near the blade root or on the nose cone.\n", "Groundings of entire classes of aircraft out of equipment safety concerns is unusual, but this has occurred to the de Havilland Comet in 1954 after multiple crashes due to metal fatigue and hull failure, the McDonnell Douglas DC-10 in 1979 after the crash of American Airlines Flight 191 due to engine loss, the Boeing 787 Dreamliner in 2013 after its battery problems, and the Boeing 737 MAX in 2019 after two crashes preliminarily tied to a flight control system.\n\nSection::::Aviation safety hazards.\n\nSection::::Aviation safety hazards.:Foreign object debris.\n", "Another aspect of safety is protection from intentional harm or property damage, also known as \"security\".\n", "On December 7, 2011, the NTSB issued a safety recommendation based on the results of its investigation into the crash of Flight 331. The NTSB recommended that the FAA take actions to ensure adequate pilot training in simulator training programs on tailwind approaches and landings, particularly on wet or contaminated runways, and revise its advisories on runway overrun prevention to include a discussion of risks associated with tailwind landings.\n", "From 310 million passengers in 1970, air transport had grown to 3,696 million in 2016, led by 823 million in the United States then 488 million in China.\n\nIn 2016, there were 19 fatal accidents of civil airliners of more than 14 passengers, resulting in 325 fatalities : the second safest year ever after 2015 with 16 accidents and 2013 with 265 fatalities.\n\nFor planes heavier than 5.7 t, there were 34.9 million departures and 75 accidents worldwide with 7 of these fatal for 182 fatalities, the lowest since 2013 : 182/34.9round2 fatalities per million departures.\n", "Detecting and predicting CAT is difficult. At typical heights where it occurs, the intensity and location cannot be determined precisely. 
However, because this turbulence affects long range aircraft that fly near the tropopause, CAT has been intensely studied. Several factors affect the likelihood of CAT. Often more than one factor is present. 64% of the non-light turbulences (not only CAT) are observed less than away from the core of a jet stream.\n\nSection::::Factors that increase CAT probability.:Jet stream.\n", "Section::::Safety recommendations.\n\nAs a result of its investigation into the accident and in light of its findings, the NTSB also issued the following safety recommendations:\n\nBULLET::::- That shoulder harnesses be provided to and worn by the flight crew\n\nBULLET::::- That flight attendant seats be designed for improved G-force tolerance\n\nBULLET::::- That emergency lighting switches be armed prior to every flight\n", "Stress caused by ambient temperature is called thermal stress and is normally experienced by military pilots. Although military aircraft have environmental control systems, the temperature inside the cockpit can quickly rise more than 10 degrees Celsius above the ambient temperature, and the Air Force has suggested that it is possible for cockpit temperatures to exceed 45 degrees Celsius. When such high temperatures occur in humid environments, both mental and physical performance will be degraded. When the aircraft operates close to the ground at high airspeed, the effect is worse because of the aerodynamic heating of the aircraft’s surface.\n", "In over one hundred years of implementation, aviation safety has improved considerably. In modern times, two major manufacturers still produce heavy passenger aircraft for the civilian market: Boeing in the United States of America, and the European company Airbus. Both place huge emphasis on the use of aviation safety equipment, now a billion-dollar industry in its own right; for each, safety is a major selling point—realizing that a poor safety record in the aviation industry is a threat to corporate survival. Some major safety devices now required in commercial aircraft involve:\n", "Section::::Aviation safety hazards.:Human factors.\n", "The Interstate Aviation Committee (IAC or MAK), after initial decoding of the flight recorder data, issued flight safety recommendations advising to avoid entering thunderstorms, to follow all maximum height limitations based on aircraft load and outside air temperature and to improve pilot training when working in these situations.\n", "Since most GEVs are designed to operate from water, accidents and engine failure typically are less hazardous than in a land-based aircraft, but the lack of altitude control leaves the pilot with fewer options for avoiding collision, and to some extent that discounts such benefits. Low altitude brings high speed craft into conflict with ships, buildings and rising land, which may not be sufficiently visible in poor conditions to avoid, and GEVs may be unable to climb over or turn sharply enough to avoid collisions. While drastic, low level maneuvers risk contact with solid or water hazards beneath. 
Aircraft can climb over most obstacles, but GEVs are more limited.\n", "A pilot misinformed by a printed document (manual, map, etc.), reacting to a faulty instrument or indicator (in the cockpit or on the ground), or following inaccurate instructions or information from flight or ground control can lose spatial orientation, or make another mistake, and consequently lead to accidents or near misses.\n\nSection::::Aviation safety hazards.:Lightning.\n\nBoeing studies showed that airliners are struck by lightning twice per year on average; aircraft withstand typical lightning strikes without damage.\n", "On March 14, Boeing reiterated that pilots can always use manual trim control to override software commands, and that both its Flight Crew Operations Manual and November 6 bulletin offer detailed procedures for handling incorrect angle-of-attack readings. \n", "In addition, the U.S. Aviation Safety Reporting System received messages about the 737 MAX from U.S. pilots in November 2018, including one from a captain who \"expressed concern that some systems such as the MCAS are not fully described in the aircraft Flight Manual.\" Captain Mike Michaelis, chairman of the safety committee of the Allied Pilots Association at American Airlines said \"It's pretty asinine for them to put a system on an airplane and not tell the pilots … especially when it deals with flight controls\".\n", "BULLET::::- 7 January 2017 – a private Bombardier Challenger 604 rolled three times in midair and dropped after encountering wake turbulence when it passed under an Airbus A380 over the Arabian Sea. Several passengers were injured, one seriously. Due to the G-forces experienced, the plane was damaged beyond repair and was consequently written off.\n", "BULLET::::- Evacuation slides – aid rapid passenger exit from an aircraft in an emergency situation\n\nBULLET::::- Advanced avionics – computerized auto-recovery and alert systems\n\nBULLET::::- Turbine engines – durability and failure containment improvements\n\nBULLET::::- Landing gear – that can be lowered even after loss of power and hydraulics\n", "BULLET::::- 9,600 m (31,500 ft)\n\nBULLET::::- 10,600 m (34,800 ft)\n\nBULLET::::- 11,600 m (38,100 ft)\n\nBULLET::::- 13,100 m (43,000 ft)\n\nBULLET::::- 15,100 m (49,500 ft)\n\nand every 2,000 metres thereafter.\n\nBULLET::::- Track 000 to 179°\n\nBULLET::::- 900 m (3,000 ft)\n\nBULLET::::- 1,500 m (4,900 ft)\n\nBULLET::::- 2,100 m (6,900 ft)\n\nBULLET::::- 2,700 m (8,900 ft)\n\nBULLET::::- 3,300 m (10,800 ft)\n\nBULLET::::- 3,900 m (12,800 ft)\n\nBULLET::::- 4,500 m (14,800 ft)\n\nBULLET::::- 5,100 m (16,700 ft)\n\nBULLET::::- 5,700 m (18,700 ft)\n\nBULLET::::- 6,300 m (20,700 ft)\n\nBULLET::::- 6,900 m (22,600 ft)\n\nBULLET::::- 7,500 m (24,600 ft)\n\nBULLET::::- 8,100 m (26,600 ft)\n\nBULLET::::- 9,100 m (29,900 ft)\n\nBULLET::::- 10,100 m (33,100 ft)\n\nBULLET::::- 11,100 m (36,400 ft)\n\nBULLET::::- 12,100 m (39,700 ft)\n", "The number of deaths per passenger-mile on commercial airlines in the United States between 2000 and 2010 was about 0.2 deaths per 10 billion passenger-miles. 
For driving, the rate was 150 per 10 billion vehicle-miles for 2000: 750 times higher per mile than for flying in a commercial airplane.\n\nThere were no fatalities on large scheduled commercial airlines in the United States for over nine years, between the Colgan Air Flight 3407 crash in February 2009, and a catastrophic engine failure on Southwest Airlines Flight 1380 in April 2018.\n\nSection::::Statistics.:Security.\n", "BULLET::::- In the \"3rd Rock from the Sun\" episode \"Dick's Big Giant Headache: Part 1\" (1999), William Shatner makes his first appearance on the series. John Lithgow's character meets Shatner's character as he gets off an aircraft. When Shatner describes seeing something horrifying on the wing, Lithgow replies, \"The same thing happened to me!\" This references not only Lithgow's portrayal of the nervous passenger in the 1983 \"Twilight Zone\" remake, but also an earlier \"3rd Rock\" episode \"Frozen Dick\" (Season 1, Ep 12, 1996) when he and Jane Curtin's characters were due to fly to Chicago to pick up awards before Dick panicked about something on the wing while the plane was still on the tarmac and gets them both kicked off the plane.\n", "Section::::Immunity policy.\n", "Wind shear refers to the variation of wind over either horizontal or vertical distances. Airplane pilots generally regard significant wind shear to be a horizontal change in airspeed of for light aircraft, and near for airliners at flight altitude. Vertical speed changes greater than also qualify as significant wind shear for aircraft. Low level wind shear can affect aircraft airspeed during take off and landing in disastrous ways, and airliner pilots are trained to avoid all microburst wind shear (headwind loss in excess of ). The rationale for this additional caution includes:\n", "On May 1, 2017, Boeing 777 flight SU270 from Moscow to Thailand encountered clear air turbulence. The aircraft suddenly dropped, and 27 passengers who were not buckled up sustained serious injuries. The pilots were able to stabilize the plane and continue the flight. All passengers who needed medical attention were taken to a Bangkok hospital upon arrival.\n\nOn March 5, 1966, BOAC Flight 911 from Tokyo to Hong Kong, a Boeing 707, broke up in CAT, with the loss of all hands (124) on board. The sequence of failure started with the vertical stabilizer getting ripped off.\n\nSection::::Effects on aircraft.:Pilot rules.\n" ]
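The two headline rates in the statistics passages can be verified with one-line arithmetic; this simply recomputes the figures quoted above.

```python
# 2016 fatality rate for planes heavier than 5.7 t, as quoted in the passages.
fatalities, departures_millions = 182, 34.9
print(round(fatalities / departures_millions, 2))  # -> 5.21 fatalities per million departures

# Driving vs. flying in the United States, per the quoted 2000-2010 figures.
flying_rate = 0.2     # deaths per 10 billion passenger-miles
driving_rate = 150.0  # deaths per 10 billion vehicle-miles
print(driving_rate / flying_rate)  # -> 750.0, the "750 times higher per mile" figure
```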
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-18238
How do sinkholes form, why, and how do countries like China deal with fixing them, if at all?
Underground streams, or water percolating through rock, dissolve or wear away material, creating cave systems. In places where the rock doesn't have much strength, repeated roof collapses mean that the void effectively moves upwards as rock falls downwards. Eventually there is just a thin crust which can't support the weight above, and it opens up to the surface.
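A deliberately crude numeric cartoon of the process described in that answer: each roof fall moves the void's ceiling upward, and the surface fails once the remaining crust is too thin to carry the load above it. Every number here (starting depth, collapse step, stability threshold) is invented purely for illustration.

```python
# Toy model of a cave void migrating upward through weak rock until the surface collapses.
void_ceiling_depth = 30.0  # m of rock between the void's ceiling and the surface (invented)
collapse_step = 2.0        # m of roof lost in each collapse event (invented)
min_stable_crust = 5.0     # thinnest crust that can still span the void (invented)

event = 0
while void_ceiling_depth > min_stable_crust:
    void_ceiling_depth -= collapse_step  # the roof falls in; the void moves upward
    event += 1
    print(f"collapse {event}: crust is now {void_ceiling_depth:.0f} m thick")

print("crust can no longer support the weight above -> a sinkhole opens at the surface")
```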
[ "Section::::Dissolution of limestone.\n\nSubsidence frequently causes major problems in karst terrains, where dissolution of limestone by fluid flow in the subsurface creates voids (i.e., caves). If the roof of a void becomes too weak, it can collapse and the overlying rock and earth will fall into the space, causing subsidence at the surface. This type of subsidence can cause sinkholes which can be many hundreds of meters deep.\n\nSection::::Mining.\n", "Section::::Geological disasters.:Sinkholes.\n\nWhen natural erosion, human mining or underground excavation makes the ground too weak to support the structures built on it, the ground can collapse and produce a sinkhole. For example, the 2010 Guatemala City sinkhole which killed fifteen people was caused when heavy rain from Tropical Storm Agatha, diverted by leaking pipes into a pumice bedrock, led to the sudden collapse of the ground beneath a factory building.\n\nSection::::Geological disasters.:Volcanic eruptions.\n", "The karst topography also poses difficulties for human inhabitants. Sinkholes can develop gradually as surface openings enlarge, but progressive erosion is frequently unseen until the roof of an underground cavern suddenly collapses. Such events have swallowed homes, cattle, cars, and farm machinery. In the United States, sudden collapse of such a cavern-sinkhole swallowed part of the collection of the National Corvette Museum in Bowling Green, Kentucky in 2014.\n\nSection::::Interstratal karst.\n", "Some sinkholes form in thick layers of homogenous limestone. Their formation is facilitated by high groundwater flow, often caused by high rainfall; such rainfall causes formation of the giant sinkholes in the Nakanaï Mountains, on the New Britain island in Papua New Guinea. On the contact of limestone and insoluble rock below it, powerful underground rivers may form, creating large underground voids.\n\nIn such conditions, the largest known sinkholes of the world have formed, like the deep Xiaozhai Tiankeng (Chongqing, China), giant sótanos in Querétaro and San Luis Potosí states in Mexico and others.\n", "As the rock dissolves, spaces and caverns develop underground. These sinkholes can be dramatic, because the surface land usually stays intact until there is not enough support. Then, a sudden collapse of the land surface can occur.\n\nOn 2 July 2015, scientists reported that active pits, related to sinkhole collapses and possibly associated with outbursts, were found on the comet 67P/Churyumov-Gerasimenko by the \"Rosetta\" space probe.\n\nSection::::Formation.:Artificial processes.\n", "Cover-collapse sinkholes or \"dropouts\" form where so much soil settles down into voids in the limestone that the ground surface collapses. The surface collapses may occur abruptly and cause catastrophic damages. 
New sinkhole collapses can also form when man changes the natural water-drainage patterns in karst areas.\n\nSection::::Classification.:Pseudokarst sinkholes.\n\nPseudokarst sinkholes resemble karst sinkholes but formed by processes other than the natural dissolution of rock.\n\nSection::::Man’s activities accelerate cover-collapse sinkholes.\n", "BULLET::::- Otjikoto Lake – a sinkhole lake that was created by a collapsing karst cave in Namibia\n\nSection::::Sinkholes of South Africa.\n\nBULLET::::- Blyvooruitzicht sinkholes - ancient sinkhole in South Africa\n\nBULLET::::- Boesmansgat – in South Africa; believed to be the sixth-deepest submerged freshwater cave (or sinkhole) in the world\n\nSection::::Sinkholes of Turkey.\n\nBULLET::::- Akhayat sinkhole – sinkhole in Mersin Province, Turkey\n\nBULLET::::- Cennet and Cehennem – two large sinkholes in the Taurus Mountains, in Mersin Province\n\nBULLET::::- Egma Sinkhole – sinkhole and the deepest cave in Turkey\n\nBULLET::::- Kanlıdivane – ancient city situated around a big sinkhole in Mersin Province\n", "Section::::Geological hazards.:Sinkhole.\n\nA sinkhole is a localized depression in the surface topography, usually caused by the collapse of a subterranean structure such as a cave. Although rare, large sinkholes that develop suddenly in populated areas can lead to the collapse of buildings and other structures.\n\nSection::::Geological hazards.:Volcanic eruption.\n", "BULLET::::- Shaanxi tiankeng cluster, in the Daba Mountains of southern Shaanxi, China, covers an area of nearly 5019 square kilometers with the largest sinkhole being 520 meters in diameter and 320 meters deep.\n\nBULLET::::- Teiq Sinkhole (Taiq, Teeq, Tayq) in Oman is one of the largest sinkholes in the world by volume: . Several perennial wadis fall with spectacular waterfalls into this deep sinkhole.\n\nBULLET::::- Xiaozhai Tiankeng – Chongqing Municipality, China. Double nested sinkhole with vertical walls, deep.\n\nSection::::Notable examples.:In the Caribbean.\n", "The Guatemala City holes are instead an example of \"piping pseudokarst\", created by the collapse of large cavities that had developed in the weak, crumbly Quaternary volcanic deposits underlying the city. Although weak and crumbly, these volcanic deposits have enough cohesion to allow them to stand in vertical faces and to develop large subterranean voids within them. A process called \"soil piping\" first created large underground voids, as water from leaking water mains flowed through these volcanic deposits and mechanically washed fine volcanic materials out of them, then progressively eroded and removed coarser materials. Eventually, these underground voids became large enough that their roofs collapsed to create large holes.\n", "BULLET::::- Cenotes – This refers to the characteristic water-filled sinkholes in the Yucatán Peninsula, Belize and some other regions. Many cenotes have formed in limestone deposited in shallow seas created by the Chicxulub meteorite's impact.\n\nBULLET::::- Sótanos – This name is given to several giant pits in several states of Mexico.\n\nBULLET::::- Tiankengs – These are extremely large sinkholes, typically deeper and wider than , with mostly vertical walls, most often created by the collapse of underground caverns. 
The term means \"sky holes\" in Chinese; many sinkholes of this largest type are located in China.\n", "Section::::Notable examples.\n\nSome of the largest sinkholes in the world are:\n\nSection::::Notable examples.:In Africa.\n\nBULLET::::- Blue Hole – Dahab, Egypt. A round sinkhole or blue hole, deep. It includes an archway leading out to the Red Sea at , which has been the site for many freediving and scuba attempts, the latter often fatal.\n\nBULLET::::- Boesmansgat – South African freshwater sinkhole, approximately deep.\n\nBULLET::::- Lake Kashiba – Zambia. About 3.5 hectares (8.6 acres) in area and about deep.\n\nSection::::Notable examples.:In Asia.\n", "BULLET::::- Lapa Terra Ronca – a dolomitic limestone cave inside the area of the Terra Ronca State Park in Brazil\n\nSection::::Sinkholes of China.\n\nBULLET::::- Dragon Hole - the deepest underwater sinkhole (blue hole), located in the Drummond Island reef of the Paracel Islands in the South China Sea.\n\nBULLET::::- Xiaozhai Tiankeng - the deepest sinkhole in the world (over 2,100 feet), located in Fengjie County of Chongqing Municipality.\n\nSection::::Sinkholes of Croatia.\n\nBULLET::::- Blue Lake – a karst lake located near Imotski in southern Croatia\n\nBULLET::::- Red Lake – a sinkhole containing a karst lake near the city of Imotski, Croatia\n", "BULLET::::- West Rand, Gauteng and North West Province, KwaZulu Natal.\n\nSection::::Asia.\n\nSection::::Asia.:China.\n\nBULLET::::- Area around Guilin and Yangshuo\n\nBULLET::::- Jiuzhaigou Valley and Huanglong Scenic and Historic Interest Area, (UNESCO World Heritage Site)\n\nBULLET::::- Shaanxi tiankeng cluster, discovered in 2016, is one of the largest in the world, comprising forty-nine sinkholes and more than fifty funnels ranging from 50–100 metres in diameter.\n\nBULLET::::- South China Karst, World Heritage Site\n\nBULLET::::- Stone Forest\n\nBULLET::::- Xiaozhai Tiankeng, also known as the Heavenly Pit, is the world's largest sinkhole.\n", "Collapses, commonly incorrectly labeled as sinkholes, also occur due to human activity, such as the collapse of abandoned mines and salt cavern storage in salt domes in places like Louisiana, Mississippi and Texas. More commonly, collapses occur in urban areas due to water main breaks or sewer collapses when old pipes give way. They can also occur from the overpumping and extraction of groundwater and subsurface fluids.\n", "Since the late 1990s, Dr. Marcus Gary, a hydrogeologist at the Edwards Aquifer Authority and adjunct professor at the Jackson School of Geosciences, University of Texas at Austin, has studied Sistema Zacatón to understand how the sinkholes formed and how they evolve over time. During these studies, Gary made extensive use of a number of investigative tools, including those on the DEPTHX probe, geophysics, isotope geochemistry, field mapping, and geomicrobiology. Gary was a primary member and co-PI on the DEPTHX mission, which used an autonomous underwater robot to explore the deepest parts of Zacatón for the first time.\n", "Section::::Occurrence.\n\nSinkholes tend to occur in karst landscapes. Karst landscapes can have up to thousands of sinkholes within a small area, giving the landscape a pock-marked appearance. These sinkholes drain all the water, so there are only subterranean rivers in these areas. Examples of karst landscapes with a plethora of massive sinkholes include Khammouan Mountains (Laos) and Mamo Plateau (Papua New Guinea). 
The largest known sinkholes formed in sandstone are Sima Humboldt and Sima Martel in Venezuela.\n", "Section::::Formation of the sinkhole.\n", "Sinkholes can also form when natural water-drainage patterns are changed and new water-diversion systems are developed. Some sinkholes form when the land surface is changed, such as when industrial and runoff-storage ponds are created; the substantial weight of the new material can trigger a collapse of the roof of an existing void or cavity in the subsurface, resulting in development of a sinkhole.\n\nSection::::Classification.\n\nSection::::Classification.:Solution sinkholes.\n", "An American Society of Civil Engineers publication says the potential for sinkhole collapse must be a part of land-use planning in karst areas. Since water level changes accelerate sinkhole collapse, measures must be taken to minimize water level changes. Where sinkhole collapse of structures could cause loss of life the public should be made aware of the risks. The areas most susceptible to sinkhole collapse can be identified and avoided. A 1987 U.S. Geological Survey publication says \"Many induced sinkholes develop with little or no advance warning\" while others are preceded by warning features such as cracks, sagging, jammed doors, cracking noises,etc. Another U.S. Geological Survey publication says \"Sinkhole density is an important factor for determining the area most prone to sinkhole development. Where a closed depression has collapsed into a sinkhole we know that the underlying subsurface contains unstable voids, and possibly a cave system. In areas where active sinkholes have developed there is a greater possibility that a new sinkhole will form (Brezinski, 2004; Zhou, 2003).\" Where large cavities exist in the limestone large surface collapses can occur like the Winter Park, Florida sinkhole collapse. Recommendations for land uses in karst areas should avoid or minimize alterations of the land surface and natural drainage. Geotechnical engineers say the current understanding of karst development allows proper site characterization to avoid karst disasters. Most sinkhole disasters are recognizable, predictable, and preventable rather than “acts of God”. In karst areas the traditional foundation evaluations (bearing capacity and settlement) of the ability of soil to support a structure only comes after acceptable results from the geotechnical site investigation for cavities and defects in the underlying rock. Since the soil/rock surface in karst areas are very irregular the number of subsurface samples (borings and core samples) required per unit area is usually much greater than in non-karst areas.\n", "Unusual processes have formed the enormous sinkholes of Sistema Zacatón in Tamaulipas (Mexico), where more than 20 sinkholes and other karst formations have been shaped by volcanically heated, acidic groundwater. 
This has produced not only the formation of the deepest water-filled sinkhole in the world—Zacatón—but also unique processes of travertine sedimentation in upper parts of sinkholes, leading to sealing of these sinkholes with travertine lids.\n", "BULLET::::- Sistema Sac Actun – an underwater cave system situated along the Caribbean coast of the Yucatán Peninsula with passages to the north and west of the village of Tulum, in the state of Quintana Roo\n\nBULLET::::- Zacatón – a thermal water filled sinkhole belonging to the Zacatón system - a group of unusual karst features located in Aldama Municipality near the Sierra de Tamaulipas in the northeastern state of Tamaulipas\n\nSection::::Sinkholes of Namibia.\n\nBULLET::::- Lake Guinas – a sinkhole lake, created by a collapsing karst cave, located thirty north of Tsumeb, Namibia\n", "Strategies for mitigating the risk due to sinkholes and subsidence include filling the voids, which can be costly due to the amount of material needed, and construction techniques so that structures are not prone to sinking. Strategies for managing the risk include insurance for subsidence and sinkholes.\n", "BULLET::::- Concentrated leak: seeping water erodes and enlarges a crack until a breach occurs. The crack may not progress to the exit (albeit failure is still possible), but eventually the continued erosion forms a pipe or a sinkhole.\n\nBULLET::::- Backward erosion: initiated at the exit point of the seepage path, this type of erosion occurs when the hydraulic gradient is sufficiently high to cause particle detachment and transport; a pipe forms backwards from the exit point until breach.\n", "Sinkholes have been used for centuries as disposal sites for various forms of waste. A consequence of this is the pollution of groundwater resources, with serious health implications in such areas. The Maya civilization sometimes used sinkholes in the Yucatán Peninsula (known as cenotes) as places to deposit precious items and human sacrifices.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-04202
Why does the US both buy steel from AND sell steel to Canada at the same time?
Two things come to mind: 1) Canada and the US are both large countries. There could be places in the US that are closer to Canadian steel producers than to those in the US and vice versa. 2) There are many different kinds of steel. Not all manufacturers will produce all types.
[ "BULLET::::- Dofasco, Canada's largest steel maker acquired by Luxembourg-based Arcelor, January 2006.\n\nBULLET::::- Noranda (mining company) & Falconbridge Ltd., purchased by Swiss mining company Xstrata in 2006. Noranda had earlier been a target of state-owned China Metals Corp., but had backed out in 2005 amid public concern in Canada of Chinese state control of such a major company.\n\nBULLET::::- ATI Technologies, Canada's graphics chip maker, acquired by Advanced Micro Devices, July 2006.\n\nBULLET::::- Stelco, Canada's last major independent steel producer, taken over by United States Steel in August 2007.\n\nBULLET::::- Alcan purchased by Rio Tinto in 2007.\n", "BULLET::::- Stothert & Pitt (Canada) Limited, Montreal, QC\n\nBULLET::::- Stowell Screw Company Limited, Longueuil, QC\n\nBULLET::::- Trenton Industries Limited, Trenton, NS\n\nBULLET::::- Trenton Steel Works Limited, Trenton, NS\n\nBULLET::::- Truscon Steel Company of Canada Limited, Walkerville, ON\n\nSection::::See also.\n\nBULLET::::- Dominion, Nova Scotia\n\nSection::::External links.\n\nBULLET::::- Industrial Heritage of Cape Breton Island\n\nBULLET::::- \"The Sun Rises in the East\", an address to The Empire Club of Canada by Lionel Avard Forsyth, President of the Dominion Steel and Coal Corporation Ltd. on October 22, 1953.\n", "BULLET::::- Canadian Bridge Engineering Company Limited, Walkerville, ON\n\nBULLET::::- Canadian Steel Corporation Limited, Ojibway, ON\n\nBULLET::::- Canadian Steel Lands Limited, Ojibway, ON\n\nBULLET::::- Canadian Transmission Tower Company Limited, Montreal, QC\n\nBULLET::::- Canadian Tube and Steel Products Limited, Montreal, QC\n\nBULLET::::- Cibou Steamship Company Limited, London, UK\n\nBULLET::::- Dominion Coal Company, Glace Bay, NS\n\nBULLET::::- Dominion Coal Import Company Limited, Montreal, QC\n\nBULLET::::- Dominion Rolling Stock Company Limited, Sydney, NS\n\nBULLET::::- Cumberland Railway and Coal Company, Springhill, NS\n\nBULLET::::- Sydney and Louisburg Railway Company, Glace Bay, NS\n\nBULLET::::- Dominion Iron and Steel Company, Sydney, NS\n\nBULLET::::- Dominion Limestone Limited, Aguathuna\n", "In 1940, Canadian heavy industry is converting to a war footing, with a new \"front of steel\" confronting the Axis powers, led by Nazi Germany. Steel is the weapon of war used by the nation that had chosen \"guns before butter\" and unleashed its lightning blitzkrieg attacks on Europe.\n", "One of the examples cited in Johnston's book is that of J. D. Rockefeller deciding where to build his first major oil refinery. Instead of taking the easier, cheaper route from the oil fields to refine his petroleum in Pittsburgh, Rockefeller chose to build his refinery in Cleveland. Why? Because rail companies would be transporting his refined oil to market. Pittsburgh had just one major railroad, meaning it could dictate prices in negotiations, while Cleveland had three railroads that Rockefeller knew would compete for his business, potentially reducing his costs significantly. 
The leverage gained in these rail negotiations more than offset the additional operating costs of sending his oil to Cleveland for refining, helping establish Rockefeller's empire, while undermining his competitors who failed to integrate their core operating decisions with their negotiation strategies.\n", "This relationship has been a critical one as Congressman Visclosky and the Congressional Steel Caucus have supported, and voted in favor of, tariffs on foreign steel entering the United States from countries that illegally, and in violation of international agreements, subsidize and governmentally support their steel companies and industry, thereby allowing them to sell steel at an unfair and below market price, a practice known as \"Steel Dumping\".\n", "The Indian steel industry began expanding into Europe in the 21st century. In January 2007 India's Tata Steel made a successful $11.3 billion offer to buy European steel maker Corus Group. In 2006 Mittal Steel (based in London but with Indian management) merged with Arcelor after a takeover bid for $34.3 billion to become the world's biggest steel maker, ArcelorMittal (based in Luxembourg City), with 10% of the world's output.\n\nSection::::Asia.:China.\n", "Management feared that an Icahn deal with the union would lead to losing control of the company. The union remained publicly neutral but internally came to believe that Icahn merely wanted to extract short-term cash from USX and was not really interested in the long-term health of the steel industry.\n\nManagement and the union resumed direct negotiations on October 21, 1986, but talks had broken down by November 21.\n\nSection::::Return to work.\n", "BULLET::::- Dominion Shipping Company Limited, Sydney, NS\n\nBULLET::::- Dominion Wabana Ore Limited, Bell Island, NL\n\nBULLET::::- DOSCO Overseas Engineering Limited, Beaconsfield, UK\n\nBULLET::::- Empire Housing Company Limited, Sydney, NS\n\nBULLET::::- Essex Terminal Railway Company, Walkerville, ON\n\nBULLET::::- Eastern Car Company Limited, Trenton, NS\n\nBULLET::::- Graham Nail & Wire Products Limited, Toronto, ON\n\nBULLET::::- Halifax Shipyards Limited, Halifax, NS\n\nBULLET::::- Nova Scotia Steel and Coal Company Limited, Trenton, NS\n\nBULLET::::- Old Sydney Collieries Limited, Sydney Mines, NS\n\nBULLET::::- James Pender & Company, Saint John, NB\n\nBULLET::::- Scotia Rolling Stock Company Limited, Trenton, NS\n\nBULLET::::- Seaboard Power Corporation Limited, Glace Bay, NS\n", "The Allied nations realized that only steel could challenge steel, and in the United Kingdom and Canada, industrial workers responded with total energy and efficiency. On the home front, industrial production soared with factories converting to munitions in 100 Canadian cities and towns, with committed Canadians entering the workforce in large numbers.\n", "Precision and standardization allowed for rapid production of steel products. As the epitome of a new mechanized steel weapon, the light and simple to use Bren gun was manufactured in Canada in a complex operation that involved 2,800 smaller processes employing 18,000 tools and jigs. Steel is also used in thin sheets to create the submarine chasers coming from Canadian shipyards, in the trucks, armour and ambulances rolling out of factories that were formerly manufacturing automotive products, even in steel ball bearings, critical parts for mechanized warfare.\n", "However, pressure from foreign steel manufacturers led to a loss of orders during the mid-1980s. 
Canadian steel manufacturers, in particular, were cited for dumping more than 120,000 tons of steel a year into the United States during the late 1980s. While the Canadian market share amounted to about 5 percent at the time, a majority of the Canadian imports were being dumped in the Northeastern United States, which was Lehigh Structural Steel's prime market.\n", "The next meeting of the Combined Policy Committee on 15 April 1946 produced no accord on collaboration, and resulted in an exchange of cables between Truman and Attlee. Truman cabled on 20 April that he did not see the communiqué he had signed as obligating the United States to assist Britain in designing, constructing and operating an atomic energy plant. Attlee's response on 6 June 1946 \"did not mince words nor conceal his displeasure behind the nuances of diplomatic language.\" At issue was not just technical co-operation, which was fast disappearing, but the allocation of uranium ore. During the war this was of little concern, as Britain had not needed any ore, so all the production of the Congo mines and all the ore seized by the Alsos Mission had gone to the United States, but now it was also required by the British atomic project. Chadwick and Groves reached an agreement by which ore would be shared equally.\n", "Railway and locomotive construction in the latter 19th century created a huge demand for steel. The Bessemer furnace at the Algoma steel mill in Sault Ste. Marie, Ontario went into operation in 1902. The Montreal Rolling Mills Co, The Hamilton Steel and Iron Company, the Canada Screw Company, the Canada Bolt and Nut Company, and the Dominion Wire Manufacturing Company were consolidated in 1910 to form The Steel Company of Canada headquartered in Toronto. With mills located in Hamilton and other cities, it was the largest producer of steel in Canada for most of the century. Its competitor, the Dominion Steel Castings Company Limited founded in 1912, renamed the Dominion Foundries and Steel Company in 1917 and Dofasco in 1980, had its Hamilton facilities located next to those of Stelco.\n", "Section::::The Automobile Age (1920–1950).:Industry.\n\nWith the rail building era coming to an end, the rise of the automotive industry in southern Ontario provided the Hamilton steel mills of the Steel Company of Canada and the Dominion Foundries and Steel Company with a new market. Dofasco introduced basic oxygen steelmaking at its mills in Hamilton in 1954. In the latter part of the century, Algoma, in Sault Ste. Marie, built coke oven batteries and blast furnaces, while phasing out the open-hearth and Bessemer steel-making process in favour of basic oxygen steel-making.\n", "BULLET::::- The 1989 Canada–United States Free Trade Agreement between Canada and the United States. Rather than continuing to negotiate a series of smaller bilateral trade deals by sector, Canada, at one-tenth the size of America, reached a wide-ranging trade agreement. 
The deal was large enough to attract the attention of its southern neighbor, while valuable enough to the Americans to ensure Canada could also negotiate important protections for its culture, healthcare and education.\n", "BULLET::::- In May 2007, a deal to acquire IPSCO steel (of Saskatchewan) for $7.7 billion was announced by SSAB Swedish Steel AB.\n\nBULLET::::- In 2007, US Steel completed a CAD$1.9bn takeover of the bankrupt Stelco works (largely in Hamilton), which were subsequently idled in 2010 and finally shut down in October 2013.\n\nBULLET::::- In 2007, the UK Tate & Lyle conglomerate announced the sale of its Redpath Sugar refining business to American Sugar Refining.\n\nBULLET::::- In June 2007, the ED Smith Income Fund accepted a takeover for CAD$217mn from TreeHouse Foods of Chicago.\n", "Section::::Canadian Steel Improvement.\n", "Section::::Development.\n", "This was a significant signing, which had been foreshadowed in a media war between John L. Lewis, the head of the CIO, and the Ontario Premier, Mitchell Hepburn, in the spring of 1937. Immediately following a contentious 16-day strike of the 3,700 employees at the Oshawa General Motors plant,\n", "Section::::Labour disputes.\n\nLabour disputes at \"Ontario Malleable\" were commonplace prior to, and after, unionization in 1937. The first recorded strike in Oshawa was at OMIC in 1883.\n\nSection::::Labour disputes.:1900.\n", "Section::::Corporate predecessors.\n", "Section::::Other versions.\n\nSection::::Other versions.:\"DC: The New Frontier\".\n", "In the early days of the Reagan Administration, steel firms won substantial tax breaks in order to compete with imported goods. But instead of modernizing their mills, steel companies shifted capital out of steel and into more profitable areas. In March 1982, U.S. Steel took its concessions and paid $1.4 billion in cash and $4.7 billion in loans for Marathon Oil, saving approximately $500 million in taxes through the merger. The architect of tax concessions to steel firms, Senator Arlen Specter (R-PA), complained that \"we go out on a limb in Congress and we feel they should be putting it in steel.\" The events are the subject of a song by folk singer Anne Feeney.\n", "The Steelworkers continue to have a contentious relationship with U.S. Steel, but far less so than the relationship that other unions had with employers in other industries in the United States. They launched a number of long strikes against U.S. Steel in 1946 and a 116-day strike in 1959, but those strikes were over wages and benefits and not the more fundamental issue of union recognition that led to violent strikes elsewhere.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-01929
Vehicles have tires, tires wear down... Where does all of the rubber go from tires wearing down?
It peels off in tiny bits. Those are called “marbles” in racing where you can actually see them after a long race when cars lose their rubber at the exact same spots over and over again. In real life you don’t see it of course because road cars don’t lose rubber that much and it constantly gets blown away by wind and washed away by rain.
[ "BULLET::::- Rubber: Biodiesel also affects types of natural rubbers found in some older engine components. Studies have also found that fluorinated elastomers (FKM) cured with peroxide and base-metal oxides can be degraded when biodiesel loses its stability caused by oxidation. Commonly used synthetic rubbers FKM-GBL-S and FKM-GF-S found in modern vehicles were found to handle biodiesel in all conditions.\n\nSection::::Technical standards.\n\nBiodiesel has a number of standards for its quality including European standard EN 14214, ASTM International D6751, and others.\n\nSection::::Low temperature gelling.\n", "The dried material is then baled and palletized for storage and shipment.\n\nSection::::Production.:Vulcanized rubber.\n\nNatural rubber is often vulcanized - a process by which the rubber is heated and sulfur, peroxide or bisphenol are added to improve resistance and elasticity and to prevent it from perishing. Carbon black is often used as an additive to rubber to improve its strength, especially in vehicle tires, which account for about 70% (~9 million tons) of carbon black production.\n\nSection::::Production.:Transportation.\n", "One common example of chemically assisted degradation is the degradation of rubber by ozone. Ozone is a naturally occurring atmospheric molecule that is produced by electric discharge or through a reaction of oxygen with solar radiation. Ozone is also produced when atmospheric pollutants react with ultraviolet radiation. For a reaction to occur, ozone concentrations only have to be as low as 3–5 parts per hundred million, and when these concentrations are reached, a reaction occurs with a thin surface layer (5 × 10⁻⁷ metres) of the material.\n", "Latex coagulates in the cups if kept for long and must be collected before this happens. The collected latex, \"field latex\", is transferred into coagulation tanks for the preparation of dry rubber or transferred into air-tight containers with sieving for ammoniation. Ammoniation preserves the latex in a colloidal state for longer periods of time.\n", "Over the longer term, tires do wear out (2000 to 5000 miles); a rash of punctures is often the most visible sign of a worn tire.\n\nSection::::Maintenance and repair.:Repair.\n\nVery few bicycle components can actually be repaired; replacement of the failing component is the normal practice.\n", "Section::::Recycled pavement material.\n\nRubberized asphalt is the largest market for crumb rubber in the United States, consuming an estimated , or approximately 12 million tires annually. Crumb rubber is also used as ground cover under playground equipment, and as a surface material for running tracks and athletic fields.\n\nSection::::Composition.\n\nBULLET::::- 71% recoverable rubber\n\nBULLET::::- 14% steel\n\nBULLET::::- 3% fiber\n\nBULLET::::- 12% extraneous material\n\nSection::::Grading.\n\nThe following are common classifications of crumb rubber:\n\nRetreaders tire buffings shall consist of clean, fresh, dry buffings from tire retread preparation operations.\n", "When skidding occurs, the tyres can be rubbed flat, or even burst. Aircraft tyres have much shorter lifetimes than car tyres for these reasons. Since Maxaret reduced the skidding, spreading it out over the entire surface of the tyre, the tyre lifetime is improved. One early tester summed up the system thus:\n", "Section::::End of use.:Environmental issues.\n", "Inflation is key to proper wear and rolling resistance of pneumatic tires. 
Many vehicles have monitoring systems to assure proper inflation. \n", "In most of the United States, a fee is included in every new tire that is sold. Fees can be collected by states, importers, and sellers, the latter being the most common case. These fees are collected to help support tire-recycling programs throughout the states. State tire-recycling programs are created to reduce the amount of scrap tires in stockpiles. The table below shows the tire fees in each state:\n\nSection::::U.S.-Mexico border issues.\n", "Latex is practically a neutral substance, with a pH of 7.0 to 7.2. However, when it is exposed to the air for 12 to 24 hours, its pH falls and it spontaneously coagulates to form a solid mass of rubber.\n", "57% of the 260,000 tonnes of used tires estimated to be thrown away each year in Brazil were sent to cement ovens in Brazil. In Brazil, used tires are applied to make artificial reefs in the sea, to increase fisheries production. Energy can be recovered by burning the tires in controlled ovens, because each tire contains the energy of 9.4 liters of petroleum oil.\n\nSection::::Materials.:Plastic.\n\nAn average of 17.5% of all rigid and film plastic is recycled each year in Brazil. 60% of the recycled plastic comes from industrial residue and 40% from urban refuse.\n\nSection::::Materials.:Refrigerator.\n", "The ozone molecules react with the rubber which in most cases is unsaturated (contains double bonds), however a reaction will still occur in saturated polymers (those containing only single bonds). When reaction occurs, scission of the polymer chain (breaking of double covalent bonds) takes place forming decomposition products:\n", "The rubber is then spread on platforms under large sheds, until the women workers of the post have cut it into neat little cubes. This done, for three months it lies in layers on the platforms to dry and is turned once a fortnight, till all the moisture has evaporated. During this process it loses some 25% in weight.\n", "The properties of the gas, liquid, and solid output are determined by the type of feed-stock used and the process conditions. For instance whole tires contain fibers and steel. Shredded tires have most of the steel and sometimes most of the fiber removed. Processes can be either batch or continuous. The energy required to drive the decomposition of the rubber include using directly fired fuel (like a gas oven), electrical induction (like an electrically heated oven) or by microwaves (like a microwave oven). Sometimes a catalyst is used to accelerate the decomposition. The choice of feed-stock and process can affect the value of the finished products.\n", "Asphalt concrete that is removed from a pavement is usually stockpiled for later use as aggregate for new hot mix asphalt at an asphalt plant. This reclaimed material, or RAP, is crushed to a consistent gradation and added to the HMA mixing process. Sometimes waste materials, such as asphalt roofing shingles, crushed glass, or rubber from old tires, are added to asphalt concrete as is the case with rubberized asphalt, but there is a concern that the hybrid material may not be recyclable.\n\nSection::::See also.\n\nBULLET::::- Asphalt\n\nBULLET::::- Asphalt plant\n\nBULLET::::- Free floating screed\n\nBULLET::::- Paver\n\nBULLET::::- Plastic armour\n", "In the United States in 2017, about 43% of scrap tires (1,736,340 tons or 106 million tires) were burnt as tire-derived fuel. 
Cement manufacturing was the largest user of TDF, at 46%, pulp and paper manufacturing used 29% and electric utilities used 25%. Another 25% of scrap tires were used to make ground rubber, 17% were disposed of in landfills and 16% had other uses.\n\nSection::::Theory.\n", "Section::::Effects.:Environmental.\n\nSmall debris particles and dust (primarily from tire wear and vehicle exhaust particulates) constitute a significant problem when they are washed into the soil and leak into groundwater reservoirs through surface runoff, especially urban runoff. Roadside soil and water contamination can result when the concentration of harmful constituents is high enough. The greater the surface area of synthetic rubber waste fragments, the greater the potential for breakdown into harmful constituents. For leached tire debris, the potential environmental impact of the ingredients zinc and organic toxicants has been demonstrated.\n\nSection::::Prevention.\n", "Tire maintenance\n\nTire maintenance for motor vehicles is based on several factors. The chief reason for tire replacement is friction from moving contact with road surfaces, causing the tread on the outer perimeter of tires to eventually wear away. When the tread depth becomes too shallow (less than 4/32 in, or 3.2 mm), the tire is worn out and should be replaced. The same wheels can usually be used throughout the lifetime of the car. Other problems encountered in tire maintenance include:\n\nBULLET::::- Uneven or accelerated tire wear: can be caused by under-inflation, overloading or poor wheel alignment.\n", "The estimated per capita emission ranges from 0.23 to 4.7 kg/year, with a global average of 0.81 kg/year. The emissions from car tires (100%) are substantially higher than those of other sources of microplastics, e.g., airplane tires (2%), artificial turf (12–50%), brake wear (8%), and road markings (5%). Emissions and pathways depend on local factors like road type or sewage systems. The relative contribution of tire wear and tear to the total global amount of plastics ending up in our oceans is estimated to be 5–10%. In air, 3–7% of the particulate matter (PM) is estimated to consist of tire wear and tear, indicating that it may contribute to the global health burden of air pollution which has been projected by the World Health Organization (WHO) at 3 million deaths in 2012. The wear and tear also enter our food chain, but further research is needed to assess human health risks.\n", "Starch is not often used alone as a plastic material because of its brittle nature, but is commonly used as a biodegradation additive. Many plasticizers use starch-glycerol-water to modify starch’s brittle nature. Biodegradation of this blend was tested and was found that by the second day the degraded carbon had already attained about 100% of the initial carbon of the sample.\n\nSection::::Biodegradable materials.:Synthetic biodegradable polymer.\n", "Another way to distinguish between composition and aggregation in modeling the real world, is to consider the relative lifetime of the contained object. For example, if a Car object contains a Chassis object, a Chassis will most likely not be replaced during the lifetime of the Car. It will have the same lifetime as the car itself; so the relationship is one of composition. On the other hand, if the Car object contains a set of Tire objects, these Tire objects may wear out and get replaced several times. Or if the Car becomes unusable, some Tires may be salvaged and assigned to another Car. 
At any rate, the Tire objects have different lifetimes than the Car object; therefore the relationship is one of aggregation.\n", "Section::::Devulcanization.\n\nThe market for new raw rubber or equivalent is large. The auto industry consumes a substantial fraction of natural and synthetic rubber. Reclaimed rubber has altered properties and is unsuitable for use in many products, including tires. Tires and other vulcanized products are potentially amenable to devulcanization, but this technology has not produced material that can supplant unvulcanized materials. The main problem is that the carbon-sulfur linkages are not readily broken without the input of costly reagents and heat. Thus, more than half of scrap rubber is simply burned for fuel.\n\nSection::::Inverse vulcanization.\n", "At high extensions some of the energy stored in the stretched network chain is due to a change in its entropy, but most of the energy is stored in bond distortions (regime II, above) which do not involve an entropy change. If one assumes that all of the stored energy is converted to kinetic energy, the retraction velocity may be calculated directly from the familiar conservation equation E = ½mv². Numerical simulations, based on the Molecular Kink paradigm, predict velocities consistent with this experiment.\n\nSection::::Historical approaches to elasticity theory.\n", "In some applications, there is a complicated set of trade-offs in choosing materials. For example, soft rubbers often provide better traction but also wear faster and have higher losses when flexed—thus reducing efficiency. Choices in material selection may have a dramatic effect. For example: tires used for track racing cars may have a life of 200 km, while those used on heavy trucks may have a life approaching 100,000 km. The truck tires have less traction and also thicker rubber.\n" ]
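The composition-versus-aggregation passage in the list above boils down to a lifetime test that is easier to see in code than in prose. Below is a minimal sketch in Python; the Car, Chassis, and Tire classes are hypothetical names invented for this illustration, not taken from any real library or from the quoted source.

# Hypothetical classes illustrating the lifetime test described above.
class Chassis:
    """Built inside the Car and sharing its lifetime: composition."""

class Tire:
    """Created outside the Car, so it can be salvaged and reused: aggregation."""
    def __init__(self, tread_mm):
        self.tread_mm = tread_mm

class Car:
    def __init__(self, tires):
        self.chassis = Chassis()  # composition: no outside reference exists
        self.tires = tires        # aggregation: references passed in from outside

tires = [Tire(8.0) for _ in range(4)]
car = Car(tires)
del car            # the Chassis becomes unreachable along with the Car...
print(len(tires))  # ...but the Tire objects live on: prints 4

If this reading is right, the distinction is about ownership and lifetime rather than the shape of the class diagram: the same tires list could later be handed to a second Car, exactly as the salvage example in the passage suggests.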
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-03456
How do figure skaters not get dizzy when spinning 20+ times in a row?
They get dizzy all the time, but they get used to it by practicing.
[ "When performing some types of spin, an elite skater can complete on average six rotations per second, and up to 70 rotations in a single spin. However, this is rarely seen in modern competitions because it would gain no extra points for the spin.\n", "If a skater performs a spin that has no basic position with only two revolutions, or with less than two revolutions, he or she does not fulfill the position requirement for the spin, and receives no points for it. A spin with less than three revolutions is not considered a spin; rather, it is considered a skating movement (S&P/ID 2018, p. 103). The flying spin and any spin that only has one position must have six revolutions; spin combinations must have 10 revolutions. Required revolutions are counted from when the skater enters the spin until he or she exits it, except for flying spins and the spins in which the final wind-up is in one position. Skaters increase the difficulty of camel spins by grabbing their leg or blade while performing the spin.\n", "A figure skater only needs to be able to spin in one direction, either clockwise or counter-clockwise. Most skaters favor a counter-clockwise direction of rotation when spinning (as in jumping), but there are some skaters who prefer to spin in the clockwise direction. A small minority of skaters are able to spin in both directions. Spins may be performed on either foot. For skaters who rotate in a counter-clockwise direction, a spin on the left foot is called a forward spin, while a spin on the right foot is called a back spin. The opposite applies for skaters who rotate in a clockwise direction. When learning to spin, a skater will typically learn a forward spin first, then once that is mastered they will learn how to execute a back spin.\n", "There are many types of spins, identified by the foot on which the spin is performed, the entrance to the spin, and the position of the arms, legs, and torso. Spins may be performed on either foot. Figure skaters are rarely able to spin in both directions; most favor one or the other. For skaters who rotate in a counterclockwise direction, a spin on the left foot is called a \"forward\" or \"front spin\", while a spin on the right foot is called a \"back spin\". Spins may be entered with a step or a jump. Spins entered with a jump are referred to as \"flying spins\". There are three basic positions, for which many variations exist. There are five levels of difficulty — Level B to Level 4.\n", "Once spin speed is increased to a sufficient level that the diabolo is stable, the user can then perform tricks. \"Skillful players can set it whirling at a rate of 2,000 revolutions a minute, it is said.\" Depending on how long a trick takes to perform, the user will normally have to spend some time increasing the spin speed of the diabolo before performing other tricks. Skilled users can perform multiple tricks while maintaining the spin speed of the diabolo. \"A skilled person [can] catch it, hurl it fifty or sixty feet into the air, then catch it again with little effort.\"\n", "Spins are normally entered on the ice, but they can also be entered from a jump or sequence of jumps known as star jumps. Spins that are entered through a jump are called flying spins; these include the flying camel, flying sit spin, death drop, and butterfly spin. 
Flying spins may go from a forward spin to a back spin and they can also be performed as part of a spin sequence (combination spin).\n", "The flying layback spin is rarely performed because of the physical danger posed by landing with a hyperextended spine and the fact that few coaches know how the move is performed. However, some skaters such as Choi Ji Eun have been successfully credited with flying layback spins in competition.\n\nSection::::In competition.\n", "2008 they defended their two titles. At the same time, Christina Niederer participated in the Swiss Championships in Figure Skating. Shortly after Niederer won the Silver Medal at the Swiss Championship of 2010 in Latin Dance and Ballroom Dancing, she decided to stop her dancing career and focus on figure skating. \n", "Most jumps have a \"natural\" rotation; that is, the approach and landing curves both have the same rotational sense as the jump in the air. A few jumps, notably including the Lutz and Walley, are \"counter-rotated\", with the approach edge having an opposite rotational sense to the rotation in the air and landing curve.\n", "Researchers studied the difference in mental rotation ability between gymnasts, handball, and soccer players with both in-depth and in-plane rotations. Results suggested that athletes were better at performing mental rotation tasks that were more closely related to their sport of expertise.\n\nThere is a correlation in mental rotation and motor ability in children, and this connection is especially strong in boys ages 7–8. Children were known for having very connected motor and cognitive processes, and the study showed that this overlap is influenced by motor ability.\n", "Section::::History.:\"Dizzy\" revival.\n", "In free skating, Totmianina / Marinin were described by NBC as \"untouchable\", and the pair scored a personal best 135.84 points in this segment for a combined score of 204.48. Their victory was described as \"a rout\" The pair had suffered a setback in 2004 when Marinin dropped Totmianina during a lift, putting her in the hospital with a concussion.\n", "BULLET::::- RANDI KAUFMANN, who has been teaching ice skating since 1990, works mostly with Pre-Juvenile and Open Juvenile at the Synchroettes. She holds a Bachelor of Arts degree from the State University of New York in Binghamton and Master’s degree in Social Work from the Rutgers University. She is a member of USFSA, ISI and PSA.\n\nThe Synchroettes coaches Geri Lynch Tomich, Kaleigh Corbett and Bobette Guerrieri were recognized by the U.S. Professional Skaters Association among best coaches in the nation and included in the Honor Roll of Synchro Coaches in 2012 and 2013.\n\nSection::::Charity.\n", "After relocating to Moscow, the pair focused on improving their basic skating skills. Their training was interrupted when Klimov fell off a bicycle in late May 2013, resulting in a broken leg.\n\nSection::::Career.:2013–14 season.\n", "Jumps may be rotated in clockwise or counter-clockwise direction. Most skaters are counter-clockwise jumpers.\n\nSection::::Scale of values.\n\nEach jump has a base value, which is adjusted if the jump is under-rotated (), if the jump has wrong edge (e),and a grade of execution (GoE) from +5 to −5, weighted according to the base value.\n\nThe current scale of values is:\n\nSection::::Technique.\n", "Section::::Types of spins.:Upright spins.\n\nAn upright spin is a spin where the skater is in an upright position and their head is in line with their spine. 
There are many variations on it.\n\nBULLET::::- A basic two-foot spin is an upright spin in which the skater rotates with both feet on the ice using their arms to swing around and create momentum.\n\nBULLET::::- A basic one-foot spin is an upright spin in which the skater rotates with one foot on the ice. Spins can be skated on either foot.\n", "Swiegers began skating at age ten in Saskatchewan and began pairs at age fifteen. In 2005, his first partner, Kristin Bonkowski, decided to focus on her singles career.\n\nSection::::Skating career.:Partnership with Lawrence.\n\nIn summer 2005, Swiegers teamed up with Paige Lawrence, one of few skaters at his club who jumped in the same direction – clockwise.\n", "The solo spin combination must be performed once during the short program of pair skating competitions, with at least two revolutions in two basic positions. Both partners must include all three basic positions in order to earn the full points possible. There must be a minimum of five revolutions made on each foot. Spins can be commenced with jumps and must have at least two different basic positions, and both partners must include two revolutions in each position. A solo spin combination must have all three basic positions (the camel spin, the sit spin, and upright positions) performed by both partners, at any time during the spin to receive the full value of points, and must have all three basic positions performed by both partners to receive full value for the element. A spin with less than three revolutions is not counted as a spin; rather, it is considered a skating movement. If a skater changes to a non-basic position, it is not considered a change of position. The number of revolutions in non-basic positions, which may be considered difficult variations, are counted towards the team's total number of revolutions. Only positions, whether basic or non-basic, must be performed by the partners at the same time.\n", "Section::::Commercial performance.\n", "BULLET::::- Sir Rupert Van Der Hooves (voiced by Robert Tinkler) - High strung and delicate, Hooves is a nervous moose. And who wouldn't be, living next door to the Numb Chucks? Hooves went to High School with Woodchuck Morris which makes the Numb Chucks LOVE him to Hooves' dismay. Hooves enjoys gardening, cleaning, relaxing, reading, fancy living etc., but doesn't enjoy the Chucks or Grandma Butternut invading his personal space. According to the pilot, he speaks in a German accent.\n", "In 2016, 42-year-old set technician Olivier Rochette, of Canada, died in San Francisco, California, from head injuries he had sustained after accidentally being hit in the head by an aerial lift while preparing for a production of \"Luzia\". Rochette was the son of Cirque du Soleil co-founder Gilles Ste-Croix.\n\nIn 2018, 38-year-old aerial straps performer Yann Arnaud, of France, died at a hospital in Tampa, Florida, after falling during a performance of \"Volta\". He had been with the company for 15 years.\n\nSection::::External links.\n", "BULLET::::- A scratch spin is an upright spin with the free leg crossed in front of the skating leg. The arms and free leg begin in an open position, extended straight out and high. They are pulled in gradually, which accelerates the spin, and the leg is pushed down so that the feet are crossed at the ankles. This spin is performed on a very tight backward inside edge.\n", "The load in a laboratory centrifuge must be carefully balanced. 
This is achieved by using a combination of samples and balance tubes which all have the same weight or by using various balancing patterns without balance tubes. Small differences in mass of the load can result in a large force imbalance when the rotor is at high speed. This force imbalance strains the spindle and may result in damage to the centrifuge or personal injury. Some centrifuges have an automatic rotor imbalance detection feature that immediately discontinues the run when an imbalance is detected.\n", "A spin becomes a Biellmann spin, by definition, when \"the level of the boot passes the head so that the boot is above and behind or over the head.\" Some skaters have better positions than others, but as long as the boot is over the head, it is a Biellmann.\n\nWhen learning the spin the skater does not usually drop their head into the teardrop shape formed by their body so as to maintain balance.\n\nSkaters often cut their hands performing the Biellmann.\n\nSection::::Variations.\n\nThere are many spin variations that are derived from the classic Biellmann spin:\n", "A spin combination must have at least \"two different basic positions with 2 revolutions in each of these positions anywhere within the spin\". Skaters earn the full value of a spin combination when they include all three basic positions. The number of revolutions in non-basic positions are included in the total number of revolutions, but changing to a non-basic position is not considered a change of position. The change of foot and change of position can be made at the same time or separately, and can be performed as a jump or as a step-over movement. Non-basic positions are allowed during spins executed in one position or, for single skaters, during a flying spin.\n" ]
[ "Figure skaters do not get dizzy." ]
[ "Figure skaters do get dizzy they are just acustomed to it by practicing." ]
[ "false presupposition" ]
[ "Figure skaters do not get dizzy." ]
[ "false presupposition" ]
[ "Figure skaters do get dizzy they are just acustomed to it by practicing." ]
2018-00980
How does casinos not having clocks lessen our fatigue?
It’s not so much fatigue as clock watching... such as “oh, it’s already 2am, I’d better get some sleep!” vs. having no idea whether it’s 10pm or 2am.
[ "Shift work or chronic jet lag has profound consequences on circadian and metabolic events in the body. Animals that are forced to eat during their resting period show increased body mass and altered expression of clock and metabolic genes. In humans, shift work that favors irregular eating times is associated with altered insulin sensitivity and higher body mass. Shift work also leads to increased metabolic risks for cardio-metabolic syndrome, hypertension, and inflammation.\n\nSection::::Human health.:Airline pilots and cabin crew.\n", "Low level theories exist that suggest that fatigue is due to mechanical failure of the exercising muscles (\"peripheral fatigue\"). However, such low level theories do not explain why running muscle fatigue is affected by information relevant to cost benefit trade offs. For example, marathon runners can carry on running longer if told they are near the finishing line than far away. The existence of a central governor can explain this effect.\n\nSection::::See also.\n\nBULLET::::- Central governor\n\nBULLET::::- Deployment cost–benefit selection in physiology\n\nBULLET::::- Evolutionary medicine\n\nBULLET::::- Health science\n\nBULLET::::- Management control system\n\nBULLET::::- Mind–body\n\nBULLET::::- Neural top–down control of physiology\n", "Using the convention that for every time zone crossed, synchronization to that time zone requires one day, teams can be analyzed during a season to see where they are in terms of being acclimated to their time zone of play. For example, consider the Washington Nationals. If they have been competing at home for the last 3 days or more, they would be completely acclimated to Eastern Standard Time (EST). If they were to travel to Los Angeles, upon arrival they would be 3 hours off, because they traveled 3 time zones west. Every 24 hours spent on the west coast would bring them 1 hour closer to acclimation. So after 24 hours in Los Angeles, they would be 2 hours off. After 48 hours, they would be 1 hour off, and after 72 hours, they would be acclimated to west coast time and would stay that way until they left their time zone.\n", "There is a technology movement surrounding travel and time-saving that implements daytime hotel use. Several companies offer mobile applications that divide the time by hours or day use for travel purposes. In San Francisco, a company has created a mobile application that divvies up time in a hotel by the minute.\n", "With gaming a much-discussed topic in the media, concerns have arisen that these gaming houses are detrimental to players' physical and mental health. Six to nine hours on any given day are spent practicing in their “work environment”, and the players have no place for privacy, as the team is in close proximity to each other 24 hours a day for weeks at a time, which can spark arguments. Another concern is that the players eat unhealthy meals due to time constraints, and the highly competitive field creates extra stress because the players have to perform consistently even under pressure. However, these teams often have a fitness coach, a psychologist, a nutritionist, and other services. 
Additionally they receive regular visits from a physiotherapist and three fitness sessions a week to avoid neck, back and finger injuries after so many hours in the ergonomic chairs\n", "Although studies confirmed a correlation between PERCLOS and impairment, some experts are concerned by the influence which eye-behaviour unrelated to fatigue levels may have on the accuracy of measurements. Dust, insufficient lighting, glare and changes in humidity are non-fatigue related factors that may influence operator eye-behaviour. This system may therefore be prone to higher rates of false alarms and missed instances of impairment.\n\nSection::::Facial features tracking.\n", "Controlling central nervous system fatigue can help scientists develop a deeper understanding of fatigue as a whole. Numerous approaches have been taken to manipulate neurochemical levels and behavior. In sports, nutrition plays a large role in athletic performance. In addition to fuel, many athletes consume performance-enhancing drugs including stimulants in order to boost their abilities.\n\nSection::::Manipulation.:Dopamine reuptake and release agents.\n", "A Rolex clock is located at the back of the Stratton Bank stand, next to the scoreboard. Erected in 1963 following promotion, it is the only Rolex clock to be found at any football stadium in the world.\n", "The sedentary nature of many modern workplaces increases negative metabolic risk factors such as high body mass index (BMI), waist circumference, and blood pressure and elevated fasting glucose and triglyceride levels. Breaking up long periods of sedentary time is shown to improve these risks. Specifically, utilization of portable pedal exercise machines in office environments has been shown to improve employee health, and use was demonstrated feasible during working hours. Interventions using pedometers to influence employee behavior, decrease the duration of sedentary periods, and increase total movement during the work day have also proven successful. Smartphone applications and workplace signs promoting stair use are known to improve employee health, and many employers are now investing in wearable technologies to encourage employees to monitor physical activity. Workplace Tai Chi programs have also proven effective as a health intervention and means of reducing absenteeism, particularly in older workers. Despite these efforts, many health promotion programs struggle with poor participation, and the introduction of incentives is shown to improve employee involvement.\n", "Section::::Human health.\n\nTiming of medical treatment in coordination with the body clock, chronotherapeutics, may significantly increase efficacy and reduce drug toxicity or adverse reactions.\n\nA number of studies have concluded that a short period of sleep during the day, a power-nap, does not have any measurable effect on normal circadian rhythms but can decrease stress and improve productivity.\n", "Lighting requirements for circadian regulation are not simply the same as those for vision; planning of indoor lighting in offices and institutions is beginning to take this into account. Animal studies on the effects of light in laboratory conditions have until recently considered light intensity (irradiance) but not color, which can be shown to \"act as an essential regulator of biological timing in more natural settings\".\n\nSection::::Human health.:Obesity and diabetes.\n", "Those who work the night shift are known to be at increased risk of insomnia, disease, and death. 
Studies of casino shift workers have found increased emotional and behavioral problems in their children as well as a six-times higher rate of divorce in men with children who have been married less than 5 years. Women with children were also three times more likely to get divorced while performing shift work at casinos. However, there was no increased rate of divorce in couples without children who performed shift work at casinos. A 2012 study by the Kansas Health Institute recommended that casinos try to minimize this risk by keeping each worker on a consistent shift, rather than rotating them between days and nights. It stated that if shift rotation must occur, it is better to rotate employees forward from day into evening shifts.\n", "The drawbacks of the mechanical clocks include accuracy and matching of the two clocks, and matching of the indicators (flags) of time expiration. Additional time cannot easily be added for more complex time controls, especially those that call for an increment or delay on every move, such as some forms of byoyomi. However, a malfunctioning analog clock is a less serious event than a malfunctioning digital clock.\n\nSection::::Early development of digital game clocks.\n", "Section::::Hazards.:Noise-induced hearing loss.\n", "People who did not get at least two hours — even if they surpassed an hour per week — did not get the benefits.\n\nThe study is the latest addition to a compelling body of evidence for the health benefits of nature. Many doctors already give nature prescriptions to their patients.\n", "As of January 1, 2018, workers in Nevada have a 10-hour training course on these subjects that they must complete within 15 days of being hired. They must renew this certification every 5 years. These rules are in place to prevent workplace hazards, injuries, and deaths which have occurred during performances.\n", "In 2018, pilot data collected by the Walter Reed Army Institute of Research and presented at the American Academy of Sleep Medicine's annual SLEEP meeting suggested that National Football League teams perform better at night versus the day as a result of circadian advantage. It also indicated that teams had fewer turnovers at night.\n", "A 35% increased frequency of breast cancer has also been found in women who work night shifts.\n\nSection::::Hazards.:Injuries.\n", "Section::::FlyAwake.\n", "Section::::Present status.\n\nFAST is now a commercial product marketed through Fatigue Science and Institutes for Behavior Resources.\n\nSection::::U.S. Navy.\n", "There is limited evidence that interval training assists in managing risk factors of many diseases, including metabolic syndrome, cardiovascular disease, obesity and diabetes. It does this by improving insulin action and sensitivity. Generating higher insulin sensitivity results in lower levels of insulin needed to lower glucose levels in the blood. This helps individuals with type 2 diabetes or metabolic syndrome control their glucose levels. A combination of interval training and continuous exercise increases cardiovascular fitness and raises HDL-cholesterol, which reduces the risk of cardiovascular disease.\n", "Unlike home field advantage, which is present any time two teams play a game that is not held in a neutral site, circadian advantage does not apply to all games. In a typical MLB season, it applies to approximately 20% of games played, with the other 80% featuring teams at equal circadian advantage. In sports that allow more time between games, it may apply to significantly fewer games. 
Circadian advantage is much more of an issue in sports that feature significant international travel.\n", "The timeseal is a utility which allows the server to adjust for the effects of internet lag. Each move is time-stamped locally, and the time it takes for each command to travel to the server is not deducted from the player's clock. This method of time stamping each move is helpful for players with slow internet connections. FICS does not track lag centrally and does not permit users to exclude persistent laggers.\n\nSection::::Usage.:Interfaces.\n", "During November 2013, Evolution Gaming joined Grosvenor Casinos to create a ‘Live Casino’ targeting players on their online casino. The games include blackjack, roulette and baccarat and are available 24hrs a day, 7 days a week. The live game system is designed to produce an environment more closely matching a real casino experience.\n", "Thermochemistry aside, the rate of metabolism and the amount of energy expenditure can be mistakenly interchanged, for example, when describing RMR and REE.\n\nSection::::Use.:Clinical guidelines for conditions of resting measurements.\n\nThe Academy of Nutrition and Dietetics (AND) provides clinical guidance for preparing a subject for RMR measures, in order to mitigate possible confounding factors from feeding, stressful physical activities, or exposure to stimulants such as caffeine or nicotine:\n\n\"In preparation, a subject should be fasting for 7 hrs or greater, and mindful to avoid stimulants and stressors, such as caffeine, nicotine, and hard physical activities such as purposeful exercises.\"\n" ]
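The one-day-per-time-zone acclimation convention quoted in the passage list above reduces to a small calculation: crossing N zones leaves an N-hour offset, which shrinks by one hour for each full 24 hours spent in the new zone. Below is a minimal sketch in Python; the hours_off helper is a hypothetical name written for this illustration, and the printed values simply reproduce the Washington Nationals example from the passage.

# Hypothetical helper implementing the acclimation rule quoted above:
# an N-zone trip leaves an N-hour offset that decays by one hour per
# full 24 hours spent in the new time zone, never going below zero.
def hours_off(zones_crossed, hours_since_arrival):
    full_days = hours_since_arrival // 24
    return max(zones_crossed - full_days, 0)

# Washington Nationals flying 3 zones west to Los Angeles:
for h in (0, 24, 48, 72):
    print(h, "hours after arrival:", hours_off(3, h), "hours off")
# 0 -> 3, 24 -> 2, 48 -> 1, 72 -> 0 (fully acclimated)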
[ "Lack of clocks lessens fatigue in casino.", "The lack of clocks in casinos lessens one's fatigue." ]
[ "Lack of clock prevents \"clock watching\" to keep track of time. ", "It isn't exactly the lack of clocks that causes fatigue, it's more so one's realization of time that makes one more bound to feel tired." ]
[ "false presupposition" ]
[ "Lack of clocks lessens fatigue in casino.", "The lack of clocks in casinos lessens one's fatigue." ]
[ "false presupposition", "false presupposition" ]
[ "Lack of clock prevents \"clock watching\" to keep track of time. ", "It isn't exactly the lack of clocks that causes fatigue, it's more so one's realization of time that makes one more bound to feel tired." ]
2018-01907
Why is deer meat described as “gamey”? What does “gamey” mean?
The gamy flavor is a minerally, musty, musky, dank flavor that comes from the fat. I don't know what compounds contribute to the flavor, but that's where you're going to find the greatest concentration. Lean venison contains the same compounds but in far, far less concentration and thus is itself delicious. Trim as much fat as you can. If it's too lean for whatever you're cooking, use beef fat or fatback, by weight, if you're grinding it.
[ "Section::::Taxonomy and evolution.\n", "Section::::Food sources.:Land mammals.\n\nTerrestrial mammals or land mammals (\"nunarmiutaq\" \"nunarmiutaat\" in Yup'ik) are game animals and furbearers.\n\nBULLET::::- Game animals (\"pitarkaq\" \"pitarkat\" in Yup'ik and Cup'ik, \"pitarkar\" \"pitarkat\" in Cup'ig). Caribou, moose and \"bears\" are included in the definition of the word \"pitarkat\".\n", "BULLET::::- Tree squirrel or red squirrel \"Tamiasciurus hudsonicus\" (\"qiguiq\" in Yup'ik and Cup'ik)\n\nBULLET::::- Ground squirrel or parky squirrel, parka squirrel \"Spermophilus parryii\" (\"qanganaq\" in Yup'ik and Cup'ik, \"qanganar\" in Cup'ig) were skinned and hung on meat drying racks to dry.\n\nBULLET::::- Marmot or hoary marmot \"Marmota caligata\" (\"cikigpak\" in Yup'ik and Cup'ik) were used similarly to parka squirrels.\n\nBULLET::::- Hare \"Lepus othus\" (\"qayuqeggliq\" in Yup'ik and Cup'ik, \"qayuqegglir\" in Cup'ig) and Rabbit \"Lepus americanus\" (\"maqaruaq\" in Yup'ik and Cup'ik, \"maqaruar\" in Cup'ig) can be prepared much like poultry meat: roasted, broiled, grilled, fried, and stewed.\n\nSection::::Food sources.:Birds.\n", "The type and range of animals hunted for food varies in different parts of the world. This is influenced by climate, animal diversity, local taste and locally accepted views about what can or cannot be legitimately hunted. Sometimes a distinction is also made between varieties and species of a particular animal, such as wild turkey and domestic turkey. Fish caught for sport are referred to as game fish. The flesh of the animal, when butchered for consumption, is often described as having a \"gamey\" flavour. This difference in taste can be attributed to the wild diet of the animal, which usually results in a lower fat content compared to domestic farm-raised animals.\n", "BULLET::::- \"Club Native\" (2008)\n", "Section::::Biology and behavior.:Reproduction/calves.\n", "Deer hunting\n\nDeer hunting is hunting for deer for meat or sport, an activity which dates back tens of thousands of years. Venison, the name for deer meat, is a nutritious and natural food source of animal protein that can be obtained through deer hunting. There are many different types of deer around the world that are hunted for their meat.\n", "In ', or \"hunter's '\", at least part of the meat comes from game, such as wild boar, venison or hare. It is usually seasoned with juniper berries, which help neutralize off-flavors that may be found in the meat of wild animals.\n\nSection::::Serving.\n", "BULLET::::- Game animals (\"pitarkaq\" \"pitarkat\" in Yup'ik and Cup'ik, \"pitarkar\" \"pitarkat\" in Cup'ig). Caribou, moose and \"bears\" are included in the definition of the word \"pitarkat\".\n", "\"Venison\" originally described meat of any game animal killed by hunting, and was applied to any animal from the families Cervidae (deer), Leporidae (hares), and Suidae (wild pigs); and certain species of the genus \"Capra\" (goats and ibex).\n\nIn Southern Africa, the word \"venison\" refers to the meat of antelope, as there are no native Cervidae in sub-Saharan Africa.\n\nSection::::Qualities.\n", "Section::::In popular culture.\n\nThe world-famous deer Bambi (the titular character of the book \"Bambi, A Life in the Woods\" (1923) and its sequel \"Bambi's Children\" (1939), by Felix Salten) is originally a roe deer. 
When the story was adapted into the animated feature film \"Bambi\" (1942), by the Walt Disney Studios, Bambi was changed to a mule deer, and accordingly, the setting was changed to a North American wilderness. These changes made Bambi a deer species more familiar to mainstream US viewers.\n\nSection::::See also.\n\nBULLET::::- Siberian roe deer\n\nSection::::Further reading.\n", "Section::::Biology.\n\nSection::::Biology.:Diet.\n", "Section::::Human interaction.:Economic significance.\n", "Section::::Use as a hunting dog.\n", "BULLET::::- Red fox \"Vulpes vulpes\" (\"kaviaq\" in Yup'ik and Cup'ik, \"kavviar\" in Cup'ig). The Nunivak Cup'ig practiced few restrictions with reference to food, but the flesh of the red fox was avoided since it was believed to cause a person to sleep during the day and be restless at night. This restriction did not apply to the flesh of the white fox.\n\nBULLET::::- Arctic fox \"Vulpes lagopus\" (\"uliiq\" in Yup'ik and Cup'ik, \"qaterlir\" [white fox], \"eqyerer\" [blue fox] \"illaassug\" [cross fox] in Cup'ig)\n\nBULLET::::- Sea otter \"Enhydra lutris\" (\"arrnaq\" in Yup'ik and Cup'ik, \"aatagar\" in Cup'ig)\n", "Section::::Terminology.\n", "BULLET::::- Fond brun, or brown stock. The brown color is achieved by roasting the bones and mirepoix. This also adds a rich, full flavour. Veal bones are the most common type used in a fond brun. Tomato paste is often added (sometimes thinned tomato paste is painted onto the roasting bones). The acid in the paste helps break down the connective tissue, helping accelerate the formation of gelatin, as well as giving color to the stock.\n\nBULLET::::- Glace viande is stock made from bones, usually from veal, that is highly concentrated by reduction.\n", "BULLET::::- When silage is prepared under optimal conditions, the modest acidity also has the effect of improving palatability and provides a dietary contrast for the animal. (However, excessive production of acetic and butyric acids can reduce palatability: the mix of bacteria is ideally chosen so as to maximize lactic acid production.)\n\nBULLET::::- Several of the fermenting organisms produce vitamins: for example, lactobacillus species produce folic acid and vitamin B12.\n", "Section::::Fauna.\n\nBULLET::::- Bears\n\nBULLET::::- Beavers\n\nBULLET::::- Birds\n\nBULLET::::- Bugs\n\nBULLET::::- Donkeys\n\nBULLET::::- Gophers\n\nBULLET::::- Hedgehogs\n\nBULLET::::- Elephants\n\nBULLET::::- Kangaroos\n\nBULLET::::- Mice\n\nBULLET::::- Owls\n\nBULLET::::- Pigs\n\nBULLET::::- Rabbits\n\nBULLET::::- Squirrels\n\nBULLET::::- Tigers\n\nBULLET::::- Weasels\n\nSection::::In other media.\n", "The coat is singular, with the fur being short, straight, dense and rough, without any undercoat, and without odor. It can range from a leonine fawn color to several shades of light fawn and white, these last colors being the most common. While these colors may be solid, in some cases there may also be a small or large mark of one of these on a white coat. The canine may bear a white mark around its neck and a white mark on its chest and legs.\n\nSection::::See also.\n\nBULLET::::- Rajapalayam (dog)\n\nBULLET::::- Gaucho Sheepdog\n\nBULLET::::- Campeiro Bulldog\n\nBULLET::::- Dogue Brasileiro\n", "Traditionally, game meat used to be hung until \"high\", i.e. approaching a state of decomposition. The term \"gamey\" / \"gamy\" refers to this usually desirable taste (\"haut goût\"). However, this adds to the risk of contamination. 
Small game can be processed essentially intact, after gutting and skinning or defeathering (by species). Small animals are ready for cooking, although they may be disjointed first. Large game must be processed by techniques commonly practiced by commercial butchers.\n\nSection::::Cooking.\n", "BULLET::::- \"Pomelomeryx boulangeri\"\n\nBULLET::::- \"Pomelomeryx gracilis\"\n\nBULLET::::- \"Dremotherium\"\n\nBULLET::::- \"Dremotherium cetinensis\"\n\nBULLET::::- \"Dremotherium guthi\"\n\nBULLET::::- \"Dremotherium quercyi\"\n\nBULLET::::- \"Dremotherium feignouxi\"\n\nBULLET::::- Blastomerycinae\n\nBULLET::::- \"Pseudoblastomeryx\"\n\nBULLET::::- \"Pseudoblastomeryx advena\"\n\nBULLET::::- \"Machaeromeryx\"\n\nBULLET::::- \"Machaeromeryx tragulus\"\n\nBULLET::::- \"Longirostromeryx\"\n\nBULLET::::- \"Longirostromeryx clarendonensis\"\n\nBULLET::::- \"Longirostromeryx wellsi\"\n\nBULLET::::- \"Problastomeryx\"\n\nBULLET::::- \"Problastomeryx primus\"\n\nBULLET::::- \"Parablastomeryx\"\n\nBULLET::::- \"Parablastomeryx floridanus\"\n\nBULLET::::- \"Parablastomeryx gregorii\"\n\nBULLET::::- \"Blastomeryx\"\n\nBULLET::::- \"Blastomeryx gemmifer\"\n\nBULLET::::- Moschinae\n\nBULLET::::- \"Micromeryx\"\n\nBULLET::::- \"Micromeryx styriacus\"\n\nBULLET::::- \"Micromeryx flourensianus\"\n\nBULLET::::- \"Micromeryx\"? \"eiselei\" - this species is a proposed member of genus \"Micromeryx\"\n\nBULLET::::- Moschus\n\nBULLET::::- \"Moschus moschiferus\"\n\nBULLET::::- \"Moschus anhuiensis\"\n\nBULLET::::- \"Moschus berezovskii\"\n\nBULLET::::- \"Moschus fuscus\"\n\nBULLET::::- \"Moschus chrysogaster\"\n", "BULLET::::- \"now no longer a category, Jamón ibérico de recebo\": from acorn, pasture and compound-fed Iberian pigs\n\nBULLET::::- \"Jamón ibérico de cebo o campo\" (field). from Iberian pigs feeding natural feed and in the case of field, grazes in the fields during a period of 2 months.\n\nBULLET::::- \"Jamón ibérico\": (was also known as \"jamón de pata negra\", but use of this name was prohibited on April 15, 2006 in order to avoid confusion). Compound-fed Iberian pigs.\n", "BULLET::::- Nonformula-fed (\"red\" or \"grain-fed\") veal: Calves that are raised on grain, hay, or other solid food, in addition to milk. The meat is darker in colour, and some additional marbling and fat may be apparent. In Canada, the grain-fed veal stream is usually marketed as calf, rather than veal. The calves are slaughtered at 22 to 26 weeks of age weighing .\n\nBULLET::::- Rose veal (in the UK)\n\nBULLET::::- Young beef (in Europe)\n\nBULLET::::- Pasture-raised veal\n\nSection::::Culinary uses.\n", "Section::::Human interaction.:Heraldry.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-18843
Stigma behind MEDICINAL cannabis use?
A lot of people see it as a backdoor to legalization. You can get a prescription via FaceTime in some states.
[ "Medical cannabis research includes any medical research on using cannabis as a treatment for any medical condition. For reasons including increased popular support of cannabis use, a trend of cannabis legalization, and the perception of medical usefulness, more scientists are doing medical cannabis research. Medical cannabis is unusually broad as a treatment for many conditions, each of which has its own state of research. Similarly, various countries conduct and respond to medical cannabis research in different ways.\n\nSection::::See also.\n\nBULLET::::- Charlotte's Web cannabis strain\n\nBULLET::::- Chinese herbology\n\nBULLET::::- Medical cannabis in the United States\n\nBULLET::::- Tilden's Extract\n\nSection::::External links.\n", "Medical cannabis research\n\nMedical cannabis research includes any medical research on using cannabis as a treatment for any medical condition. For reasons including increased popular support of cannabis use, a trend of cannabis legalization, and the perception of medical usefulness, more scientists are doing medical cannabis research. Medical cannabis is unusually broad as a treatment for many conditions, each of which has its own state of research. Similarly, various countries conduct and respond to medical cannabis research in different ways.\n\nSection::::Ethics.\n", "Medical cannabis\n\nMedical cannabis, or medical marijuana, is cannabis and cannabinoids that are prescribed by physicians for their patients. The use of cannabis as medicine has not been rigorously tested due to production and governmental restrictions, resulting in limited clinical research to define the safety and efficacy of using cannabis to treat diseases. Preliminary evidence suggests that cannabis can reduce nausea and vomiting during chemotherapy, improve appetite in people with HIV/AIDS, reduces chronic pain and muscle spasms and treats severe forms of epilepsy.\n", "Section::::Federal policy.:Compassionate IND program.\n", "Cannabis use as a medical treatment has risen globally since 2008 for a variety of reasons including increasing popular support for cannabis legalization and increased incidence of chronic pain among patients. While medical cannabis use is increasing, there are major social and legal barriers which lead to cannabis research proceeding more slowly and differently from standard medical research. Reasons why cannabis is unusual as a treatment include that it is not a patented drug owned by the pharmaceutical industry, and that its legal status as a medical treatment is ambiguous even where it is legal to use, and that cannabis use carries outside the norm of a typical medical treatment. The ethics around cannabis research is in a state of rapid change.\n", "The authors of a report on a 2011 survey of medical cannabis users say that critics have suggested that some users \"game the system\" to obtain medical cannabis ostensibly for treatment of a condition, but then use it for nonmedical purposes – though the truth of this claim is hard to measure. 
The report authors suggested rather that medical cannabis users occupied a \"continuum\" between medical and nonmedical use.\n\nSection::::Society and culture.:Brand names.\n", "From 18 December 2018, the Misuse of Drugs act was amended by the Misuse of Drugs (Medicinal Cannabis) Amendment Act 2018 (2018 No 54) allowing for much broader use of medical marijuana, making the drug available to terminally ill patients in the last 12 months of life.\n", "The \"Cannabis\" plant has a history of medicinal use dating back thousands of years in many cultures. Some American medical organizations have requested removal of cannabis from the list of Schedule I controlled substances maintained by the United States federal government, followed by regulatory and scientific review. Others oppose its legalization, such as the American Academy of Pediatrics.\n", "The history of medicinal cannabis goes back to ancient times. Ancient physicians in many parts of the world mixed cannabis into medicines to treat pain and other ailments. In the 19th century, cannabis was introduced for therapeutic use in Western Medicine. Since then, there have been several advancements in how the drug is administered. Initially, cannabis was reduced to a powder and mixed with wine for administration. In the 1970s, synthetic THC was created to be administered as the drug Marinol in a capsule. However, the main mode of administration for cannabis is smoking because its effects are almost immediate when the smoke is inhaled. Between 1996 and 1999, eight U.S. states supported cannabis prescriptions opposing policies of the federal government. Most people who are prescribed marijuana for medical purposes use it to alleviate severe pain.\n", "In 2014, the American Academy of Neurology reviewed all available findings levering the use of marijuana to treat brain diseases. The result was that the scientific evidence is weak that cannabis in any form serves as medicinal for curing or alleviating neurological disorders. To ease multiple sclerosis patients' stiffness, which may be accomplished by their taking cannabis extract by mouth or as a spray, there is support. The academy has published new guidelines on the use of marijuana pills and sprays in the treatment of MS.\n", "A media report on 16 May 2013 stated that a New South Wales (NSW) parliamentary committee has recommended the use of medically-prescribed cannabis for terminally ill patients and has supported the legalisation of cannabis-based pharmaceuticals on such grounds. As part of the recommendation, the committee has called upon the cooperation of the federal Australian government for a scheme that would allow patients to possess up to 15 grams of cannabis. Also, both the patients and their carers would be required to obtain a certificate from a specialist, registration with the Department of Health and a photo Identification card.\n", "Section::::Treatment.:Barriers to treatment.\n\nResearch that looks at barriers to cannabis treatment frequently cites a lack of interest in treatment, lack of motivation and knowledge of treatment facilities, an overall lack of facilities, costs associated with treatment, difficulty meeting program eligibility criteria and transport difficulties.\n\nSection::::Epidemiology.\n", "A license is available from the home office to import prescribed medicinal cannabis. However, as of mid-February 2019, virtually no-one has been able to access medical cannabis.\n\nThe law stipulates that GPs are not allowed to prescribe cannabis-derived medicines. 
Prescription must come from a specialist consultant. NHS guidance states that medical cannabis should only be prescribed when there is clear published evidence of its benefit and other treatment options have been exhausted. \n", "In April 2018, after 5 years of research, Sanjay Gupta backed medical marijuana for conditions such as epilepsy and multiple sclerosis. He believes that medical marijuana is safer than opioid for pain management.\n\nSection::::Research by medical condition.\n\nSection::::Research by medical condition.:Cancer.\n", "Individuals who have been particularly active in efforts to support the medical use of cannabis include Robert Randall, Dennis Peron, Ed Rosenthal, Steve Kubby, Steve DeAngelo, Richard Lee, Jon Gettman, Brownie Mary, and Tod H. Mikuriya. Former talk show host Montel Williams is a well-known advocate who uses cannabis to treat his multiple sclerosis, a topic he has testified about in a number of states considering medical cannabis legislation. Former U.S. Surgeon General Joycelyn Elders has also testified in support of medical cannabis legislation in several states.\n", "Countries that have legalized the medical use of cannabis include Australia, Canada, Chile, Colombia, Croatia, Cyprus, Czech Republic, Finland, Germany, Greece, Israel, Italy, Jamaica, Luxembourg, Macedonia, Malta, the Netherlands, New Zealand, Peru, Poland, Portugal, Sri Lanka, Thailand, the United Kingdom, and Uruguay. Other countries have more restrictive laws allowing for the use of specific cannabinoids only, such as Brazil and France which have approved the use of Sativex. Countries with the most relaxed policies include Canada, Uruguay, and the Netherlands, where cannabis can be purchased without need for a prescription. In Mexico, THC content of medical cannabis is limited to one percent. The same limit applies in Switzerland, but no prescription is required to purchase. In the United States, the legality of medical cannabis varies by state.\n", "Section::::Federal policy.\n\nSection::::Federal policy.:Controlled Substances Act.\n", "When cannabis is inhaled to relieve pain, blood levels of cannabinoids rise faster than when oral products are used, peaking within three minutes and attaining an analgesic effect in seven minutes. A 2014 review found limited and weak evidence that smoked cannabis was effective for chronic non-cancer pain. A 2015 meta-analysis found that inhaled medical cannabis was effective in reducing neuropathic pain in the short term for one in five to six patients. Another 2015 review found limited evidence that medical cannabis was effective for neuropathic pain when combined with traditional analgesics.\n", "A 2017 review found only limited evidence for the effectiveness of cannabis in relieving chronic pain in several conditions. Another review found tentative evidence for use of cannabis in treating peripheral neuropathy, but little evidence of benefit for other types of long term pain.\n", "Medical cannabis can be administered through various methods, including capsules, lozenges, tinctures, dermal patches, oral or dermal sprays, cannabis edibles, and vaporizing or smoking dried buds. Synthetic cannabinoids are available for prescription use in some countries, such as dronabinol and nabilone. Countries that allow the medical use of whole-plant cannabis include Australia, Canada, Chile, Colombia, Germany, Greece, Israel, Italy, the Netherlands, Peru, Poland, Portugal, and Uruguay. 
In the United States, 33 states and the District of Columbia have legalized cannabis for medical purposes, beginning with the passage of California's Proposition 215 in 1996. Although cannabis remains prohibited for any use at the federal level, the Rohrabacher–Farr amendment was enacted in December 2014, limiting the ability of federal law to be enforced in states where medical cannabis has been legalized.\n", "The New Zealand Medical Association (NZMA) supports having evidence-based, peer-reviewed studies of medical cannabis. In 2010 the New Zealand Law Commission made a recommendation to allow for its medical use. The NZMA, which made submissions on the issues paper, supports the stance put forward by the Law Commission. GreenCross New Zealand was the first legally registered support group fighting for patient rights to access cannabis as medicine; however, this group is now defunct due to not filing financial statements. As of September 2017 the only explicitly medical advocacy group is Medical Cannabis Awareness New Zealand (MCANZ) a registered charity dedicated to legal access for patients now, and is mildly successful with the non-pharmaceutical route, having introduced Tilray for a small number of patients thereby allowing NZ stocks to be held.\n", "Section::::Classification.\n\nMany different cannabis strains are collectively called \"medical cannabis\". Since many varieties of the cannabis plant and plant derivatives all share the same name, the term medical cannabis is ambiguous and can be misunderstood. A Cannabis plant includes more than 400 different chemicals, of which about 70 are cannabinoids. In comparison, typical government-approved medications contain only one or two chemicals. The number of active chemicals in cannabis is one reason why treatment with cannabis is difficult to classify and study.\n", "Voters in eight U.S. states showed their support for cannabis prescriptions or recommendations given by physicians between 1996 and 1999, including Alaska, Arizona, California, Colorado, Maine, Michigan, Nevada, Oregon, and Washington, going against policies of the federal government. In May 2001, \"The Chronic Cannabis Use in the Compassionate Investigational New Drug Program: An Examination of Benefits and Adverse Effects of Legal Clinical Cannabis\" (Russo, Mathre, Byrne et al.) was completed. This three-day examination of major body functions of four of the five living US federal cannabis patients found \"mild pulmonary changes\" in two patients.\n", "According to the United States Department of Health and Human Services, there were 455,000 emergency room visits associated with cannabis use in 2011. These statistics include visits in which the patient was treated for a condition induced by or related to recent cannabis use. The drug use must be \"implicated\" in the emergency department visit, but does not need to be the direct cause of the visit. Most of the illicit drug emergency room visits involved multiple drugs. In 129,000 cases, cannabis was the only implicated drug.\n", "Approved cannabis-based pharmaceuticals can be prescribed by a specialist doctor, but requires patients to meet strict criteria. As of April 2016, only Sativex is approved for use in New Zealand; it is not subsidised, so patients must pay the full retail cost. Unapproved cannabis-based pharmaceuticals (e.g. Cesamet, Marinol) and non-pharmaceutical cannabis products can be approved on case-by-case basis by the Minister of Health. 
On 9 June 2015, Associate Health Minister Peter Dunne approved the one-off use of Elixinol, a cannabidiol (CBD) product from the United States for a coma patient, and on 4 April 2016, he approved the one-off use of Aceso Calm Spray, a non-pharmaceutical-grade CBD cannabis-based product for a patient with a severe case of Tourette syndrome. These two cases are the only ones to this date to have been approved by the Health Minister.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-15095
If we see the stars as they were in the past, shouldn't we be able to "see" the Big Bang?
We are able to see the Big Bang, or at least the first light released after it when the universe became transparent - immediately after the Big Bang matter and light first didn't exist as they do today, and then all the matter was ionized for a few hundred thousand years, absorbing all light from before then. Unfortunately, since it's been billions of years and the universe has been expanding for all of that time, the light that's reaching us now has been redshifted or "stretched out" to wavelengths that the human eye can't see - in fact, it's all the way down in the microwave band, same as Wi-Fi, Bluetooth, and of course microwave ovens (not exactly the same frequency, but closer to that than visible light). We do still have images of this Cosmic Microwave Background Radiation coloured in so that we can see it, for example [this]( URL_0 ).
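A quick worked version of the redshift arithmetic in this answer may help; the numbers below are standard textbook values (recombination at a redshift of roughly 1100, emission temperature of roughly 3000 K), not figures taken from the comment or the passages:

```latex
% Cosmological redshift: wavelengths stretch, and blackbody temperature
% drops, by the same factor (1+z). Textbook values, not sourced from here.
\[
  \lambda_{\mathrm{obs}} = (1+z)\,\lambda_{\mathrm{emit}},
  \qquad
  T_{\mathrm{obs}} = \frac{T_{\mathrm{emit}}}{1+z}
  \approx \frac{3000\ \mathrm{K}}{1100} \approx 2.7\ \mathrm{K}.
\]
% By Wien's law, a ~3000 K blackbody peaks near 1 micrometre
% (visible/near-infrared), while a 2.7 K blackbody peaks near 1 mm,
% squarely in the microwave band.
```

This is the sense in which the "first light" is still visible today: not as starlight, but as a blackbody glow shifted from the visible range into microwaves.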
[ "An important feature of the Big Bang spacetime is the presence of particle horizons. Since the universe has a finite age, and light travels at a finite speed, there may be events in the past whose light has not had time to reach us. This places a limit or a \"past horizon\" on the most distant objects that can be observed. Conversely, because space is expanding, and more distant objects are receding ever more quickly, light emitted by us today may never \"catch up\" to very distant objects. This defines a \"future horizon\", which limits the events in the future that we will be able to influence. The presence of either type of horizon depends on the details of the FLRW model that describes our universe.\n", "However, the Big Bang theory seems to introduce a new problem: it states that the sky was much brighter in the past, especially at the end of the recombination era, when it first became transparent. All points of the local sky at that era were comparable in brightness to the surface of the Sun, due to the high temperature of the universe in that era; and most light rays will originate not from a star but the relic of the Big Bang.\n", "In February 2018, astronomers reported, for the first time, a signal of the reionization epoch, an indirect detection of light from the earliest stars formed - about 180 million years after the Big Bang.\n\nSection::::Observations.:Notable pathfinder objects.\n\nBULLET::::- MWC 349 was first discovered in 1978, and is estimated to be only 1,000 years old.\n\nBULLET::::- VLA 1623 – The first exemplar Class 0 protostar, a type of embedded protostar that has yet to accrete the majority of its mass. Found in 1993, is possibly younger than 10,000 years.\n", "The observable universe can be thought of as a sphere that extends outwards from any observation point for 46.5 billion light years, going farther back in time and more redshifted the more distant away one looks. Ideally, one can continue to look back all the way to the Big Bang; in practice, however, the farthest away one can look using light and other electromagnetic radiation is the cosmic microwave background (CMB), as anything past that was opaque. Experimental investigations show that the observable universe is very close to isotropic and homogeneous.\n", "General relativity gave us our modern notion of the expanding universe that started in the Big Bang. Using relativity and quantum theory we have been able to roughly reconstruct the history of the universe. In our epoch, during which electromagnetic waves can propagate without being disturbed by conductors or charges, we can see the stars, at great distances from us, in the night sky. (Before this epoch, there was a time, before the universe cooled enough for electrons and nuclei to combine into atoms about 377,000 years after the Big Bang, during which starlight would not have been visible over large distances.)\n", "In 2013 and 2015, ESA's Planck spacecraft released even more detailed images of the cosmic microwave background, showing consistency with the Lambda-CDM model to still higher precision.\n", "Recognizing the cosmological importance of the darkness of the night sky (Olbers' paradox) and the first speculations on an extragalactic background light dates back to the first half of the 19th century. Despite its importance, the first attempts were made only in the 1950-60s to derive the value of the visual background due to galaxies, at that time based on the integrated starlight of these stellar systems. 
In the 1960s the absorption of starlight by dust was already taken into account, but without considering the re-emission of this absorbed energy in the infrared. At that time Jim Peebles pointed out, that in a Big Bang-created Universe there must have been a cosmic infrared background (CIB) – different from the cosmic microwave background – that can account for the formation and evolution of stars and galaxies.\n", "Tyson begins the episode by explaining the nature of the speed of light and how much of what is seen of the observable universe is from light emanated from billions of years ago. Tyson further explains how modern astronomy has used such analysis via deep time to identify the Big Bang event and the age of the universe.\n", "Also in 2005, astronomers Alexander Kashlinsky and John Mather of NASA's Goddard Space Flight Center reported that one of \"Spitzer\" earliest images may have captured the light of the first stars in the universe. An image of a quasar in the Draco constellation, intended only to help calibrate the telescope, was found to contain an infrared glow after the light of known objects was removed. Kashlinsky and Mather are convinced that the numerous blobs in this glow are the light of stars that formed as early as 100 million years after the Big Bang, redshifted by cosmic expansion.\n", "The cosmic microwave background has a redshift of , corresponding to an age of approximately 379,000 years after the Big Bang and a comoving distance of more than 46 billion light years. The yet-to-be-observed first light from the oldest Population III stars, not long after atoms first formed and the CMB ceased to be absorbed almost completely, may have redshifts in the range of . Other high-redshift events predicted by physics but not presently observable are the cosmic neutrino background from about two seconds after the Big Bang (and a redshift in excess of ) and the cosmic gravitational wave background emitted directly from inflation at a redshift in excess of .\n", "Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the early universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that formed relatively recently, appear markedly different from galaxies formed at similar distances but shortly after the Big Bang. These observations are strong arguments against the steady-state model. 
Observations of star formation, galaxy and quasar distributions and larger structures, agree well with Big Bang simulations of the formation of structure in the universe, and are helping to complete details of the theory.\n", "Assuming the universe keeps expanding and it does not suffer a Big Crunch, a Big Rip, or another similar fate, the cosmic microwave background will continue redshifting until it will no longer be detectable, and will be overtaken first by the one produced by starlight, and later by the background radiation fields of processes that are assumed will take place in the far future of the universe.\n\nSection::::Timeline of prediction, discovery and interpretation.\n\nSection::::Timeline of prediction, discovery and interpretation.:Thermal (non-microwave background) temperature predictions.\n\nBULLET::::- 1896 – Charles Édouard Guillaume estimates the \"radiation of the stars\" to be 5–6K.\n", "Future gravitational waves observatories might be able to detect primordial gravitational waves, relics of the early universe, up to less than a second after the Big Bang.\n\nSection::::Problems and related issues in physics.\n", "The observable universe is currently 1.38×10^10 (13.8 billion) years old. This time is in the Stelliferous Era. About 155 million years after the Big Bang, the first star formed. Since then, stars have formed by the collapse of small, dense core regions in large, cold molecular clouds of hydrogen gas. At first, this produces a protostar, which is hot and bright because of energy generated by gravitational contraction. After the protostar contracts for a while, its center will become hot enough to fuse hydrogen and its lifetime as a star will properly begin.\n", "As yet, no Population III stars have been found, so our understanding of them is based on computational models of their formation and evolution. Fortunately, observations of the Cosmic Microwave Background radiation can be used to date when star formation began in earnest. Analysis of such observations made by the European Space Agency's Planck telescope in 2016 concluded that the first generation of stars may have formed from around 300 million years after the Big Bang.\n", "Our understanding of the universe back to very early times suggests that there is a past horizon, though in practice our view is also limited by the opacity of the universe at early times. So our view cannot extend further backward in time, though the horizon recedes in space. If the expansion of the universe continues to accelerate, there is a future horizon as well.\n\nSection::::History.\n\nSection::::History.:Etymology.\n", "BULLET::::- 200–300 million years: First stars begin to shine: Because many are Population III stars (some Population II stars are accounted for at this time) they are much bigger and hotter and their life-cycle is fairly short. Unlike later generations of stars, these stars are metal free. As reionization intensifies, photons of light scatter off free protons and electrons – Universe becomes opaque again.\n", "In the Big Bang model for the formation of the universe, inflationary cosmology predicts that after about 10^-37 seconds the nascent universe underwent exponential growth that smoothed out nearly all irregularities. The remaining irregularities were caused by quantum fluctuations in the inflaton field that caused the inflation event. 
Long before the formation of stars and planets, the early universe was smaller, much hotter and, starting 10^-6 seconds after the Big Bang, filled with a uniform glow from its white-hot fog of interacting plasma of photons, electrons, and baryons.\n", "Both of these effects have been observed by the WMAP spacecraft, providing evidence that the universe was ionized at very early times, at a redshift more than 17. The detailed provenance of this early ionizing radiation is still a matter of scientific debate. It may have included starlight from the very first population of stars (population III stars), supernovae when these first stars reached the end of their lives, or the ionizing radiation produced by the accretion disks of massive black holes.\n", "This period, known as the Dark Ages, began around 377,000 years after the Big Bang. During the Dark Ages, the temperature of the universe cooled from some 4000 K down to about 60 K, and only two sources of photons existed: the photons released during recombination/decoupling (as neutral hydrogen atoms formed), which we can still detect today as the cosmic microwave background (CMB), and photons occasionally released by neutral hydrogen atoms, known as the 21 cm spin line of neutral hydrogen. The hydrogen spin line is in the microwave range of frequencies, and within 3 million years, the CMB photons had redshifted out of visible light to infrared; from that time until the first stars, there were no visible light photons. Other than perhaps some rare statistical anomalies, the universe was truly dark.\n", "All of this cosmic evolution after the inflationary epoch can be rigorously described and modeled by the ΛCDM model of cosmology, which uses the independent frameworks of quantum mechanics and Einstein's General Relativity. There is no well-supported model describing the action prior to 10^-15 seconds or so. Apparently a new unified theory of quantum gravitation is needed to break this barrier. Understanding this earliest of eras in the history of the universe is currently one of the greatest unsolved problems in physics.\n\nSection::::Features of the model.\n", "Most stars are actually relatively cool objects emitting much of their electromagnetic radiation in the visible or near-infrared part of the spectrum. Ultraviolet radiation is the signature of hotter objects, typically in the early and late stages of their evolution. In the Earth's sky seen in ultraviolet light, most stars would fade in prominence. Some very young massive stars and some very old stars and galaxies, growing hotter and producing higher-energy radiation near their birth or death, would be visible. Clouds of gas and dust would block the vision in many directions along the Milky Way.\n", "The Big Bang theory is the prevailing cosmological description of the development of the Universe. Under this theory, space and time emerged together 13.799 billion years ago and the energy and matter initially present have become less dense as the Universe expanded. After an initial accelerated expansion called the inflationary epoch at around 10^-32 seconds, and the separation of the four known fundamental forces, the Universe gradually cooled and continued to expand, allowing the first subatomic particles and simple atoms to form. Dark matter gradually gathered forming a foam-like structure of filaments and voids under the influence of gravity. Giant clouds of hydrogen and helium were gradually drawn to the places where dark matter was most dense, forming the first galaxies, stars, and everything else seen today. 
It is possible to see objects that are now further away than 13.799 billion light-years because space itself has expanded, and it is still expanding today. This means that objects which are now up to 46.5 billion light-years away can still be seen in their distant past, because in the past when their light was emitted, they were much closer to the Earth.\n", "CLASS will also improve our understanding of \"cosmic dawn,\" when the first stars lit up the universe. Ultraviolet (UV) radiation from these stars stripped electrons from atoms in a process called \"reionization.\" The freed electrons scatter CMB light, imparting a polarization that CLASS measures. In this way CLASS can improve our knowledge of when and how cosmic dawn occurred. A better understanding of cosmic dawn will also help other experiments measure the sum of the masses of the three known neutrino types using the gravitational lensing of the CMB. \n", "The figures quoted above are distances \"now\" (in cosmological time), not distances \"at the time the light was emitted\". For example, the cosmic microwave background radiation that we see right now was emitted at the time of photon decoupling, estimated to have occurred about years after the Big Bang, which occurred around 13.8 billion years ago. This radiation was emitted by matter that has, in the intervening time, mostly condensed into galaxies, and those galaxies are now calculated to be about 46 billion light-years from us. To estimate the distance to that matter at the time the light was emitted, we may first note that according to the Friedmann–Lemaître–Robertson–Walker metric, which is used to model the expanding universe, if at the present time we receive light with a redshift of \"z\", then the scale factor at the time the light was originally emitted is given by\n" ]
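The last passage above is cut off immediately after "is given by". The relation it is pointing to is the standard FLRW scale-factor identity; the completion below is a textbook reconstruction, not text recovered from this source:

```latex
% FLRW relation between observed redshift z and the scale factor a(t)
% at emission, with the usual normalization a(t_now) = 1:
\[
  1 + z = \frac{a(t_{\mathrm{now}})}{a(t_{\mathrm{emit}})}
  \quad\Longrightarrow\quad
  a(t_{\mathrm{emit}}) = \frac{1}{1+z}.
\]
```

For the CMB's redshift of roughly 1100, this says linear scales were about 1100 times smaller at emission, which is how matter that is now about 46 billion light-years away could have been only about 42 million light-years away when the light left it.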
[ "We are unable to see the \"Big Bang\".", "We are unable to see the Big Bang of the past." ]
[ "We are able to see the first light released immediately after the \"Big Bang\".", "We are able to see the Big Bang of the past." ]
[ "false presupposition" ]
[ "We are unable to see the \"Big Bang\".", "We are unable to see the Big Bang of the past." ]
[ "false presupposition", "false presupposition" ]
[ "We are able to see the first light released immediately after the \"Big Bang\".", "We are able to see the Big Bang of the past." ]
2018-17675
Why is Bluetooth so much less reliable than RF?
What exactly do you refer to as RF devices? Is it Radio Frequency? If that is the case, Bluetooth would be included, as it is a form of radio communication.
[ "In 2001, researchers at Nokia determined various scenarios that contemporary wireless technologies did not address. The company began developing a wireless technology adapted from the Bluetooth standard which would provide lower power usage and cost while minimizing its differences from Bluetooth technology. The results were published in 2004 using the name Bluetooth Low End Extension.\n", "BULLET::::- For \"numeric comparison\", MITM protection can be achieved with a simple equality comparison by the user.\n\nBULLET::::- Using OOB with NFC enables pairing when devices simply get close, rather than requiring a lengthy discovery process.\n\nSection::::Technical information.:Pairing and bonding.:Security concerns.\n\nPrior to Bluetooth v2.1, encryption is not required and can be turned off at any time. Moreover, the encryption key is only good for approximately 23.5 hours; using a single encryption key longer than this time allows simple XOR attacks to retrieve the encryption key.\n", "Many devices depend on the transmission and reception of radio waves for their operation. The possibility for mutual interference is great. Many devices not intended to transmit signals may do so. For instance a dielectric heater might contain a 2000 watt 27 MHz source within it. If the machine operates as intended then none of this RF power will leak out. However, if due to poor design or maintenance it allows RF to leak out, it will become a transmitter or unintentional radiator.\n\nSection::::EMC matters.:RF leakage & shielding.\n", "BULLET::::- Alternative MAC/PHY: Enables the use of alternative MAC and PHYs for transporting Bluetooth profile data. The Bluetooth radio is still used for device discovery, initial connection and profile configuration. However, when large quantities of data must be sent, the high-speed alternative MAC PHY 802.11 (typically associated with Wi-Fi) transports the data. This means that Bluetooth uses proven low power connection models when the system is idle, and the faster radio when it must send large quantities of data. AMP links require enhanced L2CAP modes.\n", "As with all wireless transmission, the range and accessibility to most Bluetooth advertising depends on the transmitter power class and the individual portage of the receiver equipment. However, with advances in mobile devices technology, this distance for proper receiving is increasing to reach 250 meters or more in nowadays smart phones, tablet computers and other mobile devices.\n\nThe selectivity goes down with extension of range. Hence the transmission power raise as well as receiver sensitivity raise will reduce the contextual connection between actual location of receiver and contents of broadcast message.\n", "Current mobile devices are commonly released with hardware and software support for both classic Bluetooth and the Bluetooth Low Energy.\n\nSection::::Implementation.:Operating systems.\n\nBULLET::::- iOS 5 and later\n\nBULLET::::- Windows Phone 8.1\n\nBULLET::::- Windows 8 and later\n\nBULLET::::- Android 4.3 and later\n\nBULLET::::- BlackBerry 10\n\nBULLET::::- Linux 3.4 and later through BlueZ 5.0\n\nBULLET::::- Unison OS 5.2\n\nBULLET::::- macOS 10.10\n\nSection::::Technical details.\n\nSection::::Technical details.:Radio interface.\n", "Section::::Design.\n\nSection::::Design.:Battery powered.\n", "BULLET::::- Some users also reported problems with the unit's microphone. 
They claimed that voice clarity on the recipient's end of the connection was poor, akin to talking \"inside a cardboard box.\" (koreth, Slashdot). Users have found that bluetooth-enabled wireless headsets or fixed headsets seemed to be an effective work-around.\n\nBULLET::::- Another problem people reported is that the handset's earpiece made it difficult to hear in loud environments. Using a bluetooth-enabled wireless headset would also take care of this problem.\n", "Over 570,000 radios have been purchased. There have been several system improvement programs, including the Integrated Communications Security (ICOM) models, which have provided integrated voice and data encryption, the Special Improvement Program (SIP) models, which add additional data modes, and the advanced SIP (ASIP) models, which are less than half the size and weight of ICOM and SIP models and provided enhanced FEC (forward error correction) data modes, RS-232 asynchronous data, Packet Data formats, and direct interfacing to Precision Lightweight GPS Receiver (PLGR) devices providing radio level situational awareness capability.\n", "Wireless capability is a key requirement for most enterprise mobility applications, and it has been reported that wireless-transmission failure rates are three times higher for non-rugged notebooks compared to rugged units. This difference is attributed to the greater experience of rugged-notebook vendors at integrating multiple radios into their products. Each transmission failure leads to five to ten minutes in lost productivity as the user has to re-login to the company network through a VPN.\n", "Bluetooth Low Energy is designed to enable devices with low power consumption. Several chipmakers including Cambridge Silicon Radio, Dialog Semiconductor, Nordic Semiconductor, STMicroelectronics, Cypress Semiconductor, Silicon Labs and Texas Instruments have introduced their Bluetooth Low Energy optimized chipsets over the last few years. Devices with peripheral and central roles have different power requirements. A study by beacon software company, Aislelabs, reported that peripherals, such as proximity beacons, usually function for 1–2 years with a 1,000mAh coin cell battery. This is possible because of power efficiency of Bluetooth Low Energy protocol which only transmits small packets as compared to Bluetooth Classic which is also suitable for audio and high bandwidth data.\n", "BULLET::::- Wireless communication standards: Through the late 1990s, proponents of Bluetooth (such as Sony-Ericsson) and WiFi competed to gain support for positioning one of these standards as the de facto computer-to-computer wireless communication protocol. This competition ended around 2000 with WiFi the undisputed winner (largely due to a very slow rollout of Bluetooth networking products). However, in the early 2000s, Bluetooth was repurposed as a device-to-computer wireless communication standard, and has succeeded well in this regard. Today's computers often feature separate equipment for both types of wireless communication, although Wireless USB is slowly gaining momentum to become a competitor of Bluetooth.\n", "Providing fault-tolerant design for every component is normally not an option. Associated redundancy brings a number of penalties: increase in weight, size, power consumption, cost, as well as time to design, verify, and test. Therefore, a number of choices have to be examined to determine which components should be fault tolerant:\n\nBULLET::::- How critical is the component? 
In a car, the radio is not critical, so this component has less need for fault tolerance.\n", "Compared to \"Classic Bluetooth\", Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range. In terms of lengthening the battery life of Bluetooth devices, it represents a significant progression.\n\nBULLET::::- In a single-mode implementation, only the low energy protocol stack is implemented. Dialog Semiconductor, STMicroelectronics, AMICCOM, CSR, Nordic Semiconductor and Texas Instruments have released single mode Bluetooth Low Energy solutions.\n", "One application is distributing messages at a specific Point of Interest, for example a store, a bus stop, a room or a more specific location like a piece of furniture or a vending machine. This is similar to previously used geopush technology based on GPS, but with a much reduced impact on battery life and much extended precision.\n", "Section::::Uses.\n\nBluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range based on low-cost transceiver microchips in each device. Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, a \"quasi optical\" wireless path must be viable. Range is power-class-dependent, but effective ranges vary in practice. See the table \"Ranges of Bluetooth devices by class\".\n", "BULLET::::- Enhanced Retransmission Mode (ERTM): This mode is an improved version of the original retransmission mode. This mode provides a reliable L2CAP channel.\n\nBULLET::::- Streaming Mode (SM): This is a very simple mode, with no retransmission or flow control. This mode provides an unreliable L2CAP channel.\n\nReliability in any of these modes is optionally and/or additionally guaranteed by the lower layer Bluetooth BDR/EDR air interface by configuring the number of retransmissions and flush timeout (time after which the radio flushes packets). In-order sequencing is guaranteed by the lower layer.\n", "Bluetooth v2.1 – finalized in 2007 with consumer devices first appearing in 2009 – makes significant changes to Bluetooth's security, including pairing. See the pairing mechanisms section for more about these changes.\n\nSection::::Security.:Bluejacking.\n", "Section::::Comparable technologies.\n\nAlthough the Near field communication (NFC) environment is very different and has many non-overlapping applications, it is still compared with iBeacons.\n\nBULLET::::- NFC range is up to 20 cm (7.87 inches) but the optimal range is 4 cm (1.57 inches). iBeacons have a significantly larger range.\n\nBULLET::::- NFC can be either passive or active. When using passive mode, the power is sent from the reader device. Although Passif (bought by Apple Inc.) has worked on reducing the energy consumption, a battery pack is still needed inside iBeacon tags at this time.\n", "In 2011, the \"Wall Street Journal\" published an article describing research into security flaws of the system, including a user interface that makes it difficult for users to recognize when transceivers are operating in secure mode. 
According to the article, \"(R)esearchers from the University of Pennsylvania overheard conversations that included descriptions of undercover agents and confidential informants, plans for forthcoming arrests and information on the technology used in surveillance operations.\" The researchers found that the messages sent over the radios are sent in segments, and blocking just a portion of these segments can result in the entire message being jammed. \"Their research also shows that the radios can be effectively jammed (single radio, short range) using a highly modified pink electronic child’s toy and that the standard used by the radios 'provides a convenient means for an attacker' to continuously track the location of a radio’s user. With other systems, jammers have to expend a lot of power to block communications, but the P25 radios allow jamming at relatively low power, enabling the researchers to prevent reception using a $30 toy pager designed for pre-teens.\"\n", "The E0 stream cipher is used for encrypting packets, granting confidentiality, and is based on a shared cryptographic secret, namely a previously generated link key or master key. Those keys, used for subsequent encryption of data sent via the air interface, rely on the Bluetooth PIN, which has been entered into one or both devices.\n\nAn overview of Bluetooth vulnerabilities exploits was published in 2007 by Andreas Becker.\n", "BULLET::::- \"Just works\": As the name implies, this method just works, with no user interaction. However, a device may prompt the user to confirm the pairing process. This method is typically used by headsets with very limited IO capabilities, and is more secure than the fixed PIN mechanism this limited set of devices uses for legacy pairing. This method provides no man-in-the-middle (MITM) protection.\n", "Peripheral devices in computing can also be connected wirelessly, as part of a Wi-Fi network or directly via an optical or radio-frequency (RF) peripheral interface. Originally these units used bulky, highly local transceivers to mediate between a computer and a keyboard and mouse; however, more recent generations have used smaller, higher-performance devices. Radio-frequency interfaces, such as Bluetooth or Wireless USB, provide greater ranges of efficient use, usually up to 10 feet, but distance, physical obstacles, competing signals, and even human bodies can all degrade the signal quality. Concerns about the security of wireless keyboards arose at the end of 2007, when it was revealed that Microsoft's implementation of encryption in some of its 27 MHz models was highly insecure.\n", "In April 2005, Cambridge University security researchers published results of their actual implementation of passive attacks against the PIN-based pairing between commercial Bluetooth devices. They confirmed that attacks are practicably fast, and the Bluetooth symmetric key establishment method is vulnerable. 
To rectify this vulnerability, they designed an implementation that showed that stronger, asymmetric key establishment is feasible for certain classes of devices, such as mobile phones.\n", "BULLET::::- Most offered air interface solution is based on ISO/IEC 18000-3 HF (13,56 MHz) passive RFID tags and near field communication (NFC)-like reader specification.\n\nBULLET::::- Most offered authentication procedures make use of IETF public key infrastructure (PKI).\n\nBULLET::::- Comfortable solutions support single sign-on servicing.\n\nBULLET::::- Bluetooth BLE profile \"proximity\" is said to support such application.\n\nSection::::Usage principles.\n" ]
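One of the passages above notes that before v2.1, a single Bluetooth encryption key could stay in use long enough to allow "simple XOR attacks". The snippet below is not Bluetooth's real E0 cipher and not that key-recovery attack; it is a minimal, self-contained Python sketch (all values hypothetical) of the simpler, classic consequence of keystream reuse: encrypting two messages with one keystream hands an eavesdropper the XOR of the two plaintexts without any knowledge of the key.

```python
# Minimal illustration of keystream reuse (NOT the real Bluetooth E0 cipher).
# A stream cipher encrypts by XORing plaintext with a keystream; reusing the
# same keystream for two messages lets the keystream cancel out.
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(32)           # stands in for cipher output under one key
m1 = b"meet at the north entrance at 9"
m2 = b"abort mission and return to base"

c1 = xor_bytes(m1, keystream)        # ciphertext 1
c2 = xor_bytes(m2, keystream)        # ciphertext 2, same keystream reused

# An eavesdropper holding only c1 and c2 obtains m1 XOR m2 directly, which
# is often enough to recover both messages when fragments are guessable.
assert xor_bytes(c1, c2) == xor_bytes(m1, m2)
print(xor_bytes(c1, c2).hex())
```

Real attacks on E0 are more involved than this, but the demo shows why stream-cipher keys, and the keystreams derived from them, must be refreshed well before they are reused or exhausted.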
[ "Bluetooth is not the same as Radio Frequency. ", "bluetooth is less reliable than RF." ]
[ "Bluetooth is included in Radio Frequency. ", "Bluetooth is a form of RF." ]
[ "false presupposition" ]
[ "Bluetooth is not the same as Radio Frequency. ", "bluetooth is less reliable than RF." ]
[ "false presupposition", "false presupposition" ]
[ "Bluetooth is included in Radio Frequency. ", "Bluetooth is a form of RF." ]
2018-02103
How can someone actually feel that they're being watched?
In short, you can't, but your brain is normally paranoid, so it thinks people are watching and especially remembers the times that it was right rather than the times it was a false alarm. More specifically, psychology studies have shown that people tend to overestimate how often people are watching them (especially if they are embarrassed about what they are doing) and that your brain is much more likely to remember the exceptional cases where it is right (you look around and happen to lock eyes with someone) than the times when it was a false alarm (you look around and nothing is there), since being paranoid is generally a good survival trait when you are trying to survive in the wilderness like our ancestors did. The end result is that people tend to think they can tell when other people are watching when they really don't have that ability at all.
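The selection-bias account in this answer is easy to sanity-check with a toy simulation. All probabilities below are made-up illustrative numbers, not figures from the psychology studies the comment mentions:

```python
# Toy Monte Carlo of selective memory: hits (you turn around and someone IS
# looking) are remembered far more reliably than false alarms (nothing there).
import random

random.seed(0)
trials = 10_000
p_actually_watched = 0.10      # hypothetical: someone really is looking 10% of the time
p_remember_hit = 0.90          # vivid events are usually remembered
p_remember_false_alarm = 0.10  # non-events are usually forgotten

remembered_hits = remembered_false_alarms = 0
for _ in range(trials):
    watched = random.random() < p_actually_watched
    if watched and random.random() < p_remember_hit:
        remembered_hits += 1
    elif not watched and random.random() < p_remember_false_alarm:
        remembered_false_alarms += 1

recalled = remembered_hits / (remembered_hits + remembered_false_alarms)
print(f"actual rate someone was watching: {p_actually_watched:.0%}")
print(f"rate as reconstructed from memory: {recalled:.0%}")
```

With these (assumed) numbers, someone is actually watching only 10% of the time, but because confirmations are remembered nine times as reliably as false alarms, memory replays the feeling as having been right about half the time, which is easy to mistake for a real ability to sense being watched.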
[ "Being able to view inmates from the tower is seen almost as ‘God like’. The observer can watch over the cells without the inmates being able to see him in return. “If I can observe the watcher who spies upon me, I can control my surveillance, I can spy in turn, I can learn the watcher’s ways, his weaknesses, I can study his habits, I can elude him. If the eye is hidden, it looks at me even when it is not actually observing me. By concealing itself in the shadows, the eye can intensify all its powers.” \n", "Participants' behavior may have been shaped by knowing that they were watched (Hawthorne effect). Instead of being restrained by fear of an observer, guards may have behaved more aggressively when supervisors observing them did not step in to restrain them.\n", "BULLET::::- \"Freedom Wars\" is a PSVita action role-playing game which set in the dystopian future. Most of the human population were sentenced 1,000,000 years of imprisonment since they were born. They were dwelling inside the enclosed metropolitan cities named \"Panopticon\". The society were under heavy surveillance by numbers of \"Accessory\" androids. And the criminals were forced to hard labors of finding resources in the outside world, and then contribute to their government to exchange for few years amnesty or gain access to several human rights. In the game, the slogan \"We gaze at you\" is the parody of \"Big Brother is watching you\" in \"Nineteen Eighty-Four\".\n", "Watching-Eye Effect\n\nThe Watching-Eye Effect is related to the Hawthorne effect which describes that there is an observable change in behavior when people are being watched. It has been demonstrated that these effects are so pronounced that even depictions of eyes are enough to trigger them, in which case they are referred to as Watching-Eye Effect. Empirical psychological research has continually shown that the visible presence of images depicting eyes nudges people towards slightly, but measurably more honest and more pro-social behavior.\n", "In \"Did I Ever Tell You How Lucky You Are?\" by Dr. Seuss, the entire town of Hawtch-Hawtch is employed as watchers watching over other watchers leading to the first watcher who is watching the \"lazy town bee\" so it will work harder. Since the bee wasn't working harder, it was assumed the bee-watcher wasn't watching hard enough and needed to be watched.\n", "Companies who invest in such information from enclosures due to their potentially profit contributes to what Andrejevic describes as “the work of being watched”, individuals who unknowingly or willingly submit to giving up their information that generates such economic value in exchange for use of digital commodities. Andrejevic connects this idea back to the forcible separation of workers from the means of production – a process defined by Marx as “primitive” accumulation. Marx developed this from the first agrarian enclosure of land, where works were separated from the land in order to force workers to regain access to be contractually regulated.\n", "Similar findings have come from the United States, where Park Dietz has written: “Every instance of an attack on a public figure by a lone stranger in the United States for which adequate information has been made publicly available has been the work of a mentally disordered person who issued one or more pre-attack signals in the form of inappropriate letters, visits or statements...\" The role of FTAC in the UK is to detect such signals, to evaluate the risks involved and to intervene to reduce them. 
Such intervention often entails the obtaining of treatment and care for the fixated individual from psychiatric and social services and general practitioners in their town of residence.\n", "Panoptic surveillance was described by Michel Foucault in the context of a prison in which prisoners were isolated from each other but visible at all times by guards. Surveillance tends to isolate individuals from one another while setting forth a one-way visibility to authority figures. This isolation leads to social fragmentation.\n", "The concept is part of the psychology of surveillance and has implications for the areas of crime reduction and prevention without increasing actual surveillance, just by psychological measures alone.\n\nThe effect differs from the Psychic staring effect in that the latter describes the feeling of being watched, whereas individuals who succumb to the Watching-Eye Effect are usually aware that the eyes are only images.\n\nSection::::See also.\n\nBULLET::::- Fake security camera\n\nBULLET::::- Evil eye\n\nBULLET::::- Situation awareness\n\nBULLET::::- Decision-making\n\nBULLET::::- Gaze\n\nBULLET::::- Subject-expectancy effect\n\nBULLET::::- Eye contact\n\nSection::::Bibliography.\n", "In 2012 the Danish daily newspaper and online title Dagbladet Information crowdmapped the positions of surveillance cameras by encouraging readers to use a free Android and iOS app to photograph and geolocate CCTV cameras.\n\nSection::::Personal sousveillance.\n", "Section::::Discussion.:Tag Orientation.\n", "In modern psychology, vigilance, also termed sustained concentration, is defined as the ability to maintain concentrated attention over prolonged periods of time. During this time, the person attempts to detect the appearance of a particular target stimulus. The individual watches for a signal stimulus that may occur at an unknown time. The study of vigilance has expanded since the 1940s mainly due to the increased interaction of people with machines for applications involving monitoring and detection of rare events and weak signals. Such applications include air traffic control, inspection and quality control, automated navigation, military and border surveillance, and lifeguarding.\n", "The Hawthorne effect grew out of a series of studies. The theory states that an individual will act differently than they normally would due to the individual's awareness of being watched. Specifically in McGregor's X and Y theory, it states that the manager's approach has effects on the outcome of the worker. Individuals who receive attention from their superior will have positive feelings of receiving special treatment. Specifically, they feel that the attention they are receiving is unique from the attention that other employees are receiving. The basic understanding of superior-subordinate relationships lies in the foundation that the habits of a superior tend to have the power to create productive or counterproductive environments. \n", "Section::::Historical context.\n\nSection::::Historical context.:Major case studies.\n\nThere are many cases in which self-monitoring is used a variable of interest. Many recent studies look into the relationship with on-task behavior, work-place utilization, and leadership positions.\n\nA pilot study regarding on-task behavior was done with two high school students with symptoms of learning disabilities. 
These students were trained using a self-monitoring application and given prompts and the results showed positive, stable improvements in their on-task behavior after each individual's self-monitoring was increased.\n", "Observation involves participating in activities over a period of time and therefore becoming an accepted part of the group. An example is the research for \"A Glasgow Gang Observed\". A 26-year-old schoolmaster at a Scottish Reformatory (ListD) school, who called himself James Patrick, went undercover with the help of one of his pupils to study the often violent behaviour of the teenagers in a gang in Glasgow. He concealed all his personal information for his own safety.\n\nSection::::Advantages and disadvantages.\n", "Section::::Similar processes.\n\nSection::::Similar processes.:Self presentation.\n", "Section::::Discovery.\n", "BULLET::::- Overt observational research – The researchers identify themselves as researchers and explain the purpose of their observations. The problem with this approach is subjects may modify their behaviour when they know they are being watched. They portray their \"ideal self\" rather than their true self in what is called the Hawthorne Effect. The advantage that the overt approach has over the covert approach is that there is no deception (see, for example, PCIA-II\n\nBULLET::::- Participant Observation – The researcher participates in what they are observing so as to get a finer appreciation of the phenomena.\n\nSection::::In marketing research.\n", "In direct response to the panoptic and invasive forms of tracking manifesting themselves within the digital realm, some have turned to sousveillance: a form of inverse surveillance in which users can record those who are surveilling them, thereby empowering themselves. This form of counter surveillance, often used through small wearable recording devices, enables the subversion of corporate and government panoptic surveillance by holding those in power accountable and giving people a voice––a permanent video record––to push back against government abuses of power or malicious behavior that may go unchecked.\n", "In a study conducted by Fife, Nelson, and Bayles of focus groups from a Southeastern liberal arts university, five themes were ascertained regarding Facebook use and expectancy violations:\n\nBULLET::::- \"\"\"Don't stalk' – and when you do, don't talk about it\"\"\n\nBULLET::::- Though an understanding exists among Facebook participants that users will use the site to keep track of the behavior of others in a number of ways, excessive monitoring is likely to be perceived as an expectancy violation.\n\nBULLET::::- \"\"Don't embarrass me with bad pictures\"\"\n", "The first part of the test is related to the notion \"in plain view\". If a person did not undertake reasonable efforts to conceal something from a casual observer (as opposed to a snoop), then no subjective expectation of privacy is assumed.\n", "Three areas were investigated: assertiveness, firmness, and cooperation. In reference to the three areas respondents were asked the following: how they behave toward the target, how the target behaves toward them, and how they think they are viewed by the target. The study identified the looking glass self as a \"metaperception\" because it involves \"perception of perceptions.\" One of the hypotheses tested in the study was: If \"metaperceptions\" cause self-perceptions they will necessarily be coordinated. 
The hypothesis was tested at the individual and relationship levels of analysis\n\nSection::::Studies.:Family study.:Findings.\n", "From cookies to ultrasonic trackers, some argue that invasive forms of surveillance underscore how users are trapped in a digital panopticon, similar to the concept envisioned by Jeremy Bentham: a prison in which the prisoners were able to be seen at all times by guards but were unable to detect when, or even if, they were being watched at all, creating a sense of paranoia that drove prisoners to carefully police their own behavior. Similarly, scholars have drawn parallels between Bentham’s panopticon and today’s pervasive use of internet tracking in that individuals lack awareness to the vast disparities of power that exist between themselves and the corporation to which they willingly give their data. In essence, companies are able to gain access to consumers’ activity when they use a company’s services. The usage of these services often is beneficial, which is why users agree to exchange personal information. However, since users participate in this unequal environment, in which corporations hold most of the power and in which the user is obliged to accept the bad faith offers made by the corporations, users are operating in an environment that ultimately controls, shapes and molds them to think and behave in a certain way, depriving them of privacy.\n", "The man leaves the diner and walks toward a nearby town; he sees a parked truck with a driver, but both turn out to be mannequins. Like the diner, the rest of the town seems deserted, but the man seems to find evidence of someone being there recently. The man grows unsettled as he wanders through the empty town, needing someone to talk to but at the same time feeling that he is being watched. In a soda shop, the man notices an entire spinning rack of paperback books titled \"The Last Man on Earth, Feb. 1959\"; he grows upset and leaves. As night falls, the lights in the park turn on, leading the man to a movie theater, the marquee of which is illuminated. He remembers he is an Air Force airman from the advertised film, \"Battle Hymn\". When the film suddenly begins onscreen, he runs to the projection booth and finds nobody there, then becomes even more paranoid that he is being watched.\n", "Section::::Scale.:High self monitors.\n\nIndividuals who closely monitor themselves are categorized as high self-monitors. They often behave in a manner that is highly responsive to social cues and their situational context. High self-monitors can be thought of as social pragmatists who project images in an attempt to impress others and receive positive feedback. In comparison to low self-monitors, high self monitors participate in more expressive control and have concern for situational appropriateness. As these individuals are willing to adjust their behavior, others may perceive them to be more receptive, pleasant, and benevolent towards them.\n\nSection::::Scale.:Low vs high self monitors.\n" ]
[ "People can feel when someone is watching them." ]
[ "You can't actually feel this it just seems like you can because you remember the times you were correct more than the times that you were incorrect. " ]
[ "false presupposition" ]
[ "People can feel when someone is watching them." ]
[ "false presupposition" ]
[ "You can't actually feel this it just seems like you can because you remember the times you were correct more than the times that you were incorrect. " ]
2018-02071
Why do surgeons need to wash their hands for an extended period of time when normal sanitizer already kills 99.9% of all bacteria?
From what I understand, sanitizer kills the organisms, but those dead "corpses" are all still there. The body reacts the same way to a dead virus as it does to a live one. Washing probably cleans all that dead stuff off too.
[ "Sanitizing surfaces is part of nosocomial infection in health care environments. Modern sanitizing methods such as Non-flammable Alcohol Vapor in Carbon Dioxide systems have been effective against gastroenteritis, MRSA, and influenza agents. Use of hydrogen peroxide vapor has been clinically proven to reduce infection rates and risk of acquisition. Hydrogen peroxide is effective against endospore-forming bacteria, such as \"Clostridium difficile\", where alcohol has been shown to be ineffective. Ultraviolet cleaning devices may also be used to disinfect the rooms of patients infected with \"Clostridium difficile\" or MRSA after discharge.\n\nSection::::Prevention.:Antimicrobial surfaces.\n", "Two categories of micro-organisms can be present on health care workers' hands: transient flora and resident flora. The first is represented by the micro-organisms taken by workers from the environment, and the bacteria in it are capable of surviving on the human skin and sometimes to grow. The second group is represented by the permanent micro-organisms living on the skin surface (on the stratum corneum or immediately under it). They are capable of surviving on the human skin and to grow freely on it. They have low pathogenicity and infection rate, and they create a kind of protection from the colonization from other more pathogenic bacteria. The skin of workers is colonized by 3.9 x 10 – 4.6 x 10 cfu/cm. The microbes comprising the resident flora are: \"Staphylococcus epidermidis\", \"S. hominis\", and \"Microccocus\", \"Propionibacterium, Corynebacterium, Dermobacterium\", and \"Pitosporum\" spp., while transient organisms are \"S. aureus\", and \"Klebsiella pneumoniae\", and \"Acinetobacter, Enterobacter\" and \"Candida\" spp. The goal of hand hygiene is to eliminate the transient flora with a careful and proper performance of hand washing, using different kinds of soap, (normal and antiseptic), and alcohol-based gels. The main problems found in the practice of hand hygiene is connected with the lack of available sinks and time-consuming performance of hand washing. An easy way to resolve this problem could be the use of alcohol-based hand rubs, because of faster application compared to correct hand-washing.\n", "The most common brands of alcohol hand rubs include Aniosgel, Avant, Sterillium, Desderman and Allsept S. All hospital hand rubs must conform to certain regulations like EN 12054 for hygienic treatment and surgical disinfection by hand-rubbing. Products with a claim of \"99.99% reduction\" or 4-log reduction are ineffective in hospital environment, since the reduction must be more than \"99.99%\".\n", "Improving patient hand washing has also been shown to reduce the rate of nosocomial infection. Patients who are bed-bound often do not have as much access to clean their hands at mealtimes or after touching surfaces or handling waste such as tissues. By reinforcing the importance of handwashing and providing santizing gel or wipes within reach of the bed, nurses were directly able to reduce infection rates. A study published in 2017 demonstrated this by improving patient education on both proper hand-washing procedure and important times to use sanitizer and successfully reduced the rate of enterococci and \"S. aureus\".\n", "For health care settings like hospitals and clinics, optimum alcohol concentration to kill bacteria is 70% to 95%. 
Products with alcohol concentrations as low as 40% are available in American stores, according to researchers at East Tennessee State University.\n\nAlcohol rub sanitizers kill most bacteria and fungi and stop some viruses. Alcohol rub sanitizers containing at least 70% alcohol (mainly ethyl alcohol) kill 99.9% of the bacteria on hands 30 seconds after application and 99.99% to 99.999% in one minute.\n", "Several methods have been tested for their effectiveness at improving thorough intensive-care unit environmental hygiene. A study conducted in 2010 across 3532 high-risk environmental surfaces in 260 intensive care unit rooms in 27 acute-care hospitals (ICUs) assessed the consistency with which these surfaces met baseline cleaning standards. Only 49.5% of the high-risk object surfaces were found to meet this baseline criterion. The least-cleaned objects were bathroom light switches, room door knobs, and bed pan cleaners. Significant improvements in ICU room cleaning were achieved through a structured approach that incorporated a simple, highly objective surface targeting method and repeated performance feedback to environmental surface personnel. Specific methods included implementing an objective evaluation process, environmental surfaces staff education, programmatic feedback, and continuous training to minimize the spread of hospital-associated infections. The authors noted an improvement in the thoroughness of cleaning at 71% from baseline for the entire group of hospitals involved.\n", "The World Health Organization (WHO) and the CDC recommend \"persistent\" antiseptics for hand sanitizers. Persistent activity is defined as the prolonged or extended antimicrobial activity that prevents or inhibits the proliferation or survival of microorganisms after application of the product. This activity may be demonstrated by sampling a site several minutes or hours after application and demonstrating bacterial antimicrobial effectiveness when compared with a baseline level. This property also has been referred to as \"residual activity.\" Both substantive and nonsubstantive active ingredients can show a persistent effect if they substantially lower the number of bacteria during the wash period.\n", "The Centers for Disease Control and Prevention in the USA recommends hand washing over hand sanitizer rubs, particularly when hands are visibly dirty. The increasing use of these agents is based on their ease of use and rapid killing activity against micro-organisms; however, they should not serve as a replacement for proper hand washing unless soap and water are unavailable.\n", "On April 30, 2015, the FDA announced that they were requesting more scientific data on the safety of hand sanitizer. Emerging science also suggests that for at least some health care antiseptic active ingredients, systemic exposure (full body exposure as shown by detection of antiseptic ingredients in the blood or urine) is higher than previously thought, and existing data raise potential concerns about the effects of repeated daily human exposure to some antiseptic active ingredients. This would include hand antiseptic products containing alcohol and triclosan.\n\nSection::::Chemistry.\n", "Alcohol-free hand sanitizers may be effective immediately while on the skin, but the solutions themselves can become contaminated because alcohol is an in-solution preservative and without it, the alcohol-free solution itself is susceptible to contamination. 
However, even alcohol-containing hand sanitizers can become contaminated if the alcohol content is not properly controlled or the sanitizer is grossly contaminated with microorganisms during manufacture. In June 2009, alcohol-free Clarcon Antimicrobial Hand Sanitizer was pulled from the US market by the FDA, which found the product contained gross contamination of extremely high levels of various bacteria, including those which can \"cause opportunistic infections of the skin and underlying tissues and could result in medical or surgical attention as well as permanent damage\". Gross contamination of any hand sanitizer by bacteria during manufacture will result in the failure of the effectiveness of that sanitizer and possible infection of the treatment site with the contaminating organisms.\n", "Micro-organisms are known to survive on inanimate ‘touch’ surfaces for extended periods of time. This can be especially troublesome in hospital environments where patients with immunodeficiencies are at enhanced risk for contracting nosocomial infections.\n", "To prevent spreading\" Klebsiella\" infections between patients, healthcare personnel must follow specific infection-control precautions, which may include strict adherence to hand hygiene (preferably using an alcohol based hand rub (60-90%) or soap and water if hands are visibly soiled. Alcohol based hand rubs are effective against these Gram-negative bacilli) and wearing gowns and gloves when they enter rooms where patients with \"Klebsiella\"–related illnesses are housed. Healthcare facilities also must follow strict cleaning procedures to prevent the spread of \"Klebsiella\".\n\nTo prevent the spread of infections, patients also should clean their hands very often, including:\n\nBULLET::::- Before preparing or eating food\n", "Skin flora do not readily pass between people: 30 seconds of moderate friction and dry hand contact results in a transfer of only 0.07% of natural hand flora from naked with a greater percentage from gloves.\n\nSection::::Hygiene.:Removal.\n\nThe most effective (60 to 80% reduction) antimicrobial washing is with ethanol, isopropanol, and n-propanol. Viruses are most affected by high (95%) concentrations of ethanol, while bacteria are more affected by n-propanol.\n\nUnmedicated soaps are not very effective as illustrated by the following data. Health care workers washed their hands once in nonmedicated liquid soap for 30 seconds. The students/technicians for 20 times.\n", "An important use of hand washing is to prevent the transmission of antibiotic resistant skin flora that cause hospital-acquired infections such as Methicillin-resistant \"Staphylococcus aureus\". While such flora have become antibiotic resistant due to antibiotics there is no evidence that recommended antiseptics or disinfectants selects for antibiotic-resistant organisms when used in hand washing. However, many strains of organisms are resistant to some of the substances used in antibacterial soaps such as Triclosan.\n", "A number of compounds can decrease the risk of bacteria growing on surfaces including: copper, silver, and germicides.\n\nThere have been a number of studies evaluating the use of no-touch cleaning systems particularly the use of ultraviolet C devices. One review was inconclusive due to lack of, or of poor quality evidence. 
Other reviews have found some evidence, and growing evidence of their effectiveness.\n\nSection::::Treatment.\n", "There are certain situations during which hand washing with water and soap are preferred over hand sanitizer, these include: eliminating bacterial spores of \"Clostridioides difficile\", parasites such as \"Cryptosporidium\", and certain viruses like norovirus depending on the concentration of alcohol in the sanitizer (95% alcohol was seen to be most effective in eliminating most viruses). In addition, if hands are contaminated with fluids or other visible contaminates, hand washing is preferred as well as when after using the toilet and if discomfort develops from the residue of alcohol sanitizer use. Furthermore, CDC recommends hand sanitizers are not effective in removing chemicals such as pesticides.\n", "Hand sanitizer that contains at least 60 percent alcohol or contains a \"persistent antiseptic\" should be used. Alcohol rubs kill many different kinds of bacteria, including antibiotic resistant bacteria and TB bacteria. 90% alcohol rubs are highly flammable, but kill many kinds of viruses, including enveloped viruses such as the flu virus, the common cold virus, and HIV, though is notably ineffective against the rabies virus.\n", "In a 1998 study using the FDA protocol, a non-alcohol sanitizer with benzalkonium chloride as the active ingredient met the FDA performance standards, while Purell, a popular alcohol-based sanitizer, did not. The study, which was undertaken and reported by a leading US developer, manufacturer and marketer of topical antimicrobial pharmaceuticals based on quaternary ammonium compounds, found that their own benzalkonium chloride-based sanitizer performed better than alcohol-based hand sanitizer after repeated use.\n", "In the United States, OSHA standards require that employers must provide readily accessible hand washing facilities, and must ensure that employees wash hands and any other skin with soap and water or flush mucous membranes with water as soon as feasible after contact with blood or other potentially infectious materials (OPIM).\n\nIn the UK healthcare professionals have adopted the 'Ayliffe Technique', based on the 6 step method developed by Graham Ayliffe, JR Babb and AH Quoraishi.\n", "Alcohol-based hand sanitizer is more convenient compared to hand washing with soap and water in most situations in the healthcare setting. It is generally more effective at killing microorganisms and better tolerated than soap and water. Hand washing should still be carried out if contamination can be seen or following the use of the toilet.\n", "The inappropriate use of PPE equipment such as gloves, has been linked to an increase in rates of the transmission of infection, and the use of such must be compatible with the other particular hand hygiene agents used.\n\nSection::::Infection control in healthcare facilities.:Antimicrobial surfaces.\n\nMicroorganisms are known to survive on non-antimicrobial in animate ‘touch’ surfaces (e.g., bedrails, over-the-bed trays, call buttons, bathroom hardware, etc.) for extended periods of time. This can be especially troublesome in hospital environments where patients with immunodeficiencies are at enhanced risk for contracting nosocomial infections.\n", "First, mechanical indicators and gauges on the machine itself indicate proper operation of the machine. Second heat sensitive indicators or tape on the sterilizing bags change color which indicate proper levels of heat or steam. 
And, third (most importantly) is biological testing in which a microorganism that is highly heat and chemical resistant (often the bacterial endospore) is selected as the standard challenge. If the process kills this microorganism, the sterilizer is considered to be effective.\n", "Researchers found environmental reservoirs of CRE bacteria in ICU sinks and drains. Despite multiple attempts to sterilize these sinks and drains, using detergents and steam, the hospital staff was unsuccessful in getting rid of the CRE. Due to the bacterial resistance to cleaning measures, staff should take extreme precaution in maintaining sterile environments in hospitals not yet infected with the CRE-resistant bacteria.\n", "Cleaning methods for removing and sanitizing biohazards vary from practitioner to practitioner. Some organizations are working to create a \"Standard of Clean\" such as ISSA's K12 Standard, which includes use of quantifiable testing methods such as ATP testing.\n\nSection::::Organizations.\n", "Alcohol-free hand sanitizer efficacy is heavily dependent on the ingredients and formulation, and historically has significantly under-performed alcohol and alcohol rubs. More recently, formulations that use benzalkonium chloride have been shown to have persistent and cumulative antimicrobial activity after application, unlike alcohol, which has been shown to decrease in efficacy after repeated use, probably due to progressive adverse skin reactions.\n\nSection::::Substances used.:Ash or mud.\n" ]
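One of the passages above treats "99.99% reduction" and "4-log reduction" as the same claim. A short check of that equivalence, with a purely hypothetical starting count (this sketch is illustrative and not from any cited source):

    # A k-log reduction divides the microbial count by 10**k.
    start = 1_000_000            # hypothetical CFU before treatment
    k = 4                        # a "4-log" reduction
    remaining = start / 10**k    # 100 CFU remain
    reduction = 1 - remaining / start
    print(f"{reduction:.2%}")    # -> 99.99%

So a 4-log claim still leaves 1 organism in 10,000 behind, which is why the passage says hospital products must reduce by more than "99.99%".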
[ "Normal hand sanitizer will clean surgeons' hands.", "Because hand sanitizer kills 99.9% of bacteria, then surgeons should not have to wash their hands frequently." ]
[ "Hand sanitizer kills the organisms but does not remove them.", "The corpses of the bacteria remain on the hands, and the body reacts to dead bacteria in the same way it does live bacteria, therefore frequently washing hands is necessary." ]
[ "false presupposition" ]
[ "Normal hand sanitizer will clean surgeons' hands.", "Because hand sanitizer kills 99.9% of bacteria, then surgeons should not have to wash their hands frequently." ]
[ "false presupposition", "false presupposition" ]
[ "Hand sanitizer kills the organisms but does not remove them.", "The corpses of the bacteria remain on the hands, and the body reacts to dead bacteria in the same way it does live bacteria, therefore frequently washing hands is necessary." ]
2018-04020
How can television and radio channels know how many people are watching or listening at a specific time?
Assuming you're talking about Nielsen ratings? They don't know precisely, but they send out surveys for people to fill out and feed the sampled responses into a statistical model to estimate the total audience. Online television/radio streams do get precise numbers.
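A minimal sketch of the kind of extrapolation the comment above describes: scale a panel's viewing proportion up to the full universe and attach a sampling margin of error. Every number here (panel size, universe size, viewing count) is invented for illustration; this is not Nielsen's actual methodology.

    # Illustrative only: estimate national viewership from a sample panel.
    import math

    panel_size = 5000        # hypothetical households in the measurement panel
    universe = 120_000_000   # hypothetical total TV households
    watching = 600           # panel households tuned to the program

    p_hat = watching / panel_size   # sample proportion: 0.12, i.e. a "12 rating"
    estimate = p_hat * universe     # ~14.4 million households nationally

    # Rough 95% confidence interval, treating the panel as a simple random sample
    se = math.sqrt(p_hat * (1 - p_hat) / panel_size)
    low, high = (p_hat - 1.96 * se) * universe, (p_hat + 1.96 * se) * universe
    print(f"estimate: {estimate:,.0f} households ({low:,.0f} - {high:,.0f})")

The interval is the point of the comment's caveat: a panel yields an estimate with quantifiable uncertainty, not a precise count.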
[ "In 2005, ACNielsen initiated their MVP (Media Voice Panel) program. Panel members carry an electronic monitor that detects the digital station and program identification codes hidden within the TV and radio broadcasts they are exposed to. At night, members place the monitor in a cradle that sends the collected data through the home's electrical wiring to a relay device that transmits it by phone, making it one of the first practical uses of electrical wiring as a home network. With an approximately one week notice to members, the MVP program ended on March 17, 2008.\n", "Another version of the device is small, about the size of a beeper, that plugs into the wall below or near each TV set in household. It monitors anything that comes on the TV and relays the information with the small Portable People Meter to narrow down who is watching what and when. \n\nThe device, known as a 'frequency-based meter', was invented by a British company called Audits of Great Britain (AGB). The successor company to AGB is TNS, which is active in 34 countries around the globe.\n", "Section::::Active Position Detection.\n", "In the 1980s, the company launched a new measurement device known as the \"people meter\". The device resembles a remote control with buttons for each individual family member and extras for guests. Viewers push a button to signify when they are in the room and push it again when they leave, even if the TV is still on. This form of measurement was intended to provide a more accurate picture of who was watching and when.\n", "Originally, these meters identified the frequency of the channels - VHF or UHF - watched on the viewer's TV set. This system became obsolete when Direct to Home (DTH) satellite dish became popular and viewers started to get their own satellite decoders. In addition, this system doesn't measure digital broadcasts.\n", "BULLET::::2. A more technologically sophisticated system uses Set Meters, which are small devices connected to televisions in selected homes. These devices gather the viewing habits of the home and transmit the information nightly to Nielsen through a \"Home Unit\" connected to a phone line. The technology-based home unit system is meant to allow market researchers to study television viewing habits on a minute to minute basis, recording the moment viewers change channels or turn off their television set. In addition to set meters, individual viewer reporting devices, such as people meters, have allowed the company to separate household viewing information into various demographic groups.\n", "Section::::Reception.\n\nSection::::Reception.:Broadcast and ratings.\n", "BULLET::::- In Germany TV audience measurement is done by Gesellschaft für Konsumforschung (known as GfK).\n", "A similar term used by Nielsen Media Research is the Designated Market Area (DMA), and they control the trademark on it. DMAs are used by Nielsen Media Research to identify TV stations that best reach an area and attract the most viewers. There are 210 Nielsen DMAs in the United States, 70 of which are metered (in other words, viewership in these markets are estimated automatically instead of through the archaic diary system still in use in the smaller markets).\n", "Nielsen Media Research. It is used by media planners and buyers, advertisers to appropriate the ratings of channels when individual viewership is calculated. 
This term is usually used in the US to represent average percentage of People using TV across all channels within predefined time period.\n", "People meter\n\nA people meter is an audience measurement tool used to measure the viewing habits of TV and cable audiences.\n\nSection::::Meter.\n", "Section::::Premiere and reception.\n", "In 1987, stations in the European Broadcasting Union began offering Radio Data System (RDS), which provides written text information about programs that were being broadcast, as well as traffic alerts, accurate time, and other teletext services.\n\nSection::::1970s, 1980s, and 1990s.:Sri Lanka.\n", "BULLET::::- In Japan Video Research Ltd. handles radio and television measurement. IRS (Indian Readership Survey) is the industry currency for newspaper and magazine readership\n\nBULLET::::- In Kazakhstan, TV measurement is handled by TNS.\n\nBULLET::::- In the Kingdom of Saudi Arabia the measurement is done by GfK at the request of Saudi Media Measurement Company (SMMC).\n\nBULLET::::- In Lithuania, TV and radio measurements are handled by TNS Gallup.\n", "Nielsen's formula for PUT is the number of persons viewing TV divided by the total persons universe i.e. the television rating divided by the total share of television in a particular demographic area.\n", "BULLET::::- A Sensing-only device is a TVBD that determines which channels are available by monitoring for activity. They must listen for 30 seconds to determine the channel is not in use and check again for activity once every minute. They must limit their power to 50 mW and only operate on channels 21 and above. A sensing-only device must vacate a channel within two seconds of detecting non-TVBD activity.\n", "Section::::History.\n\nSection::::History.:Television.\n\nIn 1981, Audimat was launched in France by the Centre d’Étude d’Opinion (CEO), an opinion research company. It was based on a panel of 600 households equipped with audience meters. Measurement was carried out on a daily basis with results reported the following morning, however, no indication was given as to the profile of the audience watching the television set.\n", "Before the People Meter advances, Nielsen used the diary method, which consisted of viewers physically recording the shows they watched. However, there were setbacks with the system. Lower-rated stations claimed the diary method was inaccurate and biased. They argued that because they had lower ratings, those who depended on memory for the diary method may only remember to track their favorite shows. Stations also argued that if it wasn’t low ratings that skewed the diary method, it might also be the new variety of channels for viewers to choose from. Viewers may not be able to record everything they watch and there is no way of discovering the truth. Finally in 1986, Nielsen developed an electronic meter, People Meter, to solve the problem. The People Meter is an electronic method of television measurement that moved from active and diary-based to passive and meter-monitored. The meter also recorded real time simultaneously viewing, reducing memory bias.\n", "This works as each network sends its signal to many local affiliated television stations across the country. These local stations then carry the \"network feed,\" which can be viewed by millions of households across the country. 
In such cases, the signal is sent to as many as 200+ stations or as little as just a dozen or fewer stations, depending on the size of the network.\n", "BULLET::::- Near: Within a couple of meters\n\nBULLET::::- Far: Greater than 10 meters away\n\nAn iBeacon broadcast has the ability to approximate when a user has entered, exited, or lingered in region. Depending on a customer's proximity to a beacon, they are able to receive different levels of interaction at each of these three ranges.\n", "PUTs is calculated by considering the average audience figures gauged from the peopleometer, for all channels during a particular time period and adding them together to get the cumulative number. Put is used to calculate the demographic persons rating. Almost all media scales are technically based on person's ratings. It is observed that the percentage rating remains constant from year to year. If changes are observed, then they are slight in nature and only due to changes in viewing habits.\n\nSection::::PUT and PVT.\n", "During the 1980s, the Arbitron Company was developing the \"Portable People Meter\", or \"PPM\", technology to replace its self-administered, seven-day radio diary method to collect radio listening data from Arbitron survey participants. The radio diary had been the most generally accepted method of measuring radio listening since 1964. Rantel became an early evangelist for the new PPM method, because Rantel researchers had performed many audits of Arbitron radio diaries during its early years and were keenly aware of the weaknesses of the seven-day radio diary method.\n", "Section::::Premise.\n\nThe Nielsens (named after the Nielsen ratings) are a family of fictional characters from a 1950s' sitcom that has been canceled; they have been relocated to a real world New Jersey suburb in 1991, which is different from the world they know. They use a device called a Turnerizer (named after Ted Turner) to switch between color and black-and-white within their home. Mike Duff, the teenage son of the family next door, is the only real-world person who knows their secret.\n", "Each year, Nielsen processes approximately two million paper diaries from households across the country, for the months of November, February, May and July—also known as the \"sweeps\" rating periods. The term \"sweeps\" dates from 1954, when Nielsen collected diaries from households in the Eastern United States first; from there they would \"sweep\" west. Seven-day diaries (or eight-day diaries in homes with DVRs) are mailed to homes to keep a tally of what is watched on each television set and by whom. Over the course of a sweeps period, diaries are mailed to a new panel of homes each week. At the end of the month, all of the viewing data from the individual weeks is aggregated.\n", "Section::::Calculation.\n\nIt is a term coined by Nielsen Media Research. It refers to the total number of people in a particular demographic area, that are watching television during a given time period. Nielsen defines “PUT as a percentage of the population or as a number that represents the thousands of persons viewing television.” The formula used to calculate PUT is similar to HUT [Houses Using Television]\n\nPUT = (Rating / Share) x 100\n" ]
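As a quick worked example of the PUT formula quoted at the end of the passages above (the rating and share figures are invented for illustration):

    # PUT = (Rating / Share) x 100, per the passage above.
    rating = 10.0  # hypothetical: 10% of the total persons universe watched the program
    share = 25.0   # hypothetical: the program drew 25% of everyone using TV
    put = rating / share * 100
    print(put)     # 40.0 -> 40% of the universe was using TV in that period

In words: if a program reaches 10% of all people while holding 25% of the active audience, then 10/25 of the population, i.e. 40%, had a TV on at the time.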
[]
[]
[ "normal" ]
[ "TV stations know how many people are listening or watching." ]
[ "false presupposition", "normal" ]
[ "They don't actually know they extrapolate based on survey data from a limited sample size. " ]
2018-01936
If human and other mammal babies live off of milk for the first few months of their lives, why can't adults live off of milk/formula as an all-around nutrition supplement?
People are covering a lot of ground here but missing your question. Basically, babies are born with a reservoir of nutrients (such as iron) that are not present in mom's breast milk. They are in fact born with enough iron and various other nutrients to survive off milk for many months - but for best health you'll want to supplement after the first couple. So the answer is that babies can't survive off milk alone forever either, and eventually need to eat food with the missing nutrients. Adults can do roughly the same thing - you can survive (but not thrive) off potatoes and cow's milk and absolutely nothing else.
[ "Section::::Infant formula processing.:Recent and future potential new ingredients.:Prebiotics.\n\nPrebiotics are undigestible carbohydrates that promote the growth of probiotic bacteria in the gut. Human milk contains a variety of oligosaccharides believed to be an important factor in the pattern of microflora colonization of breastfed infants. Because of variety, variability, complexity and polymorphism of the oligosaccharide composition and structure, it is currently not feasible to reproduce the oligosaccharide components of human milk in a strictly structural fashion.\n", "One of the health concerns associated with the introduction of solid foods before six months is iron deficiency. The early introduction of complementary foods may satisfy the hunger of the infant, resulting in less frequent breastfeeding and ultimately less milk production in the mother. Because iron absorption from human milk is depressed when the milk is in contact with other foods in the proximal small bowel, early use of complementary foods may increase the risk of iron depletion and anemia.\n", "If the mother's milk supply is insufficient, formula or (preferably) donor milk is necessary in order for the infant to obtain adequate nutrients. Supplements should be given immediately after a breastfeeding session, rather than in place of a breastfeeding session.\n\nThe use of supplements is gradually tapered off as the mother's own milk supply rebounds. In some cases, especially when low supply is caused by medical conditions such as insufficient glandular tissue, long-term use of supplements is necessary. For mothers who cannot breastfeed exclusively, breastfeeding as much as possible, with formula feeding as necessary, offers many benefits over formula alone.\n", "Humans may consume dairy milk for a variety of reasons, including tradition, availability and nutritional value (especially minerals like calcium, vitamins such as B, and protein). Dairy milk substitutes may be expected to meet such standards, though there are no legal requirements for them to do so. This may result in additives being put into milk substitutes to compensate for the absence of certain vitamins, minerals and/or proteins. Infant formula, whether based on cow's milk, soy or rice, is usually fortified with iron and other dietary nutrients.\n", "Some of the metabolites directly affect the nervous system or the brain and can sometimes influence the development and behavior of children in the long term. There are studies that indicate certain HMOs supply the child with sialic acid residues. Sialic acid is an essential nutrient for the development of the child’s brain and mental abilities.\n\nHMOs are used as supplements in baby food to ensure a provision of babies that are not being breastfed with this important component of the human milk.\n\nSection::::Evolution.\n", "Most women that do not breastfeed use infant formula, but breast milk donated by volunteers to human milk banks can be obtained by prescription in some countries. In addition, research has shown that women who rely on infant formula could minimize the gap between the level of immunity protection and cognitive abilities a breastfed child benefits from versus the degree to which a bottle-fed child benefits from them. 
This can be done by supplementing formula-fed infants with bovine milk fat globule membranes (MFGM) meant to mimic the positive effects of the MFGMs which are present in human breast milk.\n", "Section::::Preparation and content.:Nutritional content.\n\nBesides breast milk, infant formula is the only other milk product which the medical community considers nutritionally acceptable for infants under the age of one year (as opposed to cow's milk, goat's milk, or follow-on formula). Supplementing with solid food in addition to breast milk or formula begins during weaning, and most babies begin supplementing about the time their first teeth appear, usually around the age of six months.\n", "Although cow's milk is the basis of almost all infant formula, plain cow's milk is unsuited for infants because of its high casein content and low whey content, and untreated cow's milk is not recommended before the age of 12 months. The infant intestine is not properly equipped to digest non-human milk, and this may often result in diarrhea, intestinal bleeding and malnutrition. To reduce the negative effect on the infant's digestive system, cow's milk used for formula undergoes processing to be made into infant formula. This includes steps to make protein more easily digestible and alter the whey-to-casein protein balance to one closer to human milk, the addition of several essential ingredients (often called \"fortification\", see below), the partial or total replacement of dairy fat with fats of vegetable or marine origin, etc.\n", "It is important to know that some foods are restricted for infants. For example, whether breast- or bottle-fed, infants do not need additional fluids during the first four months of life. Excessive intake of extra fluids or supplements can have harmful effects. Fluids besides human breast milk or iron-enriched formula are not recommended. These substitutes, such as milk, juice, and water do not require what the infant needs to grow and develop, cannot be digested correctly, and have a high risk of being contaminated. Water is acceptable only for mixing formula. Honey also must be avoided because there is a high risk of botulism. Breast milk is the safest thing to give, unless the mother is advised against it by a health care professional.\n", "BULLET::::- Carbohydrates are an important source of energy for growing infants, as they account for 35 to 42% of their daily energy intake. In most cow's milk-based formulas, lactose is the main source of carbohydrates present, but lactose is not present in cow's milk-based lactose-free formulas nor specialized non-milk protein formulas or hydrolyzed protein formulas for infants with milk protein sensitivity. Lactose is also not present in soy-based formulas. Therefore, those formulas without lactose will use other sources of carbohydrates, such as sucrose and glucose, dextrins, and natural and modified starches. Lactose is not only a good source of energy, it also aids in the absorption of the minerals magnesium, calcium, zinc and iron.\n", "Despite the recommendation that babies be exclusively breastfed for the first 6 months, less than 40% of infants below this age are exclusively breastfed worldwide. 
The overwhelming majority of American babies are not exclusively breastfed for this period – in 2005 under 12% of babies were breastfed exclusively for the first 6 months, with over 60% of babies of 2 months of age being fed formula, and approximately one in four breastfed infants having infant formula feeding within two days of birth.\n", "Section::::Infant formula processing.:Recent and future potential new ingredients.:Lysozyme and lactoferrin.\n\nLysozyme is an enzyme that is responsible for protecting the body by damaging bacterial cell walls. Lactoferrin is a globular, multifunctional protein that has antimicrobial activity. Compared to human milk, cow’s milk has significantly lower levels of lysozyme and lactoferrin; therefore, the industry has an increasing interest in adding them into infant formulas.\n\nSection::::See also.\n\nBULLET::::- 2008 Chinese milk scandal\n\nBULLET::::- Baby food\n\nBULLET::::- Baby bottle\n\nBULLET::::- Breastfeeding\n\nBULLET::::- Breast milk\n\nBULLET::::- Child development\n\nBULLET::::- Daigou\n\nBULLET::::- Dairy allergy\n\nBULLET::::- List of dairy products\n\nSection::::External links.\n", "Section::::History.:Evaporated milk formulas.\n\nIn the 1920s and 1930s, evaporated milk began to be widely commercially available at low prices, and several clinical studies suggested that babies fed evaporated milk formula thrive as well as breastfed babies.\n", "Meeting the nutritional needs of infants as they grow is essential for their healthy development. Feeding infants inappropriately or insufficiently can cause major illnesses and affect their physical and mental development. Educational campaigns that share information on when to introduce solid foods, appropriate types of foods to feed an infant, and hygiene practices are effective at improving these feeding practices. \n\nSection::::Nutritional needs and the amount of food.\n\nNewborns need a diet of breastmilk or infant formula. About 40% of the food energy in these milks comes from carbohydrates, mostly from a simple sugar called lactose.\n", "Section::::Health Benefits of MFGM.:Brain development and cognitive function.:Clinical data.\n\nSeveral studies of diets supplemented with MFGM and its components, including gangliosides and sphingomyelin, have aimed to address measures of cognitive development in pediatric populations. In some of the studies, MFGM supplementation to infant formula was shown to narrow the gap in cognitive development between breastfed and formula-fed infants.\n\nTanaka et al. (2013) studied the neurobehavioral effects of feeding formula supplemented with sphingomyelin-enriched phospholipid in 24 very low birth weight preterm infants (birth weight 1500 g).\n", "Infants with classic galactosemia cannot digest lactose and therefore cannot benefit from breast milk. Breastfeeding might harm the baby also if the mother has untreated pulmonary tuberculosis, is taking certain medications that suppress the immune system, has HIV, or uses potentially harmful substances such as cocaine, heroin, and amphetamines. Other than cases of acute poisoning, no environmental contaminant has been found to cause more harm to infants than lack of breastfeeding. Although heavy metals such as mercury are dispersed throughout the environment and are of concern to the nursing infant, the neurodevelopmental benefits of human milk tend to override the potential adverse effects of neurotoxicants.\n", "Infant Nutrition is the description of the dietary needs of infants. 
A diet lacking essential calories, minerals, vitamins, or fluids is considered inadequate. Breast milk provides the best nutrition for these vital first months of growth when compared to formula. For example, breastfeeding aids in preventing anemia, obesity, and sudden infant death syndrome; and it promotes digestive health, immunity, and intelligence. The American Academy of Pediatrics recommends exclusively feeding an infant breast milk, or iron-fortified formula, for the first twelve months of life. Infants are usually not introduced to solid foods until four to six months of age. Historically, breastfeeding was the only option for nourishing infants; otherwise the infant would perish. Breastfeeding is rarely contraindicated, but is not recommended for mothers being treated for cancer, those with active tuberculosis, HIV, substance abuse, or leukemia. Clinicians can be consulted to determine what is best for each baby.\n", "The World Health Organization (WHO) and the Pan American Health Organization currently recommend feeding infants only breast milk for the first six months of life. If the infant is being fed formula, it must be iron-enriched. An infant that receives exclusively breast milk for the first six months rarely needs additional vitamins or minerals. However, vitamins D and B12 may be needed if the breastfeeding mother does not have a proper intake of these vitamins. In fact, the American Academy of Pediatrics suggests all infants, breastfed or not, take a vitamin D supplement within the first days of life to prevent a deficiency or rickets. Exclusively breastfed infants will also require an iron supplement after four months, because the iron is depleted at this point from the breast milk.\n", "A very large meta-analysis investigated the effect of probiotics on preventing late-onset sepsis (LOS) in neonates. Probiotics were found to reduce the risk of LOS, but only in babies who were fed human milk exclusively. It is difficult to distinguish whether the prevention was a result of the probiotic supplementation or of the properties of human milk. It is also still unclear if probiotic administration reduces LOS risk in extremely low birth weight infants due to the limited number of studies that investigated it. Out of the 37 studies included in this systematic review, none indicated any safety problems related to the probiotics. It would be beneficial for future studies to clarify the relationship between probiotic supplementation and human milk in order to prevent late-onset sepsis in neonates.\n", "Manufacturers state that the composition of infant formula is designed to be roughly based on a human mother's milk at approximately one to three months postpartum; however, there are significant differences in the nutrient content of these products. The most commonly used infant formulas contain purified cow's milk whey and casein as a protein source, a blend of vegetable oils as a fat source, lactose as a carbohydrate source, a vitamin-mineral mix, and other ingredients depending on the manufacturer. In addition, there are infant formulas using soybean as a protein source in place of cow's milk (mostly in the United States and Great Britain) and formulas using protein hydrolysed into its component amino acids for infants who are allergic to other proteins. 
An upswing in breastfeeding in many countries has been accompanied by a deferment in the average age of introduction of baby foods (including cow's milk), resulting in both increased breastfeeding and increased use of infant formula between the ages of 3- and 12-months.\n", "Human breast milk contains all of the essential nutrients an infant needs for optimal growth, development and long-term health, including the fatty acids, DHA and ARA. DHA and ARA play key roles in the structure and function of human tissues, immune function, and brain and retinal development during gestation and infancy. \n", "Some concerns that surround human milk bank include:\n\nBULLET::::- Cost\n\nBULLET::::- Availability\n\nBULLET::::- Lack of health care provider interest\n\nBULLET::::- Concern about the type of women who might donate\n\nSection::::Consumers.\n\nAfter the milk has been donated the primary consumer of the milk are premature babies; other consumers include adults with medical complications or conditions. The main reason why premature babies consume donor milk is that the mother cannot provide milk for the baby. The donor milk therefore acts as a substitute.\n\nSection::::Health benefits of human milk banks.\n", "Conversely, the natural diet of an infant up to age one is breast milk (or a synthetic equivalent such as formula). It is important for parents to not decrease the volume of milk feeds until around one year of age or until the baby is taking in enough solid foods to support weight-gain (AAP, 2013). Proponents of BLW would argue that breast-feeding mothers should change their own diet to improve the infant’s nutrition before pushing for increase solid food intake (Rapley & Murkett, 2008).\n", "These products have also recently fallen under criticism for contributing to the childhood obesity epidemic in some developed countries due to their marketing and flavoring practices.\n\nSection::::History.:Usage since 1970s.\n\nSince the early 1970s, industrial countries have witnessed a resurgence in breastfeeding among newborns and infants to 6 months of age. This upswing in breastfeeding has been accompanied by a deferment in the average age of introduction of other foods (such as cow's milk), resulting in increased use of both breastfeeding and infant formula between the ages of 3–12 months.\n", "Whole cow's milk contains too little iron, retinol, vitamin E, vitamin C, vitamin D, unsaturated fats or essential fatty acids for human babies. Whole cow's milk also contains too much protein, sodium, potassium, phosphorus and chloride which may put a strain on an infant's immature kidneys. In addition, the proteins, fats and calcium in whole cow's milk are more difficult for an infant to digest and absorb than the ones in breast milk.\n\nBULLET::::- Note: Milk is generally fortified with vitamin D in the U.S. and Canada. Non-fortified milk contains only 2 IU per 3.5 oz.\n" ]
[ "Breast milk has enough nutrients to sustain an adult all through life." ]
[ "There are a lot of nutrients that are not in breast milk that the baby has a store of but needs to begin supplementing after a time. " ]
[ "false presupposition" ]
[ "Breast milk has enough nutrients to sustain an adult all through life." ]
[ "false presupposition" ]
[ "There are a lot of nutrients that are not in breast milk that the baby has a store of but needs to begin supplementing after a time. " ]
2018-00217
Where I'm from (southern US), trees shed seeds like it's raining in the fall and winter. Why, if each seed has the potential to become a new tree, aren't there sprouts basically everywhere there's grass and dirt?
At every single stage of their development, seeds, sprouts, seedlings, etc. undergo a series of filters. The dynamics change at every stage of development; some factors matter more than others at a given stage. First, some seeds are simply not viable: just as a human embryo can die before being born, seed embryos can die for internal reasons. Then we have predation. Lots of animals - mainly insects - and fungi feed on seeds; if they eat a critical area (like the embryo itself) or a critical mass, the seed won't be able to germinate. After that, there's another set of internal causes. Seeds are structures made to wait for the right conditions; some of them need several cycles of wet/dry seasons, chemical damage, or physical damage, and some even need fires, like some pine species!! If the conditions aren't met, the seed won't germinate. Now we reach the stage where the seed germinates, and here's another set of filters. The first one is establishing in a good spot, meaning access to soil, rain, and sunlight. If the seed isn't established in a good place, it will die soon. After that, there's another set of predators that want to feast on its soft, yummy structures. At this stage there's also competition: you and your hundreds of siblings are competing for access to the same resources, and the competition is fierce and close. The list goes on and on. Of the thousands of seeds a tree produces in its lifetime, only a tiny fraction will make it to adulthood; between competition for resources, predation, and internal factors, not every seed can make it. In urban and suburban areas, you also have to consider that humans are constantly removing any undesired plants.
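The "series of filters" in the comment above is just multiplication of stage-by-stage survival odds, which is why even generous odds at each stage leave almost nothing at the end. All of the probabilities below are invented for illustration:

    # Hypothetical survival probability at each filter stage for one seed crop.
    from functools import reduce

    stages = {
        "embryo is viable": 0.70,
        "escapes seed predators and fungi": 0.30,
        "dormancy conditions met, germinates": 0.20,
        "lands somewhere with soil, rain, sunlight": 0.10,
        "seedling survives predation and competition": 0.05,
    }
    overall = reduce(lambda a, b: a * b, stages.values())
    print(f"overall: {overall:.5f}")  # 0.00021, about 2 seeds in 10,000
    print(f"100,000 seeds -> ~{100_000 * overall:.0f} established seedlings")

Under these made-up numbers, a tree that drops 100,000 seeds gets roughly 21 seedlings established, which matches the intuition that sprouts are not "basically everywhere".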
[ "Seed production in natural plant populations varies widely from year to year in response to weather variables, insects and diseases, and internal cycles within the plants themselves. Over a 20-year period, for example, forests composed of loblolly pine and shortleaf pine produced from 0 to nearly 5 million sound pine seeds per hectare. Over this period, there were six bumper, five poor, and nine good seed crops, when evaluated for production of adequate seedlings for natural forest reproduction.\n\nSection::::Development.\n", "Seeds and spores can be used for reproduction (through e.g. sowing). Seeds are typically produced from sexual reproduction within a species, because genetic recombination has occurred. A plant grown from seeds may have different characteristics from its parents. Some species produce seeds that require special conditions to germinate, such as cold treatment. The seeds of many Australian plants and plants from southern Africa and the American west require smoke or fire to germinate. Some plant species, including many trees, do not produce seeds until they reach maturity, which may take many years. Seeds can be difficult to acquire and some plants do not produce seed at all. Some plants (like certain plants modified using genetic use restriction technology) may produce seed, but not fertile seed. In certain cases, this is done to prevent the accidental spreading of these plants, for example by birds and other animals.\n", "Seeds are cached on and off of the ground, depending on the season. Seeds are cached on the ground in areas with sparse vegetation and exposed, well-drained soils. Seeds are buried in the litter of dead needles and twigs, and between organic material and mineral soil. Seeds are cached close to the trunk of trees, most often on the south side where snow melts most quickly. Ground-caching stops when snow covers the ground.\n", "Not all seeds undergo a period of dormancy, many species of plants release their seeds late in the year when the soil temperature is too low for germination or when the environment is dry. If these seeds are collected and sown in an environment that is warm enough, and/or moist enough, they will germinate. Under natural conditions non dormant seeds released late in the growing season wait until spring when the soil temperature rises or in the case of seeds dispersed during dry periods until it rains and there is enough soil moisture.\n", "The phenomenon of seeds remaining dormant within the soil is well known and documented (Hills and Morris 1992). Detailed information on the role of such “seed banks” in northern Ontario, however, is extremely limited, and research is required to determine the species and abundance of seeds in the soil across a range of forest types, as well as to determine the function of the seed bank in post-disturbance vegetation dynamics. Comparison tables of seed density and diversity are presented for the boreal and deciduous forest types and the research that has been conducted is discussed. This review includes detailed discussions of: (1) seed bank dynamics, (2) physiology of seeds in a seed bank, (3) boreal and deciduous forest seed banks, (4) seed bank dynamics and succession, and (5) recommendations for initiating a seed bank study in northern Ontario.\n", "Not all seeds undergo a period of dormancy. Seeds of some mangroves are viviparous; they begin to germinate while still attached to the parent. The large, heavy root allows the seed to penetrate into the ground when it falls. 
Many garden plant seeds will germinate readily as soon as they have water and are warm enough; though their wild ancestors may have had dormancy, these cultivated plants lack it. After many generations of selective pressure by plant breeders and gardeners, dormancy has been selected out.\n", "Species growing in shaded environments tend to produce larger seeds and larger seeded species have higher seedling survivorship in low-light conditions. The increased metabolic reserves of larger seeds allows the first shoots to grow taller and leaves to grow broader more quickly in order to compete for what little sunlight is available. A few large seeded trees that occur in closed canopy wooded areas such as old-growth forests are the many oak species, hickory, pecan, and butternut trees.\n\nSection::::Selective pressures.:Drought.\n", "In some trees, like jackfruit, some citrus, and avocado, the seeds can be found already germinated while the fruit goes overripe; strictly speaking this condition cannot be described as vivipary, but the moist and humid conditions provided by the fruit mimic a wet soil that encourages germination. However, the seeds also can germinate under moist soil.\n\nSection::::Reproduction.\n\nVivipary includes reproduction via embryos, such as shoots or bulbils, as opposed to germinating externally from a dropped, dormant seed, as is usual in plants;\n", "Common causes for sprouts becoming inedible:\n\nBULLET::::- Seeds are not rinsed well enough before soaking\n\nBULLET::::- Seeds are left in standing water after the initial soaking\n\nBULLET::::- Seeds are allowed to dry out\n\nBULLET::::- Temperature is too high or too low\n\nBULLET::::- Dirty equipment\n\nBULLET::::- Insufficient air flow\n\nBULLET::::- Contaminated water source\n\nBULLET::::- Poor germination rate\n", "A seed deposited in the seed bank is initially dormant. Dormancy is broken by the cold and wet conditions of fall and winter, and so freshly deposited seeds lay dormant until at least the following spring, at which time approximately 90% of the previously dormant seeds will germinate. The rest remain dormant in the seed bank.\n", "The seeds are dispersed short distances by wind, but can travel longer distances by water, animals, and people. The vast majority of seeds (95%) are found in the top of the soil within a few meters of the parent plant. Seeds may stay alive in the seed bank for more than five years.\n", "Section::::Seed dispersal.\n", "The seeds of conifers, the largest group of gymnosperms, are enclosed in a cone and most species have seeds that are light and papery that can be blown considerable distances once free from the cone. Sometimes the seed remains in the cone for years waiting for a trigger event to liberate it. Fire stimulates release and germination of seeds of the jack pine, and also enriches the forest floor with wood ash and removes competing vegetation. Similarly, a number of angiosperms including \"Acacia cyclops\" and \"Acacia mangium\" have seeds that germinate better after exposure to high temperatures.\n", "Thermal scarification can be achieved by briefly exposing seeds to hot water, which is also known as hot water treatment. In some chaparral plant communities, some species' seeds require fire and/or smoke to achieve germination. 
An exception to that phenomenon is Western poison oak, whose thick seed coatings provide a time-delayed effect for germination but do not require fire scarification.\n\nRegardless of the method, scarified seeds do not store well and need to be planted quickly, lest they become unviable.\n\nSection::::Common uses.\n", "Strict categorization, however, is not possible for some complex species interactions. For example, seed germination and survival in harsh environments are often higher under so-called nurse plants than on open ground. A nurse plant is one with an established canopy, beneath which germination and survival are more likely due to increased shade, soil moisture, and nutrients. Thus, the relationship between seedlings and their nurse plants is commensal. However, as the seedlings grow into established plants, they are likely to compete with their former benefactors for resources.\n\nSection::::Mechanisms.\n", "The absence of a soil seed bank impedes the establishment of vegetation during primary succession, while the presence of a well-stocked soil seed bank permits rapid development of species-rich ecosystems during secondary succession.\n\nSection::::Population densities and diversity.\n\nThe mortality of seeds in the soil is one of the key factors in the persistence and density fluctuations of plant populations, especially for annual plants. Studies on the genetic structure of \"Androsace septentrionalis\" populations in the seed bank compared to those of established plants showed that diversity within populations is higher below ground than above ground.\n", "\"Acacia oncinocarpa\" and \"Eucalyptus miniata\", for example, and perennial herbs all have adaptive mechanisms that enable them to live in fire-prone areas of Australia. Both the acacia (a small spreading shrub) and the eucalyptus (an overstorey tree) can regenerate from seed and vegetatively produce new shoots from buds that escape fire. Reproduction and seed fall occur during the eight dry months. Due to the area's frequent fires, the seeds are usually released onto a recently burnt seed bed.\n", "Section::::Environmental significance.\n\nSoil seed banks play an important role in the natural environment of many ecosystems. For example, the rapid re-vegetation of sites disturbed by wildfire, catastrophic weather, agricultural operations, and timber harvesting is largely due to the soil seed bank. Forest ecosystems and wetlands contain a number of specialized plant species that form persistent soil seed banks.\n\nBefore the advent of herbicides, \"Papaver rhoeas\", a good example of a persistent seed bank species, was sometimes so abundant in agricultural fields in Europe that it could be mistaken for a crop.\n", "Section::::Cold stratification.\n\nCold stratification is the process of subjecting seeds to both cold and moist conditions. Seeds of many trees, shrubs, and perennials require these conditions before germination will ensue.\n\nSection::::Cold stratification.:In the wild.\n", "For annuals, seeds are a way for the species to survive dry or cold seasons. Ephemeral plants are usually annuals that can go from seed to seed in as few as six weeks.\n\nSection::::Germination.\n\nSeed germination is the process by which a seed embryo develops into a seedling. It involves the reactivation of the metabolic pathways that lead to growth and the emergence of the radicle (seed root) and plumule (shoot). 
The emergence of the seedling above the soil surface is the next phase of the plant's growth and is called seedling establishment.\n", "Canopy seed bank\n\nA canopy seed bank or aerial seed bank is the aggregate of viable seed stored by a plant in its canopy. Canopy seed banks occur in plants that postpone seed release for some reason.\n\nIt is often associated with serotiny, the tendency of some plants to store seed in a cone (e.g. in the genus \"Pinus\") or woody fruits (e.g. in the genus \"Banksia\") until seed release is triggered by the passage of a wildfire.\n", "Seed germination depends on both internal and external conditions. The most important external factors include the right temperature, water, oxygen or air, and sometimes light or darkness. Different plants require different conditions for successful seed germination. Often this depends on the individual seed variety and is closely linked to the ecological conditions of a plant's natural habitat. For some seeds, the future germination response is affected by environmental conditions during seed formation; most often these responses are types of seed dormancy.\n", "A set of conditions must be met in order for long-term seed storage to be evolutionarily viable for a plant:\n\nBULLET::::- The plant must be phylogenetically able (pre-adapted) to develop the necessary reproductive structures\n\nBULLET::::- The seeds must remain viable until cued to release\n\nBULLET::::- Seed release must be cued by a trigger that indicates environmental conditions favorable to germination\n\nBULLET::::- The cue must occur on an average timescale that is within the reproductive lifespan of the plant\n\nBULLET::::- The plant must have the capacity and opportunity to produce enough seeds prior to release to ensure population replacement\n", "Care must be taken, as training materials regarding seed production, cleaning, storage, and maintenance often focus on making landraces more uniform, distinct, and stable (usually for commercial application), which can result in the loss of valuable adaptive traits unique to local varieties.\n\nAdditionally, the localized nature of varieties must be considered.\n\nAt high latitudes in both the northern and southern hemispheres, there is a pronounced seasonal change with a cooler winter. Many plants go to seed and then go dormant. These seeds must lie dormant until the following spring.\n\nSection::::Open pollination.\n", "There are indications that mutations are more important for species forming a persistent seed bank than for those with only transient seeds. The increase in species richness in a plant community due to a species-rich and abundant soil seed bank is known as the \"storage effect\".\n" ]
[ "There should be a ton of tree seedlings because of how many seeds there are." ]
[ "Not all seeds will become tree seedlings due to environmental factors like genetics, predation, damage, etc." ]
[ "false presupposition" ]
[ "There should be a ton of tree seedlings because of how many seeds there are." ]
[ "false presupposition" ]
[ "Not all seeds will become tree seedlings due to environmental factors like genetics, predation, damage, etc." ]