source_id | question | response | metadata |
---|---|---|---|
3,035 | I got this question after reading this: http://worldradio.ch/wrs/news/wrsnews/basel-study-shows-positive-side-of-depression.shtml?24427 When a person is depressed, is it because the brain is malfunctioning, or is it just a natural reaction to sadness? Today you say to a doctor that you are sad, and he prescribes you an antidepressant without asking further questions. Instead, shouldn't depression be considered a natural reaction from the brain and be treated only in special cases where it is a symptom instead of the main disease (bipolar disorder, patient getting suicidal, ...)? By a natural condition I point to the Basel study saying that people get more analytical during a depression. It is the way the brain has found to protect the person from further sadness. | I think your definition of depression as "sad" is a misunderstanding on your part. Depression is clearly defined in DSM-IV . Depression that meets the DSM-IV criteria for a depressive disorder. The term is usually used to denote depression that is not a normal, temporary mood caused by life events or grieving. DSM describes symptoms and does not discuss the causes of the disorders . DSM-IV designates the 4th edition. Issued in 1993, DSM-IV is currently the latest edition (as of 2001). A better definition would be: Chronic Depressive Disorder (Dysthymia) A. Depressed mood for most of the day, for more days than not, as indicated either by subjective account or observation by others, for at least 2 years. Note: In children and adolescents, mood can be irritable and duration must be at least 1 year. B. Presence, while depressed, of two (or more) of the following: Poor appetite or overeating Insomnia or hypersomnia Low energy or fatigue Low self-esteem Poor concentration or difficulty making decisions Feelings of hopelessness C. During the 2-year period (1 year for children or adolescents) of the disturbance, the person has never been without the symptoms in Criteria A and B for more than 2 months at a time. D. The disturbance does not occur exclusively during the course of a chronic Psychotic Disorder, such as Schizophrenia or Delusional Disorder. G. The symptoms are not due to the direct physiological effects of a substance (e.g., a drug of abuse, a medication) or a general medical condition (e.g., hypothyroidism). H. The symptoms cause clinically significant distress or impairment in social, occupational, or other important areas of functioning. Once you clear that definition up, there are of course unethical doctors who may not perform a full diagnosis and just prescribe medicine that has been shown to work on the surface symptoms. So in answer to your question, yes, it is a disease. (Although, keep in mind that a layman may think of a disease as being caused by pathogens, as opposed to a chemical imbalance. This is a limitation of layman understanding.) | {
"source": [
"https://skeptics.stackexchange.com/questions/3035",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2333/"
]
} |
3,040 | If there is anything one takes away about US prisons from television and movies, it is that there is a really high chance of you getting raped in the shower or in some other dark place. Now, I don't know that much about US prisons, but it always struck me as a bit drastic. Is rape a common occurrence in US prisons? If so, how is this possible? Do the guards look the other way? Is supervision incomplete? | I think it's important to compare what you'd expect in the general population against what you'd expect in prisons. According to the Twelfth United Nations Survey of Crime Trends and Operations of Criminal Justice Systems, the incidence of rape in the US was 28.6 per 100,000 in 2008 (on a steady decline from 31.6 in 2003). I came upon an article on reason.com reporting that the DOJ (U.S. Department of Justice) has recently attempted to estimate the amount of sexual abuse in prisons in America. I've emailed the reporter asking for a pointer to the actual report, but the reason.com article gives these statistics that I found interesting: The U.S. Department of Justice recently released its first-ever estimate of the number of inmates who are sexually abused in America each year. According to the department’s data, which are based on nationwide surveys of prison and jail inmates as well as young people in juvenile detention centers, at least 216,600 inmates were victimized in 2008 alone. Contrary to popular belief, most of the perpetrators were not other prisoners but staff members—corrections officials whose job it is to keep inmates safe. On average, each victim was abused between three and five times over the course of the year. The vast majority were too fearful of reprisals to seek help or file a formal complaint. However, you shouldn't assume that people who work in prisons are abusive; even going by accusations, it's a small fraction of the total. A DOJ report from September 2009 reports that 4.7% of all staff members under the BOP (Federal Bureau of Prisons) have been accused of sexual assault; it doesn't, however, seem to break it down into categories describing the nature of the abuse. On the bright side, I did find out that Congress attempted to do something about prison rape back in 2003: The Prison Rape Elimination Act of 2003 directs prison officials to make the prevention of sexual abuse in prisons a top management priority. The Prison Rape Elimination Act defines “prisons” broadly to include not only federal and state prisons and local jails, but also short-term lockups such as cellblocks and other holding facilities regardless of their size. This is a placeholder until I can find the DOJ statistics. | {
"source": [
"https://skeptics.stackexchange.com/questions/3040",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1183/"
]
} |
3,049 | I think most of us are familiar with this feeling. You can tell someone (who stands behind your back) is staring at you without any physical evidence. Is it possible, or is it just a matter of coincidence? What is this phenomenon called? Was it ever proven or disproven? | There are some (in)famous experiments done by Rupert Sheldrake, who claims that a so-called "morphogenetic" field is responsible for this sort of thing. Alas, his experiments had quite sloppy methodology. The feeling itself is real, as most here will testify. But it has nothing to do with being actually stared at/observed. http://www.scientificamerican.com/article.cfm?id=ruperts-resonance Apart from some technical problems with Sheldrake's experiment, here is a partial explanation for why some people really believe they can feel when they are stared at: Second, psychologists attribute anecdotal accounts of this sense to a reverse self-fulfilling effect: a person suspects being stared at and turns to check; such head movement catches the eyes of would-be starers, who then turn to look at the staree, who thereby confirms the feeling of being stared at.
[...]
When University of Hertfordshire psychologist Richard Wiseman also attempted to replicate Sheldrake's research, he found that subjects detected stares at rates no better than chance. So the conclusion is no , people cannot really feel somebody is watching them. | {
"source": [
"https://skeptics.stackexchange.com/questions/3049",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2566/"
]
} |
3,099 | Is there any scientific truth to the common maxim "opposites attract" when applied to romantic relationships? I'm not necessarily looking for one blanket answer, though an overall trend or pattern would be helpful as part of a more complete answer. Here are some related questions that you might consider: Are there certain "opposite" personality traits (e.g. introversion vs. extraversion, rational vs. emotional decision making, etc.) that are more likely to "attract" than others? Are there any similar patterns regarding physical characteristics (e.g. height, hair color, athletic ability, etc.)? Is there any data on the difference between or sameness of people affecting the long-term success of committed relationships? An evolutionary perspective would also be welcome. For example, is there any data that suggests that a tendency towards genetic diversity might drive such behavior (if in fact, it does exist)? If you believe this question is too broad or needs to be clarified in any way, please make suggestions in the comments, and I will address them promptly. Thanks! | The answer to your question is, for the most part, no : Opposites do not attract.
Like-minded people attract ; that is, the relationships they form are fuller and longer-lasting
than when paired with an opposite-minded partner. This is despite the fact that people often claim,
in surveys and when asked directly by interviewers, that they would like someone with personality
characteristics different from their own; revealed preference and lived experience show that they
really don't know what they want. The article, Do People Know What They Want: A Similar or Complementary
Partner? , by a certain Pieternel Dijkstra,
reviews the hypotheses, and the research, better than I could myself: With regard to [] “relative” mate preferences two hypotheses have been presented. First, according
to the “similarity-attraction hypothesis” individuals feel most attracted to potential partners who,
in important domains, are similar to themselves (e.g., Lucas, Wendorf, and Imamoglu, 2004). Similar
individuals are assumed to be attractive because they validate our beliefs about the world and
ourselves and reduce the risk of conflicts (e.g., Morry and Gaines, 2005). Not surprisingly
therefore, similarity between partners contributes to relationship satisfaction (e.g., Lutz-Zois,
Bradley, Mihalik, and Moorman-Eavers, 2006). Because a happy and long-lasting intimate relationship
contributes to both psychological and physical health (e.g., Berkman and Syme, 1994), similarity
between partners increases their own and their offspring’s chances of survival by helping maintain
(the quality of) the pair bond. In contrast, according to the “complementarity hypothesis” individuals feel most attracted to
potential partners who complement them, an assumption that reflects the saying that “opposites
attract” (e.g., Antill, 1983). Complementary individuals are assumed to be so attractive because
they enhance the likelihood that one’s needs will be gratified (e.g., De Raad and Doddema-Winsemius,
1992). For example, young women who lack economic resources may feel attracted to older men who
have acquired economic resources and therefore may be good providers (Eagly and Wood, 1999). In
addition, from an evolutionary perspective, one might argue that seeking a complementary mate,
rather than a similar one, may help prevent inbreeding. Ultimately, this is the punchline (emphasis mine): Studies on mate selection have consistently found support for the “similarity attraction”
hypothesis. Homogamy has been reported for numerous characteristics such as physical attractiveness
(e.g., White, 1980), attachment style (e.g., Klohnen and Luo, 2003), political and religious
attitudes (e.g., Luo and Klohnen, 2005), socio-economic background, level of education and IQ (e.g.,
Bouchard and McGue, 1981). In contrast, support for the “complementarity hypothesis” is much
scarcer. Although many individuals occasionally feel attracted to “opposites”, attractions between
opposites often do not develop into serious intimate relationships and, when they do, these
relationships often end prematurely (Felmlee, 2001). There is a particular area where people often do seek partners different from themselves, one that's hot in the field of evolutionary psychology and is extensively researched — that would be the major
histocompatibility complex , or MHC, in human mate choice. The MHC is a genomic region which codes for
protein receptors, MHC proteins, which are heavily involved in the immune system and autoimmunity;
the body uses them as antigens so that T cells and NK-cells, our immune system's "policemen", can
recognize foreign elements and differentiate them from "self". To protect against the great
diversity of bacterial and viral invaders, the MHC genome region needs to be highly polymorphic;
that means inbreeding at those loci (by mating with a "more-alike" partner) would be deleterious and
evolutionarily disfavored because inbreeding homogenizes alleles, which in turn means that mating with an "alike" individual would decrease their offspring's ability to recognize and detect pathogens. Thus, research often finds that humans favor
dissimilar MHC alleles in their mates, which some researchers have hypothesized is mediated through
olfaction. Here is the abstract of one such study finding these preferences: Preferences for mates that possess genes dissimilar to one's own at the major histocompatibility
complex (MHC), a polymorphic group of loci associated with the immune system, have been found in
mice, birds, fish, and humans. These preferences may help individuals choose genetically compatible
mates and may adaptively function to prevent inbreeding or to increase heterozygosity and thereby
immunocompetence of offspring. MHC-dissimilar mate preferences may influence the psychology of
sexual attraction. We investigated whether MHC similarity among romantically involved couples (N =
48) predicted aspects of their sexual relationship. All women in our sample normally ovulated, and
alleles at three MHC loci were typed for each person. As the proportion of MHC alleles couples
shared increased, women's sexual responsivity to their partners decreased, their number of extrapair
sexual partners increased, and their attraction to men other than their primary partners increased,
particularly during the fertile phase of their cycles. Other than that, I can't think of anything else; the rule is pretty much, like attracts like . | {
"source": [
"https://skeptics.stackexchange.com/questions/3099",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2243/"
]
} |
3,131 | A well-reputed professor of neurology once mentioned to me that no drugs have been invented to cure human afflictions since the 1950's or 60's. Are there any drugs that have been invented since that time that are permanent cures of human ailments? This is somewhat related to: Is drug development far cheaper than Big Pharma wants us to believe? | That is an incorrect statement, if we're strict. Bone marrow transplant to cure childhood lymphoma is a good example of a cure developed after the 50's: http://www.fhcrc.org/science/clinical/ltfu/faqs/transplantation.html . The major breakthrough was to obtain successful allogeneic grafts, which was not possible before the late 60's-early 70's. There's a classic review in the New England Journal of Medicine of 1975 talking about these advances: http://www.nejm.org/doi/full/10.1056/NEJM197504172921605 (part 1) and http://www.nejm.org/doi/full/10.1056/NEJM197504242921706 (part 2). Also, you might want to look at Donnall Thomas' 1990 Nobel Laureate lecture: http://nobelprize.org/nobel_prizes/medicine/laureates/1990/thomas-lecture.pdf Stent grafts to restore blood flow and prevent (diminish the chance of) restenosis. See http://www.fauchard.org/history/articles/jdh/v49n2_July01/charles_stent_49_2.html for the origin of the word stent and its many applications, and http://web.mit.edu/invent/iow/palmaz.html for endovascular stents in particular. If you're looking for drug examples, Imatinib (Gleevec), for treating leukemia, fits the description. http://www.time.com/time/covers/0,16641,20010528,00.html Abciximab (Reopro) saved a huge number of lives after heart attacks. http://www.ncbi.nlm.nih.gov/pubmed/10155090 Statins to reduce cholesterol levels (even with their abuse and the lack of concurrent diets). http://www.nature.com/nrd/journal/v2/n7/fig_tab/nrd1112_I1.html Combination therapies like HAART against AIDS are another. You could argue that it does not "cure" AIDS, but the vast increase in life expectancy with it makes it worth being on the list. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1716971/ That said, it is true that the pace at which cures have been achieved has diminished considerably. We could hypothesise for a long time about why this is happening, including (but clearly not limited to) stricter regulations and higher costs. | {
"source": [
"https://skeptics.stackexchange.com/questions/3131",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1792/"
]
} |
3,132 | Welcome to installment #2 of my "the science of animals falling" series of questions.... I've heard it said many times since childhood that if you are to find a baby bird on the ground which appears to have fallen from the nest you should not pick it up and put it back. I was told that once the baby bird has your scent on it, the mother will not take it back. It's possible that this is just something parents tell their kids to keep them from touching birds which are notorious for carrying germs (as are kids), but it is a very commonly held belief in America. Is there any scientific evidence to
explain why this would happen? Has this been studied? | From Fortean Times : Birds have little or no sense of
smell , and will be unaware of your
molestation. Besides, they will not
lightly abandon their offspring. From National Geographic : " Most birds have a poorly developed
sense of smell ," says Michael Mace,
bird curator at San Diego Zoo's Wild
Animal Park. " They won't notice a
human scent ." One exception: vultures,
who sniff out dead animals for dinner.
But you wouldn't want to mess with a
vulture anyway! Snopes also debunks it: Mother birds will not reject their babies because they smell human scent on them, nor will they refuse to [sit] on eggs that have been handled by a person. Many birds have a limited sense of smell and cannot detect human scent, or if they can detect it, do not react to it. | {
"source": [
"https://skeptics.stackexchange.com/questions/3132",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/486/"
]
} |
3,167 | Some people say that holes in latex are large enough for the AIDS-causing HIV and chlamydia to pass through, so that condoms do not protect against these STDs. Others dispute this statement and consider condoms to be effective in blocking HIV transmission. What is the typical size of holes in latex and how does it compare to the size of viruses? Are viruses like HIV able to get through these holes? | Concerning the question of whether the HIV virus can pass through condoms, the answer appears to depend on the type and condition of the condom. The first question is, what kind of condoms? It seems that the available internet literature readily acknowledges that: “Condoms manufactured from latex are
the most popular, and studies
conducted on the ability of condoms to
prevent the transmission of STDs and
HIV most often involve latex condoms.
Condoms manufactured from lambskin,
also known as "natural skin," or
"natural membrane," are made from the
intestinal lining of lambs. While
these condoms can prevent pregnancy,
they contain small pores that may
permit passage of some STDs, including
HIV, the hepatitis B virus, and the
herpes simplex virus.” These kinds of condoms were not widely available at one time. Users would be well advised to recognize this point in making their condom selection if the interest is to avoid the transmission of sexually transmitted diseases (STDs), including AIDS. Incidentally, natural membrane condoms presumably make dandy water balloons. Likewise, rubber gloves are “water tight” but HIV can pass through the pores in rubber gloves, which is why the latex used for condoms is manufactured to more rigorous specifications. According to Straight Dope : I'll say. Your clip is a 1992 letter
to the editor from Mike Roland, editor
of Rubber Chemistry and Technology, a
publication of the American Chemical
Society. Roland argued that "the
rubber comprising latex condoms has
intrinsic voids [pores] about 5
microns (0.00002 inches) in size.
Since this is roughly 10 times smaller
than sperm, the latter are effectively
blocked.... Contrarily, the AIDS virus
is only 0.1 micron (4 millionths of an
inch) in size. Since this is a factor
of 50 smaller than the voids inherent
in rubber, the virus can readily pass
through." This sounds scary, but there are a
couple problems with it. First, Roland
bases his statement about a 5 micron
latex pore size on a study of rubber
gloves, not condoms. The U.S. Public
Health Service says that condoms are
manufactured to higher standards than
gloves. Condoms are dipped in the
latex twice, gloves only once. If just
4 out of 1,000 condoms fail the leak
test, the whole batch is rejected; the
standard for gloves is 40 out of
1,000. A study of latex condoms by the
National Institutes of Health using an
electron microscope found no holes at
a magnification of 2000. This seems to be the an article on the original report on holes in latex gloves. So, the point that HIV molecules are larger than water is a “red herring” with respect to determining whether condoms are useful to preventing the transmission of HIV. Second, and a more precise question is, can HIV pass through latex condoms? As the Straight Dope quote indicates, there was an FDA report indicating that under extreme test conditions – certainly unlikely to replicated in actual performance - HIV viruses were found to have passed through latex condoms. Internet literature from AIDS information sites - that do not seem to have an "anti-condom" agenda - seem to agree that the “pores” in latex condoms are approximately .5 microns in size, whereas the HIV virus size is .1 microns. See this Arizona health site for corroboration . There seems to be some dispute as to whether infection can occur through the virus alone. According to Straight Dope : As for the substantive issue you
raise, it's true "the transmission of
HIV by genital fluids most probably
occurs through virus-infected cells
since they can be present in larger
numbers than free virus in the body
fluids" (Jay Levy, "Pathogenesis of
Human Immunodeficiency Virus
Infection," Microbiological Reviews,
March 1993--an exhaustive treatment of
the subject). But it would be wrong to
construe this to mean that HIV is
transmitted only by cells. When I
spoke to Dr. Levy he readily conceded
that HIV may be transmitted by free
virus as well. He did add that the
viscosity of semen may hinder the
passage of such virus through the
latex barrier. If this information is outdated it would be nice to know. A lot of internet sources quote the “factoid” that condoms have “pores” of .5 microns in size. I suspect from my efforts to chase down the source of this information that it may come from the 1992 report based on an examination of latex gloves noted above. This site directly addresses the “pore hypothesis” and concludes that the double layers of latex in condoms prevents the formation of holes or pores that go through the entire condoms. That seems to be the best explanation for the anomaly of HIV not passing through condoms like "bullets through a netting." So, the answer seems to be that outside of artificially created circumstances and assuming properly manufactured, non-defective, non-deteriorated condoms, the HIV virus will not pass through latex condoms. Third, an even more precise question, is whether HIV can pass through condoms under ordinary usage? A caveat to arguments about the effectiveness of condoms is always that they have to be used properly and invariably. Proper usage involves more than mechanics. Health sites often contain warnings that deterioration, and opening up condom practices with teeth or nails, can introduce tears into the condoms. For example, Health Communities.com states : Condoms should be purchased from a
source that can guarantee product
reliability and freshness. Heat,
pressure, and age can break down
latex. Condoms should not be used more
than 5 years after the manufacture
date. If the condom looks deteriorated
or discolored, or feels sticky or
brittle, it should be discarded. If
the packaging is torn or damaged, the
condoms should not be used. Condoms are easily torn if they are
handled roughly or with sharp
fingernails, so care should be taken
while putting them on and taking them
off. Petroleum or oil-based lubricants
(e.g., Vaseline, baby oil) can break
down latex and should not be used.
Water-based lubricants (e.g., KY
Jelly) should be used and are usually
labeled "For use with latex condoms or
diaphragms." Hence, fresh out of the box condoms provide a level of protection that may not be found in one left in a wallet or the glove compartment. Fourth, even if all things go right, are condoms always effective? The answer is clearly “no,” as suggested by the 4 out of 1,000 flaw rate mentioned above. According to this site : Generally, the condom's effectiveness
at preventing HIV transmission is
estimated to be 87%, but it may be as
low as 60% or as high as 96%.
Conclusions: Consistent use of condoms
provides protection from HIV. The
level of protection approximates 87%,
with a range depending upon the
incidence among condom nonusers. Thus,
the condom's efficacy at reducing
heterosexual transmission may be
comparable to or slightly lower than
its effectiveness at preventing
pregnancy. Family Planning
Perspectives, 1999, 31(6):272-279 Condoms are clearly effective in decreasing the odds of being infected, but clearly they are not absolutely effective. It appears, though, that the risk of infection from an HIV virus making its way through a non-defective, non-compromised condom is de minimis. One, however, should not be entirely sanguine about the effectiveness of condoms under all circumstances. | {
"source": [
"https://skeptics.stackexchange.com/questions/3167",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/673/"
]
} |
3,191 | My girlfriend sent me this article from the Telegraph . This is an excerpt: Led by two academics at Oxford University, the £1.9 million study found that human thought processes were “rooted” to religious concepts. Not having ever been religious myself, I am very skeptical of these claims. Please read the article for more details on the claim. Besides me thinking that it's an incorrect claim, I also don't think that it is actually possible to produce a reliable scientific study on religious belief and genetics—so the studies must be misquoted somehow or not scientifically based. Does this piece of news correspond to
the findings of the studies? Belief in God is part of human nature - The Telegraph Religious belief is human nature, huge new study claims - CNN All correspond to the findings of a press release... Humans 'predisposed' to believe in gods and the afterlife - University of Oxford ...and to any interviews given by the two academics from Oxford University that led the studies. Are the studies scientific or are they
philosophical essays? Philosophical essays. The Cognition, Religion, and Theology Project Funding source: John Templeton Foundation Grant Amount: $3,876,247 Start Date: October 2007 Our Philosophy Grantmaking The division of labor and increasing
specialization in most fields mean
that some of the most interesting,
difficult, and profound questions do
not get addressed. We try to give
great minds the space and resources to
stretch their imaginations. We want to
work with contrarians, with
intellectual entrepreneurs. - source Project Leader(s) Justin L. Barrett , Senior Researcher Institute for Cognitive and Evolutionary Anthropology [ICEA] Institute of Social and Cultural Anthropology [ISCA], University of Oxford Roger Trigg , Senior Research Fellow The Ian Ramsey Centre for Science and Religion , University of Oxford Project goals. The overarching goal of the project is
to support scientific research that
promises to yield new evidence
regarding how the structures of human
minds inform and constrain religious
expression. The project will conduct
research on the cognitive
underpinnings of religious concepts
and practices – for example, ideas
about gods and spirits, the afterlife,
spirit possession, prayer, ritual,
religious expertise, and connections
between religious thought and morality
and pro-social behavior. - source The Science... Cognitive Science of Religion (CSR) First mentioned in Towards a Cognitive Science of Religion by E. Thomas Lawson , 2000. International Association for the Cognitive Science of Religion (IACSR) founded in 2006. ...CSR’s ability to bridge the gap
between strictly evolutionary or
biological treatments of religion and
strictly social approaches. Evidently,
however, the issues addressed by this
field are gaining momentum in the
public sphere in part because of the
anti-religious rhetoric that has come
to parasitize the field. We aim to
harness this momentum and attention to
maximize the scientific potential of
CSR, and to engage theological and
philosophical perspectives in a
potentially mutually productive,
instead of antagonistic, manner, pursuing truth wherever the evidence
leads . - Project website Main findings of the Cognition, Religion and Theology Project Studies by Emily Reed Burdett and Justin Barrett ...press release text. The cognitive science of
religion. Barrett, Burdett. Deborah Kelemen from Boston
University finds ...press release text. The Human Function Compunction: Teleological explanation in adults. Kelemen, Rosset. 2009 Are Children ‘Intuitive Theists’? Kelemen, 2003. Experiments involving
adults ...press release text. The cognitive psychology of belief in the supernatural. Bering, 2006. Reasoning about dead agents reveals possible adaptive trends. Bering, et al. 2005. The development of ‘afterlife’ beliefs in secularly and religiously schooled children. Bering, et al. 2005. The Cognition, Religion and Theology Project's interpretation of the main findings From the press release... The studies (both analytical and
empirical) conclude that humans are
predisposed to believe in gods and an
afterlife, and that both theology and
atheism are reasoned responses to what
is a basic impulse of the human mind. ‘This project does not set out to
prove god or gods exist. Just because
we find it easier to think in a
particular way does not mean that it
is true in fact. If we look at why
religious beliefs and practices
persist in societies across the world,
we conclude that individuals bound by
religious ties might be more likely to
cooperate as societies. Interestingly,
we found that religion is less likely
to thrive in populations living in
cities in developed nations where
there is already a strong social
support network.’ - Project Director Justin Barrett, Ph.D. ‘This project suggests that religion
is not just something for a peculiar
few to do on Sundays instead of
playing golf. We have gathered a body
of evidence that suggests that
religion is a common fact of human
nature across different societies.
This suggests that attempts to
suppress religion are likely to be
short-lived as human thought seems to
be rooted to religious concepts, such
as the existence of supernatural
agents or gods, and the possibility of
an afterlife or pre-life.’ - Project
Co-Director Professor Roger Trigg Reality The science does not support the conclusion. Given Dr. Barrett knows he is... ...an observant Christian who
believes in “an all-knowing,
all-powerful, perfectly good God who
brought the universe into being,” as
he wrote in an e-mail message. “I
believe that the purpose for people is
to love God and love each other.” - nytimes He must also know this increases the chances his research could be skewed by Confirmation bias . These intriguing findings would
certainly be strengthened by
replications with additional stimuli
sets, alternative methods, and with
different cultural populations. As
they stand, they suggest one possible
cognitive reason for the culturally
widespread existence of religious
beliefs in deities that either order
or create the natural world: such
ideas resonate with an early
developing and persistent intuition
that the
natural world looks
purposefully designed. Positing a
designer (or designers) fits with our
intuitions. - Barrett There is also the problem of Biased interpretation : We are moral realists. Gods, by virtue
of having access to the facts of any
matter, also know the moral facts of
the matter, and (perhaps not
surprisingly) tend to see things the
way we do. Theists, then, can glibly
accept moral realism. Not so for the
atheist. Atheists may have
approximately the same moral
intuitions and behave just as morally
as theists, but have some intellectual
work to do that the theist has managed
to avoid by relying on the authority
of the gods. Atheists have this extra
work to do in the moral domain, but
that does not mean that it cannot be
done. - Barrett And good old-fashioned Demonization : Refusing to accept that, in principle,
science could ever allow space for
non-material, even theistic,
explanations demands philosophical
argument, not an assertion of the
supremacy of science. The obscurantist
refusal to allow the theory of
Intelligent Design to be even
discussed in a scientific context can
only be the product of a
deeply-ingrained materialism, even
atheism. - The Religious Roots of Science , Roger Trigg. The Bottom Line... The Essence of the Skeptical Position*. (edited for brevity) Extraordinary claims demand extraordinary evidence. The burden of proof lies with the claimant. The claim stands or falls on the quality of the evidence the proponent
can provide. To be taken seriously, claims must be testable, at least in principle. Claims must be falsifiable. The evidence must be public and accessible to all competent critics. Science is a public activity based on trust . Failed on all counts. * Distinguishing Science from Pseudoscience , Beyerstein Misc... Journal of Cognition and Culture , ED: E. Lawson and Pascal Boyer. Book editor: Justin L. Barrett. Cognitive science gaining ground in U.S. academic religion studies | {
"source": [
"https://skeptics.stackexchange.com/questions/3191",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/96/"
]
} |
3,219 | I have friends who are very picky when it comes to MP3 bitrate, and will always look for the 320 kbps version of a file. However, I have never noticed any differences; they sound the same to me. I remember reading somewhere, can't remember where, that the human ear is simply incapable of sensing the difference, even if present. Can anyone shed some light on the 192 vs. 320 kbps issue? | Here is one surprising result, from an experiment described in Maximum PC's article " Do Higher MP3 Bit Rates Really Pay Off? ": Its conclusion: [No other] Maximum PC Challenge has ever surprised us as much as this one. It’s downright humiliating, in fact, that in many cases, we were unable to tell the difference between an uncompressed track and one encoded at 160Kb/s, the bit rate most of us considered the absolute minimum acceptable for even portable players. Some follow-up testing confirmed our suspicions: variable bit rate encoding makes a tremendous difference in the audio quality results, certainly enough to justify—many times over—the slight file size increase. Capping the bit rate at 160Kb/s in MP3 files can be pretty harsh on a track, but allowing the bit rate to wander upwards during more complex passages—as variable bit rate encoding does—and throttle down during quieter sections captures an astonishing amount of complexity while keeping file sizes down to an impressive minimum. I myself took a similar test and failed as much as I succeeded in identifying which track was which (160 vs 320), a result which is no better than random guessing. I can hear a very slight difference most of the time between LAME-encoded (--alt-preset standard*) MP3 files and CD audio, but only on an expensive system with terrific speakers in a quiet room. For earbuds and car listening it doesn't really seem to matter. The biggest difference seems to be not in 160 vs. 320 but CBR vs. VBR. * "Current consensus is that settings "--alt-preset standard" are recommended for most cases. This results in very high quality VBR MP3s, giving you bitrates around 200kbps, depending heavily on the music. Mellow rap can go much lower and loud heavy metal can result in higher bitrates. The quality will always remain very high." — cd-rw.org Addendum 7 years later Rick Beato has a great video on this topic, which I just discovered on YouTube: Audiophile or Audio-Fooled . | {
"source": [
"https://skeptics.stackexchange.com/questions/3219",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1781/"
]
} |
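An easy way to run the kind of blind comparison described in the answer above is to encode one source file at the bitrates in question and shuffle the results before listening. The sketch below is only illustrative: it assumes the `lame` command-line encoder is installed, and `input.wav` plus the output filenames are placeholders, not anything prescribed by the answer.

```python
import random
import shutil
import subprocess

# Encode the same source at the settings discussed above.
# Assumes the `lame` CLI is on PATH; "input.wav" is a placeholder filename.
settings = {
    "cbr160": ["-b", "160"],          # constant bitrate, 160 kbps
    "cbr320": ["-b", "320"],          # constant bitrate, 320 kbps
    "vbr": ["--preset", "standard"],  # VBR preset, successor to --alt-preset standard
}

for name, flags in settings.items():
    subprocess.run(["lame", *flags, "input.wav", f"{name}.mp3"], check=True)

# Copy the encodes to anonymized names so listening is blind,
# keeping the mapping in a separate file to check afterwards.
names = list(settings)
random.shuffle(names)
with open("answer_key.txt", "w") as key:
    for i, name in enumerate(names, 1):
        shutil.copy(f"{name}.mp3", f"track_{i}.mp3")
        key.write(f"track_{i}.mp3 = {name}\n")
```

This informal shuffle is not a rigorous ABX protocol; for a statistically meaningful result you would repeat randomized trials many times. It is, however, enough to keep you from knowing which bitrate is playing while you listen.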
3,246 | France is commonly made fun of for not having won a war, for instance when they rejoined NATO and the Daily Mail made fun of them: Why did the French celebrate their World Cup in 1998 so wildly? It was the first time they won anything without outside help. Why are the French afraid of war? You would be, too, if you had never won one. Are there any examples of clear French victories in war? A good example would be where France conquered another nation or a large area of land without immediately losing it, or gained a ceasefire/surrender through skill or force. Wars of alliances are fine as long as France is the main contributor and the military leader, similar to how America is sometimes joined by much smaller contributions by other nations. The French derived their name from the Franks, who existed around 700 AD, so that seems like a good starting point unless anyone feels like arguing that point. | First of all, a concise yes/no answer heavily depends on: What the timeframe is. Do you include pre-Roman Gauls? Frankish kings? The medieval period? The post-Westphalian nation state only? The modern era only? (e.g. post late 18th century) How you define "French". This is somewhat tied in with the timeframe. Do you include only the post-Westphalian-sovereignty nation state? Do you include decidedly non-French nationals leading French armies? Also, do you include wars where France was part of a winning alliance? And where do you draw the line? (on a spectrum from the Crimean War to WWII) How you define " war ". Do you include only conflict among nation states? Or do you include "unfair" conflicts such as a colonial war against poorly armed militia? Technically the latter should be included - à la guerre comme à la guerre ;), but the deeper philosophical root of the original claim would not really be in tune with an answer that said "lost all wars except against this poorly equipped 10,000-strong rebel force" (that's like asking "did this boxer win any fights" and the answer is "Yes, if you include one with a 10-year-old when he was 18" :) Does winning a major battle count if the overall war was lost? Does winning a single war count if it was part of a coherent series of wars that were lost overall (the latter especially applies to the Napoleonic era)? How you define " winning ". Do you include cases where most of the war was fought by other powers? Nominally, the French were part of the side that won WWII. How much that was attributable to French martial efforts is a different story. Do you include a war that concluded in - effectively - a draw judging by the results of the war? Thus, depending on your definitions: NO , the French never won a war against another major nation-state "without outside help" since 1648 (when the concept of nation states came into existence at the end of the Thirty Years' War and the Peace of Westphalia). YES , the French won a "war" single-handedly between 1648 and 1860, if you count Napoleon's wins. Napoleon Bonaparte won several sub-wars that were part of the Napoleonic wars . But , strictly speaking, they shouldn't be counted because the Napoleonic wars as a whole were a loss for France in the end. YES , the French won at least one major war single-handedly prior to 1648. In a stunning reversal of the picture of the Napoleonic Wars, they lost nearly every sub-war at the start of the Hundred Years' War - but by year 116 of that war, the overall conflict was won by the French, extinguishing all English claims to French territory. Another answer covered Charlemagne pretty well. 
Whether that counts as "French" depends on which timeframe you look at. Ditto Charles Martel. YES , French did win a couple of wars as a major part of an alliance since 1638. How many of them counts depends heavily on the defined scope as discussed above. Only one of them was 100% clear win under any scope one can think of: French won the second Italian War of Independence against Austria (e.g. Magenta ) during Second Italian War of Independence . "The next year, in 1860, with French and British approval, the central Italian states — Duchy of Parma, Duchy of Modena, Grand Duchy of Tuscany and the Papal States — were annexed by the Kingdom of Sardinia, and France would take its deferred reward, Savoy and Nice." Second Italian War of Independence . Another war they won that may or may not be counted depending on your scope (they fought as a major part of larger alliance; and they got no tangible benefits from the win) was Crimean War . However, I only included these for completeness of data. None of these counts towards the letter of the original claim that explicitly said " without outside help ". Some people prefer to include as "win" WWI or WWII - but both of them France effectively lost until USA and Britain (Battle of the Marne) intervened. And the same was true for every single time in the war that mattered - e.g. at Verdun, the French didn't start winning till Russian Brusilov offensive and British-dominated Somme offensive drew off German resources. But yes, they technically were among the winning allies in the end of the war ( which does nothing to address the original claim's spirit or letter ). NO , French did not win any war that they fought against a major nation since 1860, with or without caveats. WWI/WWII don't count as French "win" under any reasonable interpretation of the claim being examined (see above for more details on WWI). YES , French won numerous wars against rebels/natives in colonial conflicts, at various points in history including modernity. Invasion of Algiers in 1830. I think that qualifies as unconditional victory. So strictly speaking the answer to your question is "yes". This can be padded by yet more colonial-type victories that I'm too lazy to copy/paste out of Wiki (IndoChina) Malian Intervention was won by the French controlling all cities previously held by the guerrillas. Technically speaking, these all count as "Winning a war" and thus satisfy the original claim being examined. The fact the opponents were severely outclassed and outnumbered and out-resourced is worth noting, however. YES , non-French entities that lived in territory that of modern France won wars in the distant past, such as Viking-descended Normans winning Battle of Hastings and the whole Norman conquest of England . YES , there were some other military victories. But none of them should really count as they all come with major caveats. E.g. Battle of the Allia : Win. But that was Gauls, not really modern French. And Gauls lost the overall war to Rome. And the list of military conflicts that they had lost is indeed much longer, though some of that list is humorous spin. P.S. People seem to be questioning why I don't count WWI as being within the scope of the claim. I'll detail below: The claim very specifically was: anything without outside help . ... War of alliances are fine as long as France is the main contributor and the military leader similar to how America is sometimes joined by much smaller contributions by other nations . 
Based on those clarifications, the whole history of WWI leads to it not being even remotely in scope. First of all, the Russian army and Britain combined provided more raw manpower than France ( src ); AND suffered more casualties combined ( src ). Second, non-French participation was critical to France not losing to Germany at all 3 pivotal moments in the war: France was very nearly 100% conquered in 1914, with the only 2 reasons that it didn't happen being (a) the BEF's participation in the Battle of the Marne, where they were instrumental in breaking the German line, and (b) Russian/Serbian wins over Austria (which caused the Germans to shift divisions to the Eastern Front, creating the shortage of troops that allowed the break in the line to be exploited). Even discounting that, we have a similar situation at Verdun - where the French weren't winning (admittedly, not losing either) until (a) the British-led offensive on the Somme drew off some German troops from Verdun and (b) more importantly, the Brusilov offensive drew off even more German forces to the Eastern Front. The British naval blockade stacked the war economically against Germany (the French Navy wasn't even close to preventing German trade with the rest of the world, especially the USA). As a bonus, Germany invested enormous resources into its navy, which it couldn't use for anything productive in the end - which carried a clear opportunity cost in terms of the economic value of that investment. Additional meaningful non-French contributions: Americans backed British and French military capability heavily, both financially and through weapons sales. American entry into the war after Russia was knocked out by revolution shouldn't be discounted either, though that's the weakest argument among these. Each of those contributions separately - and especially all of them combined - far surpasses the bar of "without outside help" or "joined by much smaller contributions by other nations" | {
"source": [
"https://skeptics.stackexchange.com/questions/3246",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/631/"
]
} |
3,253 | Today in the cafeteria my friend dropped a chicken wing on the floor and immediately picked it up and ate it. Afterward he claimed that the chicken wing was still safe to eat if it was consumed within 5 seconds of being on the floor. He said that it's the "five second rule". Is this rule safe to follow? | Jillian Clarke researched this in 2003 when she was a high school science intern at the University of Illinois . Among Clarke's findings : 70% of women and 56% of men are familiar with the 5-second rule, and most use it to make decisions about tasty treats that slip through their fingers. Women are more likely than men to eat food that's been on the floor. Cookies and candy are much more likely to be picked up and eaten than
cauliflower or broccoli . And, if you drop your food on a floor that does contain
microorganisms, the food can be
contaminated in 5 seconds or less . Clarke was awarded the 2004 Ig Nobel Prize in Public Health for her work. Food scientist Paul Dawson at Clemson University also looked into it. His findings were published in the Journal of Applied Microbiology : Three experiments were conducted to
determine the survival and transfer of Salmonella Typhimurium from wood, tile
or carpet to bologna (sausage) and
bread . In the case of the 5-second-rule we
found that bacteria was transferred
from tabletops and floors to the food
within five seconds , that is the 5-second-rule is not an accurate guide
when it comes to eating food that has
fallen on the floor. The MythBusters also busted the 5-second-rule : Even if something spends a mere
millisecond on the floor, it attracts
bacteria. How dirty it gets depends on
the food's moisture, surface geometry
and floor condition — not time. Here is the video. | {
"source": [
"https://skeptics.stackexchange.com/questions/3253",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2760/"
]
} |
3,264 | The article Scientists cure cancer, but no one takes notice claims big pharma and the media are ignoring dichloroacetate because it's out of patent protection - but that question is answered at Dichloroacetate (DCA) as a cure for cancer The bit I'm interested in is where it says In human bodies there is a natural
cancer fighting human cell, the
mitochondria, but they need to be
triggered to be effective. Do they fight cancer in any way, shape or form, or is the author mistaking them for midi-chlorians ? Also, are mitochondria cells? | Aside from their main function in cellular respiration, mitochondria play an essential role in the regulation of programmed cell death (apoptosis). A variety of key events in apoptosis
focus on mitochondria, including the
release of caspase activators (such as
cytochrome c), changes in electron
transport, loss of mitochondrial
transmembrane potential, altered
cellular oxidation-reduction, and
participation of pro- and
antiapoptotic Bcl-2 family proteins. -- source Cancer cells have to find a way to evade triggering apoptosis in order to survive, and the treatment of cancer often relies on triggering apoptosis. For example, it is now clear that some
oncogenic mutations disrupt apoptosis,
leading to tumor initiation,
progression or metastasis. Conversely,
compelling evidence indicates that
other oncogenic changes promote
apoptosis, thereby producing selective
pressure to override apoptosis during
multistage carcinogenesis. Finally, it
is now well documented that most
cytotoxic anticancer agents induce
apoptosis, raising the intriguing
possibility that defects in apoptotic
programs contribute to treatment
failure. -- source Strictly speaking, the statement is wrong: mitochondria are not cells and they don't explicitly fight cancer. But they are involved in apoptosis, a very important mechanism against cancer. This sounds pretty much like a reporter trying to summarize a topic he doesn't understand. | {
"source": [
"https://skeptics.stackexchange.com/questions/3264",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/104/"
]
} |
3,331 | In India, a large number of people believe in black magic. Black magic is the belief in practices of magic that draw on assumed malevolent
powers. This type of magic is invoked when wishing to kill, steal, injure, cause
misfortune or destruction, or for personal gain without regard to harmful
consequences. -wikipedia I have personally seen people speak of this thing and how it has spoiled their health/business etc. In India , there are even products that claim to protect one from black magic. Does black magic really work ? | "Black magic" is a really large umbrella for a wide range of claims, but the very definition of any "magic" is that it is somehow supernatural. Naturally (!), no compelling evidence has ever been put forth for any supernatural phenomenon. Once a phenomenon is observable, reproducible, and testable, and shown to exist, it may turn out to violate our current understanding of the natural laws, but if the phenomenon is for real, we shall have to adjust our views to accommodate this, and the phenomenon shall cease to be considered supernatural. "Magic" will never be shown to exist. All rigorously tested supernatural claims have turned out to be fake, and the great number of untested supernatural claims can most likely be largely attributed to the fact that the practitioners are aware of their fraud, and reluctant to be exposed; see the million dollar challenge in Regebro's answer. More specifically, in regions of India there is a somewhat widespread belief in tantra . While few practitioners would lend themselves to scientific studies for reasons explained above, there has been one notable appearance where India's allegedly most powerful tantrik was challenged on live TV, in front of an audience of millions of people, to kill a person with the aid of black magic alone. Rationalist International has a good writeup on the story. In short, the tantrik claimed he could kill any person he wanted within three minutes, but did not manage to inflict any sort of damage during what went on for more than three hours. Rationalist International concludes: Tantra power had miserably failed. Tantriks are creating such a scaring atmosphere that even people, who know that black magic has no base, can just break down out of fear, commented a scientist during the program. It needs enormous courage and confidence to challenge them by actually putting one's life at risk, he said. By doing so, Sanal Edamaruku has broken the spell, and has taken away much of the fear of those who witnessed his triumph. | {
"source": [
"https://skeptics.stackexchange.com/questions/3331",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2079/"
]
} |
3,332 | I think everyone may have heard this at one time or another, but it's said that women prefer men with a "good sense of humor". The idea is so commonly held, it even has its own abbreviation on dating services: GSOH. I know I've seen this claim more than a few times over the years, but mostly in magazines known more for their fashion advice than their academic rigor. However, I came across this... The article here seems to contribute to this idea. Has this claim ever been scientifically validated? Or is all the evidence merely anecdotal? Could this be due to post-hoc rationalization, as Chris Rock noted when he said, "women don't like men with a sense of humor, they just like it when the guy they want to f@#k happens to be funny." | From Psychology Today : According to Eric Bressler , a
psychologist at McMaster University in
Canada , men and women don't mean the
same thing when they say they value
humor in a long-term partner . [He] found that women want a man
who is a humor "generator," while men
seek a humor "appreciator." Production and appreciation of humor as sexually selected traits [...] ... a German study found that when male
and female strangers engaged in
natural conversation, the degree to
which a woman laughed while talking to
a man was indicative of her interest
in dating him . How much the woman
laughed also predicted the man's
desire to date her . On the flip side, how often a man laughed was unrelated
to his interest in a woman . (the study was conducted by Karl Grammer and Irenaus Eibl-Eibesfeldt , but at the moment I can't find it) This study mentions in its abstract: While there are a relatively small
number of studies in the area, those
looking at humour have found strong
correlations between humour and
increased attractiveness, but only for
women rating men. Psychologist Kristofor McCarty of Northumbria University : " A quick browse of lonely hearts ads
will confirm that women look for a
good sense of humour in a potential
partner - our research may explain why
this is the case." McCarty asked 45 women to rate the
personalities behind a selection of
lonely hearts ads drawn up especially
for the study. The
funny men were rated as more
intelligent, despite the ads
containing no clues on IQ. They were also seen as more honest and
better material for a relationship and
for friendship. The results of this study : ... suggest that the human sense of humor evolved at least partly
through sexual selection as an
intelligence-indicator. On the biological differences between men and women: The Times - One day, girls, you will laugh at this Experiments at Stanford University in
California found that women use more
parts of the brain than men to process
jokes and have less expectation that
they will find them funny . The experiments found that women
displayed more intense activity than
men in the prefrontal cortex of the
brain, which controls language
interpretation and in-depth analytical
processes. They took slightly longer
to react to jokes that were funny, but
enjoyed the punchlines more .
Researchers, however, said the time
difference was marginal . | {
"source": [
"https://skeptics.stackexchange.com/questions/3332",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/486/"
]
} |
3,348 | I know that frame-rates above 60fps all look the same to the human eye. Is that true? Why? If so, why do graphics cards boast anything higher than that? | Eyes? No. Humans? Yes. You'd be hard pressed to get 60fps out of the human eye. In laboratory conditions, it takes
around 150 ms for neurons in the
visual system to begin to recognize
and categorize a newly appearing
visual input. However, this little factoid is not the frame rate specification for human vision. If real-world perception were to
If real-world perception were to follow this same pattern, then for a considerable time after each saccade we would still be perceiving the old retinal input, rather than the information currently on the retina. In fact, we should have to wait around 150 ms to 'see' what is in front of our eyes after each saccade, by which time the oculomotor system has already begun to choose the next saccadic target. That would suck. Fortunately, the human eye is more than a camera* with fat pipe connection to the brain.

While holding a pen, for example, the sensory input is limited to the receptors of a few fingers, leaving the majority of the surface of the pen outside of our direct sensory range. Nonetheless, we perceive a complete object, not a pen with holes where our fingers do not touch. Similarly, our visual system actively perceives the world by pointing the fovea, the area of the retina where resolution is best, towards a single part of the scene at a time.

Human vision does not have properties like frame rate, latency, resolution, et al. Visual constancy can also be viewed as a temporal phenomenon: objects appear to be continuously present over time. Yet the duration of external events are typically longer than that of a single sensory 'sample' such as a fixation. Although movements of the eyes, head and body disrupt our steady access to these objects and events, the stream of consciousness continues smoothly across these sensory disruptions. This is an amazing feat, given that each saccadic eye movement creates a temporal disruption in the flow of information from the retina to higher perceptual areas. The motor smear on the retina during the saccade is suppressed, making us largely unaware of the retinal stimulation during this time period. In addition, each saccade requires the visual system to 're-perceive' the information from a new fixation.

Time is relative... ...perceived time seems to shift forward, towards the beginning of the new fixation, essentially compressing the time immediately before and during the saccadic eye movement. One possible interpretation is that space and time are inextricably linked in the brain, with the pattern of strange perceptual effects reported for stimuli flashed around the time of saccades reflecting a spatio-temporal transformation between fixations.

The Bottom Line... Human vision is not bound by frame rate. Source of all quotes: Visual stability, References

More... Looking ahead: The perceived direction of gaze shifts before the eyes move; How Human Vision Perceives Rapid Changes; From eye movements to actions: how batsmen hit the ball; Vision and the representation of the surroundings in spatial memory

* Actually comparing the human eye to a camera is like comparing a thermonuclear weapon to a pen-knife. | {
"source": [
"https://skeptics.stackexchange.com/questions/3348",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2829/"
]
} |
3,360 | I've heard it claimed from various people that we have historical records for the darkness that was said to engulf the earth when Jesus was crucified. In the bible, it claims that there was, but I would like more scientific/verifiable accounts on the matter. My question is, do we have any non-biblical, reliable sources that there was an eclipse of some sort when Jesus was crucified? Or are people who claim that there was a documented eclipse during this time mistaken? Do we have any evidence that an eclipse did not happen? | Passover occurs at the middle of the month in a lunar calendar which starts with the new moon. Since the crucifixion was supposed to happen close to Passover (around April), and therefore close to a full moon, there could not have been a solar eclipse, which occurs with a new moon. If you disbelieve the Passover part of the story, you can check solar eclipses around Jerusalem here. Year 32 has a solar eclipse sort of close to Passover (i.e. two weeks away). If you go for a lunar eclipse --which does happen with a full moon--then the only matching date is the 3rd of April, year 33. Anyway, there were certainly solar and lunar eclipses then as there are now, but given how difficult it is to come up with any independent confirmation of when the crucifixion happened, finding historical reports of eclipses won't help answer anything. | {
"source": [
"https://skeptics.stackexchange.com/questions/3360",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2808/"
]
} |
3,364 | I recently was in a discussion where one of the people, as an argument, claimed that Jews have been known to participate in activities clearly detrimental to Jews as a whole (the argument itself had nothing to do with Jews, it was merely an analogy). To support his argument, he made two claims that really didn't sound very convincing to me though he indicated he was certain both were factual as opposed to myth. The first claim he made was that there was at least one Jew who was among the Nazi army high level brass. So, the question is, Is there a historical record of a Jew (who was officially considered Jewish by the contemporary Nazi Germany laws), who nevertheless served as high level officer in German army? Please note that I'm interested in someone who's documented (as opposed to rumored) to be Jewish. Also, to avoid "who's Jewish" definition arguments, I am using the most applicable (though abhorrent) definition - the Nazi law on who is a Jew. | I would need to research further to confirm, but I have the feeling that the person who made that argument may have meant Erhard Milch . From Wiki, it sounds that he was, indeed, high level brass: In 1933, Milch took up a position as State Secretary of the newly-formed Reichsluftfahrtministerium ("Reich Aviation Ministry" – RLM), answering directly to Hermann Göring. In this capacity, he was instrumental in establishing the Luftwaffe, originally responsible for armament production At the outbreak of World War II Milch, now with the rank of general, commanded Luftflotte 5 during the Norwegian campaign. Following the defeat of France, Milch was promoted to field-marshal (Generalfeldmarschall) and given the title Air Inspector General. Milch was put in charge of the production of planes during this time. However, it appears that in answer to your very specific question, Erhard Milch did NOT indeed fit the specific definition you used. His father was Jewish, which means he had at most 2 documented Jewish grandparents - the Nazi laws classified him as a Mischling ("crossbreed") and not a full Jew (3+ Jewish grandparents). The Wiki provides the following detail (sources apparently from Benno Müller-Hill, Murderous science: elimination by scientific selection of Jews (1998), p. 26 ): In 1935, Milch's ethnicity came into question because his father, Anton Milch, was a Jew. This prompted an investigation by the Gestapo that Göring squelched by producing an affidavit signed by Milch's mother stating that Anton was not really the father of Erhard and his siblings, and naming their true father as Karl Brauer, her uncle. These events and his being issued a German Blood Certificate prompted Hermann Göring to say famously " Wer Jude ist, bestimme ich" ("I decide who is a Jew ") An independent confirmation is quoted in a project from UCSB's class "for Prof. Marcuse's lecture course Interdisciplinary Perspectives on the Holocaust; UC Santa Barbara, Fall 2005". The quote is apparently from " Rigg, Mark. Hitler’s Jewish Soldiers, University Press of Kansas, 2002 " Field Marshal and State Secretay of Aviation Erhard Alfred Richard Oskar Milch’s "Aryanization" was the most famous case of a Mischling falsifying a father. 
In 1933, Frau Clara Milch went to her son-in-law, Fritz Heinrich Hermann, police president of Hagen and later SS general, and gave him an affidavit stating that her deceased uncle, Carl Brauer, rather than her Jewish husband, Anton Milch, had fathered her six children.… In 1935, Hitler accepted the mother's testimony… ( Rigg, 29 ) He's still a good example of what most people would consider a "somewhat Jewish" person serving the Nazis at the top, but he does NOT fit the claim as you defined it in your question. P.S. As a caveat, Erhard Milch was not in the army (as your arguer claimed) - he was in the air force (Luftwaffe). So either the person was mistaken slightly, or they meant a different person and my answer is wrong. | {
"source": [
"https://skeptics.stackexchange.com/questions/3364",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2801/"
]
} |
3,371 | Is it true that hot water freezes faster than cold water and if so, what practical applications have there been found for this phenomenon? | In certain settings, cold water freezes slower than hot water. This is called the Mpemba effect . The Mpemba effect is the observation that warmer water sometimes freezes faster than colder water. Although the observation has been verified, there is no single scientific explanation for the effect. Can hot water freeze faster than cold water? , Monwhea Jeng, University of California, 1998 Hot water can in fact freeze faster than cold water for a wide range of experimental conditions. This phenomenon is extremely counterintuitive, and surprising even to most scientists, but it is in fact real. It has been seen and studied in numerous experiments. While this phenomenon has been known for centuries, and was described by Aristotle, Bacon, and Descartes [1—3], it was not introduced to the modern scientific community until 1969, by a Tanzanian high school student named Mpemba. Some suggested reasons given in the paper: Evaporation — As the initially warmer water cools to the initial temperature of the initially cooler water, it may lose significant amounts of water to evaporation. The reduced mass will make it easier for the water to cool and freeze. Then the initially warmer water can freeze before the initially cooler water, but will make less ice. [...] Dissolved Gasses — Hot water can hold less dissolved gas than cold water, and large amounts of gas escape upon boiling. So the initially warmer water may have less dissolved gas than the initially cooler water. [...] | {
"source": [
"https://skeptics.stackexchange.com/questions/3371",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2836/"
]
} |
3,410 | I've heard it claimed that there are no naturally blue foods and that blueberries don't count, because they are more purple. However, I have a hard time believing such a blanket statement so I want to ask this here. If you count foods that are blue in nature, but not blue when prepared, is this statement false? Are there no foods, in the state that we eat them, that are naturally blue? In terms of a definition for food, it would be something that is not only edible, but is commonly eaten by any group of people. So, something that is edible, but not commonly eaten by any group of people would not be considered. Blue is a light wave having a spectrum dominated by energy with a wavelength of roughly 440–490 nm. Defined specifically by Wikipedia | Blåbär (Common Bilberry) I come from a berry obsessed culture that every year consume a wide array of different berries. One of the most common ones that are native to my country is Vaccinium myrtillus more commonly called blåbär in Swedish which literally means blue berry. They aren't the same as the American blue berry ( Vaccinium cyanococcus ) that George Carlin most likely made fun of. Blue Crawdads Crawdads are sometimes blue, but they turn red on cooking. I'm not sure if that would count under your criteria, but people certainly find them appetizing enough to try to cook them. Atlantic lobsters Homarus Americanus are also blue until cooked. Starflowers Borago officinalis is sometimes eaten fresh and apparently has a cucumber-like taste. It grows in Asia and the middle east. Indigo Milk Cap Lactarius indigo, commonly known as the indigo milk cap, the indigo Lactarius, or the blue milk mushroom, is a species of agaric fungus in the family Russulaceae. [..] It is an edible mushroom, and is sold in rural markets in China, Guatemala, and Mexico. | {
"source": [
"https://skeptics.stackexchange.com/questions/3410",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2808/"
]
} |
3,411 | I have read in several places that people use big, fancy, complicated, and little known words (such as Brobdingnagian) to give the impression that they are knowledgeable, smart, and professional. Does that work? | Although this doesn't answer your question directly, I think it does a good job of answering indirectly. Research shows a strong correlation between vocabulary and general intelligence. So does using big words make you appear smart, maybe, but having a high level of vocabulary (and being able to use it) would indicate that you actually are smarter. The key to the sentence above is actually being able to use the words correctly. Just going out and learning a bunch of words is not going to immediately make you smarter, but having a strong grasp of the language and a wide vocabulary indicates you are smarter. Your question has a large subjective part to it, because if I am really clever and use lots of clever words I may be smarter than someone with a low I.Q, but they may just think I'm a dick. This means that your question answered in that way can't be answered objectively. Now on to some examples: Analysis indicated strong correlations between the two measures,
particularly between the CREVT General Vocabulary and WISC-III Verbal IQ (r = .80), WISC-III Verbal Comprehension Index (r = .83), and the Vocabulary subtest (r = .76). These results held across the grades. (Smith, Smith, Taylor, & Hobby, 2005)

... Acquisition of word meanings, or vocabulary, reflects general mental ability (psychometric g) more than do most abilities measured in test batteries. Among diverse subtests, vocabulary is especially high on indices of genetic influences. Behavioral and Brain Sciences (2001), 24: 1109-1110 Copyright © 2001 Cambridge University Press
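As a side note on reading these figures, a correlation of r = .80 is very strong for psychometric data. A minimal sketch of how such a coefficient is computed, on made-up data with the relationship built in (the numbers are illustrative only, not from any of the cited studies):

    import random
    import statistics as st

    def pearson_r(xs, ys):
        # Correlation: covariance divided by the product of the
        # standard deviations.
        mx, my = st.mean(xs), st.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
        return cov / (st.pstdev(xs) * st.pstdev(ys))

    random.seed(0)
    vocab = [random.gauss(100, 15) for _ in range(1000)]
    iq = [0.8 * v + 0.6 * random.gauss(0, 15) for v in vocab]  # link built in
    print(round(pearson_r(vocab, iq), 2))  # ~0.8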
This page provides about 10 examples with unlinked references. Examples include: Shows high positive correlation between JOCRF vocabulary score and SAT-verbal. Bowker, R. (1976) ... "English vocabulary level has been shown to be strongly related to educational success. In addition, it is related to the level of occupation attained. It is highly correlated with measures of reading ability and intelligence" Bowker, R. (1981) | {
"source": [
"https://skeptics.stackexchange.com/questions/3411",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/-1/"
]
} |
3,430 | So here's the premise. A cat reaches its terminal velocity after around 10meters of free fall. A cat can survive a landing from a speed equal to its terminal velocity. Therefore a cat can survive a fall from any height. This seems actually quite feasible and would be tremendous if it holds some truth in the majority of cases. I guess there are plenty of animals that can survive their own terminal velocity but a cat somehow just seems too close to home, too familiar. I also realise that this is a difficult claim to prove or falsify as throwing cats out of windows for experimental purposes doesn't seem the most moral thing. Maybe a collated record of accidents? But that's not too scientific. | As was brought up on in Is the use of parachutes supported by peer-reviewed papers? Where Andrew Grimm pointed to a study from 1987 which is widely reported (it's paywalled so I can't check myself) to say that not only do cats survive terminal velocity, but that their chance of survival increase over some shorter distances. That said the actual study cites that the cats falling from buildings had a 90% survival rate (after treatment), but also a lot of injuries. From the abstract: High-rise syndrome was diagnosed in 132 cats over a 5-month period. The mean age of the cats was 2.7 years. Ninety percent of the cats had some form of thoracic trauma. Of these, 68% had pulmonary contusions and 63% had pneumothorax. Abnormal respiratory patterns were evident clinically in 55%. Other common clinical findings included facial trauma (57%), limb fractures (39%), shock (24%), traumatic luxations (18%), hard palate fractures (17%), hypothermia (17%), and dental fractures (17%). Emergency (life-sustaining) treatment, primarily because of thoracic trauma and shock, was required in 37% of the cats. Nonemergency treatment was required in an additional 30%. The remaining 30% were observed, but did not require treatment. Ninety percent of the treated cats survived. The Straight Dope details how far the cats fell which mentions terminal velocity: But here's the weird part. When the vets analyzed the data they found that, as one would expect, the number of broken bones and other injuries increased with the number of stories the cat had fallen — up to seven stories. Above seven stories, however, the number of injuries per cat sharply declined. In other words, the farther the cat fell, the better its chances of escaping serious injury. The authors explained this seemingly miraculous result by saying that after falling five stories or so the cats reached a terminal velocity — that is, maximum downward speed — of 60 miles per hour. Thereafter, they hypothesized, the cats relaxed and spread themselves out like flying squirrels, minimizing injuries. This speculation is now widely accepted as fact. Although the Straight Dope is also careful to point out that perhaps the reason why more terminal velocity cats appear to survive is that the one that didn't land so gracefully wasn't brought into the emergency room and as such the statistics could be skewed. A more recent study from 2004 cites the previous study as well as several others. The cats in this study had a higher survival rate: High-rise syndrome was more frequent during the warmer period of the year. 96.5% of the presented cats, survived after the fall. 
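As a physics aside, the ~60 mph terminal velocity cited by the Straight Dope above is roughly what a simple drag-balance estimate gives. This is only a sketch: the mass, drag coefficient, and frontal area below are assumed round numbers for a spread-out cat, not measured values.

    import math

    def terminal_velocity(mass_kg, drag_coeff, area_m2, rho=1.2, g=9.81):
        # Speed at which drag (0.5 * rho * Cd * A * v**2) balances weight (m * g).
        return math.sqrt(2 * mass_kg * g / (rho * drag_coeff * area_m2))

    v = terminal_velocity(mass_kg=4.0, drag_coeff=1.0, area_m2=0.1)
    print(f"{v:.0f} m/s (~{v * 2.237:.0f} mph)")  # ~26 m/s, ~57 mph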
The 2004 study also goes into rather deep detail on the various injuries sustained by the cats in all the studies, states that cats don't reach terminal velocity until after the 6th floor, and reaches the same conclusion as the previous studies: This substantiates the theory that cats falling at least seven stories flex their limbs so that truncal injuries are more common, while cats falling from distances lower than seven stories extend their limbs, the consequence being a greater incidence of limb fractures. Somewhat interestingly and relatedly, it cites a study on high-rise syndrome in dogs from 1993 that says dogs cannot survive a fall from more than 6 stories. If we want to investigate further perhaps we should ask Disney to record a movie on the life of wild cats. | {
"source": [
"https://skeptics.stackexchange.com/questions/3430",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/-1/"
]
} |
3,433 | My computing teacher told us that closed source software is more secure than open source software, because with open source "anyone can modify it and put stuff in." This is why they do not want to use open source alternatives for learning to program, such as FreePascal (currently using Embarcadero Delphi, which is slow and buggy.) I think this is completely wrong. For example Linux seems to be considerably more resilient to exploits than Windows; although it could be down to popularity/market share. What studies have been performed which show that open source or closed source is better in terms of security? | "Secure design, source code auditing,
quality developers, design process, and other factors, all play into the security of a project, and none of these are directly related to a project being open or closed source." Source: Open Source Versus Closed Source Security | {
"source": [
"https://skeptics.stackexchange.com/questions/3433",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1626/"
]
} |
3,435 | I know there is global warming, and I know that it is caused by human activity, but is carbon dioxide the cause of it? I read somewhere that apparently increase of CO₂ doesn't cause the increase in global temperatures, but rather, global temperatures cause the increase of CO₂. Can someone verify or disprove this claim? | The Earth’s greenhouse effect is a natural occurrence that helps regulate the temperature of our planet. When the Sun heats the Earth, some of this heat escapes back to space. The rest of the heat, also known as infrared radiation, is trapped in the atmosphere by clouds and greenhouse gases, such as water vapor and carbon dioxide. If all of these greenhouse gases were to suddenly disappear, our planet would be 60ºF (33ºC) colder and would not support life as we know it. Human activities have enhanced the natural greenhouse effect by adding greenhouse gases to the atmosphere, very likely (greater than 90 percent chance) causing the Earth’s average temperature to rise. These additional greenhouse gases come from burning fossil fuels such as coal, natural gas, and oil to power our cars, factories, power plants, homes, offices, and schools. Cutting down trees, generating waste and farming also produce greenhouse gases. Source: The Environmental Protection Agency (EPA) (via the Internet Archive: URLs listed here are the original locations) http://www.epa.gov/climatechange/fq/science.html, as it appeared in May 2012 You may also want to read: http://www.epa.gov/climatechange/science/stateofknowledge.html, as it appeared in May 2012 This page acknowledges the gaps in scientific climate knowledge, and differentiates fact from speculation/uncertain predictions. Update: Adding a NASA site which specifically references CO2 as a greenhouse gas: https://climate.nasa.gov/vital-signs/carbon-dioxide/ | {
"source": [
"https://skeptics.stackexchange.com/questions/3435",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2382/"
]
} |
3,451 | After stumbling upon a video of a guy getting knocked down in one hit , I did some quick research and couldn't conclude if this is a myth or not. The video in itself is very compelling evidence for fact. Often we see it used as a plot resource in movies, when someone knocks the other person unconscious by hitting their head. Sometimes not even very hard. And we do see people get knocked out in real tv as well, although they often hide the technical details. I've also heard/read that to actually manage to knock someone unconscious in reality, a very strong hit causing Traumatic brain injury is required, potentially leaving scars on the brain. So, what's the verdict? Hopefully we wouldn't need a scientific research to come up with a conclussion (phun intended)... I mean, there must be enough statistics about this just from sports or hospitals already! :) | In movies, knocking someone unconscious can look like this: It can be a PG-13 way to take care of a bad guy, who wakes up sometime later with only a headache.
(However, more realistic depictions can be found, usually in war movies.) In real life, it looks more like this: What isn't shown is that in reality a person knocked unconscious is usually knocked out only for a few seconds, minutes at most. If a person is knocked out for longer than that, this may indicate severe brain damage, which could lead to loss of function, life-long debilitation, coma, and death. Essentially, a blow hard enough to knock a person unconscious is classified as a Traumatic Brain Injury (TBI). Since most guards, henchmen, etc. in movies are knocked out for extended periods of time, it is quite possible that they may suffer severe brain damage. Also the force ( Scientific American has one pro boxer's punch at 400kg ) required to knock someone out might also break the skull or kill the person. Even wikipedia's article on boxing states that there is no clear line drawn between the force needed to knock someone out and the force needed to kill that person. So, knocking someone unconscious by hitting them in the head is clearly not as practical or consequence-free as tv and movies might lead one to believe. The most common causes of being knocked unconscious are related to either falls or vehicle crashes. However, direct trauma to the head is another cause; the CDC lists assault as accounting for 10% of reported cases. ( Fact: Chuck Norris is responsible for 9.7% )

What is TBI? Traumatic brain injury is the most common cause of death and disability in young people. There is much hope for improvement in early care and functional outcome by use of scientific evidence-based guidelines. Traumatic brain injury is graded as mild, moderate, or severe on the basis of the level of consciousness or Glasgow coma scale (GCS) score after resuscitation (panel). Mild traumatic brain injury (GCS 13–15) is in most cases a concussion and there is full neurological recovery, although many of these patients have short-term memory and concentration difficulties. In moderate traumatic brain injury (GCS 9–13) the patient is lethargic or stuporous, and in severe injury (GCS 3–8) the patient is comatose, unable to open his or her eyes or follow commands. Patients with severe traumatic brain injury (comatose) have a significant risk of hypotension, hypoxaemia, and brain swelling. If these sequelae are not prevented or treated properly, they can exacerbate brain damage and increase the risk of death. source
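The GCS bands in the quoted grading scheme are easy to express in code. A minimal sketch (note: the quote's "moderate" band of 9–13 overlaps "mild" at 13; the 9–12 band used here is an assumption that resolves the overlap the way most GCS tables do):

    def classify_tbi(gcs: int) -> str:
        """Map a Glasgow Coma Scale score (3-15) to the severity bands
        quoted above: mild 13-15, moderate 9-12, severe 3-8."""
        if not 3 <= gcs <= 15:
            raise ValueError("GCS scores range from 3 to 15")
        if gcs >= 13:
            return "mild"
        if gcs >= 9:
            return "moderate"
        return "severe"

    print(classify_tbi(14), classify_tbi(10), classify_tbi(6))  # mild moderate severe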
In reality, TBI consists of more than just the initial impact.

Step 1: Impact. The main thing to keep in mind is: Although TBI is a problem of major medical and socioeconomic significance, its pathogenesis is incompletely understood, and it is often difficult to reconstruct the events leading to the primary and secondary lesions of varying severity and regional distribution that constitute TBI. source

This means that while there are some general aspects and theories we can apply broadly to patients with TBI, it is by no means a cut-and-dry phenomenon, and individual cases show great variation. At the time of the initial impact, the brain is injured in two places, the place of the impact and the side opposite the place of impact. This happens because the brain is surrounded by fluid and can be moved if enough force is applied. In movies, it is this initial impact which renders the victim unconscious; however in reality, this is not always the case.

Step 2: Secondary Injuries. In reality, the person may not be rendered unconscious by the primary injury. However, they may become unconscious later due to the secondary injuries. Secondary injuries are typically caused by bleeding or swelling within the skull which compresses the brain. The principal mechanisms of TBI are classified as (a) focal brain damage due to contact injury types resulting in contusion, laceration, and intracranial haemorrhage or (b) diffuse brain damage due to acceleration/deceleration injury types resulting in diffuse axonal injury or brain swelling. Outcome from head injury is determined by two substantially different mechanisms/stages: (a) the primary insult (primary damage, mechanical damage) occurring at the moment of impact. In treatment terms, this type of injury is exclusively sensitive to preventive but not therapeutic measures. (b) The secondary insult (secondary damage, delayed non-mechanical damage) represents consecutive pathological processes initiated at the moment of injury with delayed clinical presentation. Cerebral ischaemia and intracranial hypertension refer to secondary insults and, in treatment terms, these types of injury are sensitive to therapeutic interventions. source

Some common occurrences in head injuries:

Concussions. The word "concussion" has many different meanings to patients, families, and physicians. One definition: a condition in which there is a traumatically induced alteration in mental status, with or without an associated loss of consciousness (LOC). A broader definition for concussion: A traumatically induced physiologic disruption in brain function that is manifest by LOC, memory loss, alteration of mental state or personality, or focal neurologic deficits. While there are many individual variations, concussions usually result in relatively temporary impairment of neurologic function. Again, things are not so clear cut when dealing with concussions, and post-concussion syndromes: Post concussive syndrome (PCS), a sequela of minor head injury (MHI), has been a much-debated topic. Muddled by conflicting findings regarding symptom duration, an absence of objective neurologic findings, inconsistencies in presentation, poorly understood etiology, and significant methodologic problems in the literature, postconcussive syndrome (PCS) remains controversial. Depending on the definition and the population examined, 29-90% of patients experience postconcussive symptoms shortly after the traumatic insult. source (medscape link) (One symptom of concussions is nausea/vomiting which you don't see in movies too often.)

Intracranial Hematomas. A hematoma is a swelling of blood confined to an organ or tissue, caused by hemorrhaging from a break in one or more blood vessels. As a cerebral hematoma grows, it damages or kills the surrounding brain tissue by compressing it and restricting its blood supply, producing the symptoms of stroke. The hematoma eventually stops growing as the blood clots, the pressure cuts off its blood supply, or both. source They are classified from small to massive depending on diameter and volume. Effects vary according to size and location. White arrows are pointing to the hematoma.

Intracranial Hemorrhages. Black arrows point to subdural bleeding. White arrow points to the midline shift of the brain. The build-up of blood in the skull is putting extensive pressure on the brain. Enough bleeding will essentially "crush" the brain, causing the brainstem to herniate.

Diffuse Axonal Injury. Basically this is extensive damage to the white matter. Diffuse axonal injury is one of the most important types of brain damage that can occur as a result of non-missile head injury. Increasing experience with fatal non-missile head injury in man has allowed the identification of three grades of diffuse axonal injury. In grade 1 there is histological evidence of axonal injury in the white matter of the cerebral hemispheres, the corpus callosum, the brain stem and, less commonly, the cerebellum; in grade 2 there is also a focal lesion in the corpus callosum; and in grade 3 there is in addition a focal lesion in the dorsolateral quadrant or quadrants of the rostral brain stem. source Diffuse axonal injuries can occur at the time of the initial impact, or develop during the minutes or hours after the injury.

Length of time unconscious correlates to severity of the brain injury. Post-traumatic amnesia (PTA) is defined as the time from the initial injury until the patient can demonstrate conscious memory of what is going on around him/her. The duration of PTA was the best predictor of outcome selected in this model for all endpoints and elements of the physical examination provided additional predictive value. source (medscape link) Age is also a factor in predicting outcome... Duration of PTA appears to be a useful variable in predicting specific functional outcome in the TBI population receiving inpatient rehabilitation services. The use of age as a factor in addition to duration of PTA enhances the prediction of functional outcome. source (medscape link)

Concluding (finally): Most movies simply cherry-pick the most convenient aspects of head injury to advance their plot. Either the person will only be unconscious for a very short time and wake up relatively fine, or the person will be unconscious for an extended time, but likely suffer severe consequences. | {
"source": [
"https://skeptics.stackexchange.com/questions/3451",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1125/"
]
} |
3,465 | There's a little tool called f.lux that claims: During the day, computer screens look good — they're designed to look like the sun. But, at 9PM, 10PM, or 3AM, you probably shouldn't be looking at the sun. F.lux fixes this: it makes the color of your computer's display adapt to the time of day, warm at night and like sunlight during the day. It's even possible that you're staying up too late because of your computer. You could use f.lux because it makes you sleep better, or you could just use it just because it makes your computer look better. Is it true that the color temperature of a computer screen could upset one's biorhythm? | One study says, "possibly". Note that the illumination provided by the light sources used in the study may not be consistent with the light coming from a computer monitor. Also note that the sample size is not large, and a mechanism for the effect is not proposed. ( source ) J Physiol Anthropol Appl Human Sci. 2005 Mar;24(2):183-6. Effect of color temperature of light sources on slow-wave sleep. Kozaki T, Kitamura S, Higashihara Y, Ishibashi K, Noguchi H, Yasukouchi A. Department of Physiological Anthropology, Faculty of Design, Kyushu University, Japan. [email protected] In order to examine whether the spectral compositions of light source may affect sleep quality, sleep architecture under different color temperatures of light sources was evaluated. Seven healthy males were exposed to the light sources of different color temperatures (3000 K, 5000 K and 6700 K) for 6.5 h before sleep. The horizontal illuminance level was kept at 1000 lux. Subjects slept on a bed in near darkness (< 10 lux) after extinguishing the light, and polysomnograms recorded the sleep parameters. In the early phase of the sleep period, the amount of stage-4 sleep (S4-sleep) was significantly attenuated under the higher color temperature of 6700 K compared with the lower color temperature of 3000 K. Present findings suggest that light sources with higher color temperatures may affect sleep quality in a view that S4-sleep period is important for sleep quality. A recent article in Scientific American ( source ) discussed a possible mechanism for the effect. Many years later researchers extended Keeler’s observation, showing that mice genetically engineered to lack rods and cones (the light receptors involved in vision) nonetheless reacted to changes in light by adjusting their circadian clock—the internal timer that synchronizes hormone activity, body temperature and sleep. The animals performed the usual daytime activities when in daylight and nighttime activities when in the dark. They could do so even though their retinas lacked the photoreceptor cells that vertebrate eyes use to form images, although surgically removing their eyes abolished this ability. This phenomenon may be common to many mammals, including humans: recent experiments have shown that certain blind people can also adjust their circadian clocks and constrict their pupils in response to light. | {
"source": [
"https://skeptics.stackexchange.com/questions/3465",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1183/"
]
} |
3,469 | I've encountered this one many times over the years, mostly in those "useless facts" books and sites such as the one here . In fact, it's been one of those bits of trivia I seem to have unconsciously taken for granted as true, probably due to the sheer number of times I've heard it. It occurred to me though that while I've often heard the claim stated, I've never seen it proven. Has it been scientifically proven that ants are capable of lifting 50 times their weight? For the pedants: It doesn't matter what kind of ant We are assuming an otherwise healthy and normal ant (of any kind). | Source Rex Kerr 's answer has linked to photographic evidence of an Asian Weaver ant lifting 100 times its bodyweight (no, it's not the one above). The picture won first prize in the first Biotechnology and Biological Sciences Research Council science photo competition. To me the amazing thing is that the ant is actually clinging upside-down to a smooth surface while lifting that 500mg weight: Source But , ants are actually not stronger than humans. The reason why ants can lift so much is due to scaling , meaning it has to do with math, not muscles. Strength : The strength of a muscle scales with
the cross-sectional area. (Exercise makes a muscle bigger, but not longer.) Source This means the strength of an organism increases as the square of the scale factor.

Mass: The mass of an object depends on its volume. Source The spider on the right is 3x the size of the small spider, but it weighs 27x as much. The weight of an object increases as the cube of the scale factor (3³ = 27). Mass increases faster than strength. Source

So, if an ant were human-sized it wouldn't be able to lift 100x its bodyweight anymore. Or going the other way, playing "Honey I Shrunk the Kids": ant-sized humans would be as strong as ants.
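To make the square-cube argument concrete, here is a minimal sketch. The 300x length factor is a rough assumption for scaling an ant (~6 mm) up to human height (~1.8 m), not a precise figure:

    # Square-cube law: scaling linear size by k multiplies strength
    # (muscle cross-section) by k**2 but body weight (volume) by k**3,
    # so the load-to-bodyweight ratio shrinks by a factor of k.
    def scaled_ratio(load_to_weight, k):
        return load_to_weight * k**2 / k**3  # == load_to_weight / k

    print(scaled_ratio(100, 300))    # ant scaled to human size: ~0.33x bodyweight
    print(scaled_ratio(1, 1 / 300))  # human scaled to ant size: ~300x bodyweight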
" Honey I Shrunk the Kids ": Source ant size humans would be as strong as ants. Sources: Scale Factors Why the little guys can do all the pushups How can ants carry so much weight in proportion to their size? True / False - Ants can lift huge weights Why can ants carry items much heavier than themselves? Ant Power | {
"source": [
"https://skeptics.stackexchange.com/questions/3469",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/486/"
]
} |
3,506 | My friend told me that he doesn't bother voting any more in elections, because "it's not like one vote ever made a difference." I will vote when I can and I think it is important, and although I share his sentiment about politicians... it got me thinking. Has there ever been any major election (around 1 million votes or more) where a single vote has decided the result? e.g. Candidate A got 1,000,000 votes and Candidate B got 1,000,001 votes. If the voter for candidate B did not vote it would be a tie and there might be a run-off election or some other system used to determine the winner and thus B may have lost if that one vote was not cast. | Answer: Yes! Although they weren't major. Örebro (Sweden) has around 100,000 inhabitants. In local municipality elections in Sweden, there have been cases of one vote making a difference in determining which party gets a seat. In the election of 2010, this single vote difference in determining the last seat of Örebro municipality actually meant that the socialist block got the majority there. Of course, every vote counts. And every vote makes a difference, so it wasn't one vote that made a difference it was all the votes that made a difference. Every single one of the votes for the socialist block gave the victory to that block, because without just one of them, it would have been a lottery. (Literally, they would have had a tombola draw.) Ref: En röst avgjorde valet i Örebro , SvD ( English translation ). And there are other cases of this in Sweden. In a referendum in 1971 there was only one vote's difference. En enda röst avgjorde när Gamleby bestämde sig , vt.se ( English translation ) | {
"source": [
"https://skeptics.stackexchange.com/questions/3506",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1626/"
]
} |
3,515 | Or, at least, has a questionable business model? Kiva is a website/organization that allows private persons to give micro-loans out to (very) small businesses in developing countries. Their idea seems rather nice to me, however there is quite a sprawling discussion on this forum at scam.com where people argue back and forth that Kiva is a scam because while you loan your money out interest free, the microlenders charge relatively high interest rates. The discussion becomes quite vitriolic and certainly doesn't meet the standards of skeptical rigor, so I thought I'd ask here: Is Kiva a scam, or does it operate in a deceiving way? | It's not a scam unless they are blatantly lying about something. Kiva itself doesn't charge an interest rate; instead their "field partners" do, which seem to be independent organizations that Kiva is partnering with. Here's information directly from their website : The Field Partner collects repayments from Kiva entrepreneurs as well as any interest due and lets Kiva know if a repayment was not made as scheduled. Interest rates are set by the Field Partner, and that interest is used to cover the Field Partner's operating costs. Kiva doesn't charge interest to its Field Partners and does not provide interest to lenders. Kiva also gives Field Partners the option to cover currency losses. They promise not to have field partners that charge an absurdly high interest rate: Our Field Partners are free to charge interest, but Kiva will not partner with an organization that charges exorbitant interest rates. We also require Field Partners to fully disclose their interest rates. I don't know what would constitute an exorbitant interest rate in this case, but I'm going to assume they are keeping that promise as they also have open books by providing a list of all their partners with the interest rates charged. If you click the various partners you will get to see statistics for how much has been invested through that partner, how good they are at collecting money and so on. Kiva also outlines exactly how high the interest rate is with the partner you are viewing, how it compares to other partners in the same country and how it compares to the average across all Kiva partners. Here's one example:

                                       This field partner   Median for MFI       All Kiva
                                                            Peers in Country     Partners
    Average Interest Rate and Fees          12.50%              16.17%            37.03%
    Borrowers Pay (Portfolio Yield)

If anything (although I'm not exactly familiar with aid organizations or micro-finance) Kiva seems to be doing a really good job at being transparent by providing all of this information so readily accessible on their website. It is, as far as I'm concerned, the donor's fault if they give money to an organization that uses it in ways they do not approve of, when the donor hasn't done at least basic research first. Unless there's strong evidence to suggest Kiva is actively misleading people, and/or exploiting poor people for the purpose of economic gain, I don't see how they could be called a scam. | {
"source": [
"https://skeptics.stackexchange.com/questions/3515",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1183/"
]
} |
3,529 | Moses is, arguably, the most significant prophet in Judaism and is credited with leading the enslaved Jewish population out of Egypt and into the promised land. Aside from the Torah, are there any verifiable records or concrete evidence that there was an actual Moses? Is he mentioned at all by the Egyptians during the exodus, or cited by other cultures or religions? | There is no direct evidence, outside of the Torah and the literary traditions which followed, that Moses ever existed. Whether he was made up out of whole-cloth, or whether there is some historical basis behind the legend, is impossible to say. The best you can do is consider that: Extensive archeological surveys
throughout the Sinai region seem to have thoroughly discredited the possibility that any population movement as massive as the Exodus described in the Torah ever occurred. (See xiaohouzi79's answer for references)

- Some details of Moses' life seem to have been lifted from earlier legends; specifically, the idea that his mother placed him in a basket and floated him down a river is reminiscent of a legend involving Sargon of Akkad. See Pritchard, J. "Ancient Near Eastern Texts Relating to the Old Testament", Page 119. Specifically, in the legend of Sargon it is written that "my mother, the high priestess, bore me in secret. She set me in a basket of rushes, with bitumen she sealed my lid. She cast me into the river..." Compare this with Exodus 2:3 (NIV): "But when she could hide him no longer, she got a papyrus basket for him and coated it with tar and pitch. Then she placed the child in it and put it among the reeds along the bank of the Nile." Note, however, that it is not necessarily the case that the Hebrews lifted the basket motif directly from this Akkadian legend, as the earliest known copy of the Akkadian legend dates to after the time the relevant passage in Exodus was probably written (8th century B.C.E). However, it is likely the legend about Sargon comes from earlier, Babylonian sources.

- While there is no direct evidence that the Hebrews were ever enslaved by the Egyptians, there is evidence that Semitic slaves were kept in Egypt; however the texts which prove this date to 600 years before the generally accepted date of the Exodus (~1200 B.C.E). See "Asiatics in Egyptian Household Service" from Pritchard, J. "Ancient Near Eastern Texts Relating to the Old Testament", Page 553. This document records the names of various slaves in service in Egyptian households, including one Menahem, which was later a common Hebrew name, as well as the name of a Hebrew king. Another slave name is "Sephra", which is etymologically similar to the Hebrew name "Sapphira". This doesn't prove that the Hebrews were enslaved as described in Exodus, however it does demonstrate that it wasn't anything out of the ordinary for Semitic peoples to serve as slaves in Egypt (especially before the Hyksos period). It is therefore possible that the Exodus story has some kernel of truth to it, even if it has been exaggerated beyond recognition.

- Of particular interest is Papyrus Anastasi V (British Museum 10244), dating to around the time of the Exodus (13th century B.C.E), which records a correspondence regarding the pursuit of two runaway slaves. See Pritchard, J. "Ancient Near Eastern Texts Relating to the Old Testament", Page 259. It is notable because the route taken by these slaves took them past the watchtower at Migdol, which is the same route mentioned in Exodus, where Moses led the Israelites before stopping in front of the sea. (See Exodus 14:2-3)

- Finally, linguists have speculated that the name "Moses" is etymologically connected with the Egyptian name Rameses. "Ra-Moses" is a legitimate name for an Egyptian living at the relevant time period. (I can't find a great reference for this one, but see http://www.time.com/time/magazine/article/0,9171,989815-3,00.html )

None of this comes even close to giving us direct evidence that the Exodus occurred or that Moses himself even existed. However, it does demonstrate that the Exodus account may be based on one or more (much more mundane) historical incidents, which are now permanently intertwined with later mythological embellishments. | {
"source": [
"https://skeptics.stackexchange.com/questions/3529",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/64/"
]
} |
3,558 | It's a common statement that passengers of a car are protected from lightning while inside because the car has rubber tires (which are insulators). An alternative theory often put forward is that the vehicle is made of a metal conductor. The question is, then: Are passengers protected from lightning while inside a car? If so, is this protection from: a. The rubber tires; or b. the metal shell of the car? Thanks for reading. | From the National Oceanic and Atmospheric Administration - TOP 10 Myths of Lightning Safety: Lightning laughs at two inches of rubber! Most cars are reasonably safe from lightning. But it's the metal roof and metal sides that protect you, not the rubber tires. Thus convertibles, motorcycles, bicycles, open shelled outdoor recreational vehicles, and cars with plastic or fiberglass shells offer no lightning protection. But closed cockpits with metal roof and sides are safer than going outside. And don't even ask about sneakers!

From NASA - Ask an Expert: If the car is metallic -- not a convertible! -- then a person is shielded by the metal of the car and lightning would be safely conducted around the people inside. This has nothing to do with the rubber tires! This is known in physics as a "Faraday cage".

From FEMA - Thunderstorms & Lightning: ... rubber-soled shoes and rubber tires provide NO protection from lightning. However, the steel frame of a hard-topped vehicle provides increased protection if you are not touching metal.

From Environment Canada - Lightning facts and fiction: Rubber-soled shoes and rubber tires provide no protection from lightning. The lightning strike between the cloud and the ground has potentially traveled thousands of meters through thin air, therefore rubber soled footwear or tires are inconsequential. However, the metal shell of a car provides a pathway for the lightning strike to flow around the vehicle provided the car has a hardtop metal roof (not a convertible). Although such vehicles do not offer you absolute protection from lightning, you and others are much safer inside a car with your hands on your lap, than outside.

From Weather Imagery - Do Rubber Car Tires Protect Me From Lightning?: The truth is, the rubber tires don't deter lightning in the least bit. By the time a lightning bolt reaches your car, it has been traveling for miles and miles through the air which is many orders of magnitude more resistant than a few inches of rubber.

From Lightning Safety - Vehicles and Lightning: Rubber tires provide zero safety from lightning. After all, lightning has traveled for miles through the sky: four or five inches of rubber is no insulation whatsoever.

From POPSCI - An Electric Aviation Experience: ... the aluminum hull of an aircraft is highly conductive ... it forms a Faraday cage. For the same reason, you don't get electrocuted when lightning strikes your car (provided your car is made of metal and not fiberglass, you don't have a cloth convertible roof, and you're not touching the outside surface). It's a common misconception that the insulating rubber tires protect you. Not true. It's the Faraday cage.

Here is a video of a plane getting hit by lightning in mid-air. Source None of the 500 people on board the Emirates Airlines Airbus A380 were injured as the plane flew through a storm as it approached London's Heathrow Airport on April 23 2011. A United Emirates spokesperson told the newspaper lightning strikes are not uncommon and that every plane in its fleet is designed and certified to withstand a lightning strike.

Faraday Cage: ... a closed metal surface, no matter what its shape, screens out external sources of electric field. Source

Here is a video of MythBuster Adam Savage demonstrating the Faraday Cage. Source | {
"source": [
"https://skeptics.stackexchange.com/questions/3558",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1792/"
]
} |
3,562 | There seems to be little doubt in public opinion that second hand smoke is dangerous, and I can see why. But many smokers have claimed to me that it is not dangerous at all and it is all media hype. What is the scientific consensus? I have heard that second hand smoking is almost equivalent to smoking a cigarette directly. | The Surgeon General of the United States issued a report in 2006 about second-hand smoke. The six major conclusions were:

1. Many millions of Americans, both children and adults, are still exposed to secondhand smoke in their homes and workplaces despite substantial progress in tobacco control.
2. Secondhand smoke exposure causes disease and premature death in children and adults who do not smoke.
3. Children exposed to secondhand smoke are at an increased risk for sudden infant death syndrome (SIDS), acute respiratory infections, ear problems, and more severe asthma. Smoking by parents causes respiratory symptoms and slows lung growth in their children.
4. Exposure of adults to secondhand smoke has immediate adverse effects on the cardiovascular system and causes coronary heart disease and lung cancer.
5. The scientific evidence indicates that there is no risk-free level of exposure to secondhand smoke.
6. Eliminating smoking in indoor spaces fully protects nonsmokers from exposure to secondhand smoke. Separating smokers from nonsmokers, cleaning the air, and ventilating buildings cannot eliminate exposures of nonsmokers to secondhand smoke.

Similar information can be obtained from the Centers for Disease Control (also see this Morbidity and Mortality Weekly Report ), the Institute of Medicine , the journal Environmental Health Perspectives , the National Cancer Institute (part of the National Institutes of Health), and the Mayo Clinic . A full list of research can be found at the MedLine , and it includes research from the American Heart Association, North Carolina Medical Journal , Current Opinions in Pulmonary Medicine , British Medical Journal , and Neurotoxicology and Teratology . (This is all in the first 15 articles.) | {
"source": [
"https://skeptics.stackexchange.com/questions/3562",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1626/"
]
} |
3,568 | Here's where it gets interesting; I don't mean erase them from an object, I mean from your fingers directly. Some people claim there is a way to do it, some people claim it's impossible. I have a friend (no, really, she exists, I swear) who is attempting to write a novel, and for once, someone turned to a skeptic in an attempt to get the science right. She's been apparently scouring sites like this in an attempt to find a plausible method, presumably because there aren't many places to go for hard facts on this topic. Of course, there's very little (I couldn't find any) scientific study published on this topic, and many proposed solutions to the problem seem to be quite painful, bloody, and ultimately futile. It's also probably safe to assume the people offering advice via message boards on this particular topic are at best speculating, haven't actually had any practical or relevent experience, and are most likely basing claims on exaggerated or fictional accounts rather than demonstrable evidence.(Interestingly, I kept running across stories involving John Dillinger attempting this which may or may not be true). However, without the data, there's obviously no way for me to say for sure. Even though none of them sound like advice I would be willing to take, nor do they come from credible sources, some suggestions so far have been... Cut them off - (not your fingers, just your fingerprints) Apparently this does
not work that well, is obviously
painful and could possibly make
fingerprints more distinctive
according to some. Using a corrosive substance - Does
not seem to yield acceptable results,
as much like cutting them off, they
will regrow. Burning them off - This has received
some questionable support, but many
seem to think they will grow back. Rubbing them off - some claim this
smooths them out, and is certainly not as gory but they quickly
return to normal. Surgical removal - I ran across some
unverifiable claims about this
method. And at least one strange method
involving a pineapple which only
attempts to alter them, not remove
them entirely. I keep thinking that there's something I'm missing here. Have any of those methods ever been
proven to be successful? Has anyone ever successfully had
his/her fingerprints erased
successfully by any method? Is there a scientifically valid way
to do it, even if it's extremely
improbable? Or is this all just spy-movie stuff with no hard science behind it? | Source CNN article from 2010 : Fingerprint mutilation on the rise, but it's practically pointless According to Stephen G. Fischer Jr., a
spokesman for the FBI's Criminal
Justice Information Services , methods of
fingerprint mutilation can vary
depending on the circumstance and the
criminal. " It can go from people chewing on
fingers, using a knife, burning acid
or cigarettes. Or if
you have a career criminal or someone
who is a little more affluent, they
might go to a surgeon. " While no hard data on fingerprint
mutilations exist, Fischer says the FBI's forensics examiners have noticed
the uptick over the last few years ,
though the reason is unclear. But advancements in forensics
technology have made fingerprint
mutilation increasingly difficult to
pull off , as even severely damaged
fingers will provide investigators
with clues. " We can identify prints that we
couldn't 10 or 15 years ago. Basically, they're going
through all this pain and expense for
no reason ." From Scientific American : A Singaporean cancer patient was
detained by U.S. customs because his
cancer treatment had made his
fingerprints disappear . As it turns out, the drug, capecitabine (brand name, Xeloda ) had
given him a moderate case of something
known as hand–foot syndrome (aka chemotherapy-induced acral erythema ). What are some other ways that fingerprints can disappear? bricklayers — who wear down ridges on their prints handling heavy, rough materials frequently people who work with lime [calcium oxide] - because it's really basic and dissolves the top layers of the skin. The fingerprints tend to grow back over time. surprisingly, secretaries - because they deal with paper all day. The constant handling of paper tends to wear down the ridge detail. also, the elasticity of skin decreases with age , so a lot of senior citizens have prints that are difficult to capture. The ridges get thicker; the height between the top of the ridge and the bottom of the furrow gets narrow, so there's less prominence. So if there's any pressure at all [on the scanner], the print just tends to smear. But Forensics expert Edward Richards notes: "... your skin replaces at a fairly good rate, so unless you've done permanent damage to the tissue, it will regenerate ." From National Geographic - Born Without Fingerprints : Two rare and related diseases leave
their sufferers with no fingerprints : Naegeli syndrome, and dermatopathia pigmentosa reticularis (DPR). One case of DPR is flight attendant Cheryl Maynard . | {
"source": [
"https://skeptics.stackexchange.com/questions/3568",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/486/"
]
} |
3,579 | This is a common argument against the legalization of prostitution, and I'm curious if there's any truth to the claim that human trafficking actually increases when prostitution is made legal. | The claim makes absolutely no sense. Source: Economics 101. If you decrease the cost of being a prostitute (e.g. no more danger of arrest), you increase the supply of prostitutes. The reason for trafficking is that - given the payoff - not enough people want the job voluntarily, so you need to bring in involuntary (slave) labor. Increasing the supply due to legalization removes that need. Please note that legalizing prostitution STILL keeps both trafficking and child exploitation laws in place. Moreover, it looks like there's absolutely no proof (despite major allegations by interested parties) that the center of legalized prostitution in the USA (e.g. Nevada) has a big trafficking problem. From http://www.lasvegassun.com/news/2007/jan/29/do-we-have-a-human-trafficking-problem/ : Terri Miller, ATLAS's civilian director and long one of the top Nevada activists against the sexual exploitation of women and children, and her boss, Metro Capt. Terry Lesney, say the need for the group is clear: There is a "huge" and growing sex-oriented trafficking problem in Las Vegas. Yet they quickly add that no statistics have ever been gathered and law enforcers never before have made it a top priority - so the scope of the problem still needs to be determined. ... The task force's first task was to determine whether, in fact, there was a human-trafficking problem, Lesney says. But because of the lack of hard data, she says, "we were struggling to quantify what we're dealing with." ... The largest human trafficking bust in the area in recent years was Operation Jade Blade. A national sting in 2000 netted five Las Vegas Valley residents, who were arrested for trafficking Asian prostitutes into the city. The women had been smuggled into the country for a fee, then were forced to pay back their debt by working as prostitutes. Editorial note here - this is the LARGEST bust - and it's not, strictly speaking, about trafficking. The women came to the USA voluntarily. The crime was in forcing them to pay, and it has nothing to do with prostitution - the same exact problem exists/existed with Asians being smuggled all over the country and forced to work off their fees in sweatshops, often textile related. So, not only are the provable trafficking numbers WAY low (the article further details several - as in, less than 10 in several years) - there's absolutely ZERO proof on anyone's part that it's something specific to Las Vegas or has any correlation - never mind causation - with legalized prostitution. Remember, these are the people who are in CHARGE of fixing the supposed problem. | {
"source": [
"https://skeptics.stackexchange.com/questions/3579",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/631/"
]
} |
3,590 | I've seen many studies comparing proposed cures to placebo, but what about any studies actually investigating the effectiveness of the placebo effect itself? So, has there been any study as to the effectiveness of placebo in ameliorating certain afflictions' symptoms, compared to no treatment at all? In which illnesses has it been determined to be most potent? | {
"source": [
"https://skeptics.stackexchange.com/questions/3590",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2836/"
]
} |
3,614 | Here's what I'm talking about . There are a lot of opinions about whether it's fake or not; I really don't know! What are the chances that one (even a professional player, let alone common mortals) can somehow feel the ball (did he hear it or what?), manage in 0.5 seconds to extend the arm and catch, with a bare hand, a ball approaching at huge speed (really, how fast is it?)? | This article does a very good job of piling on the evidence for the video being fake : http://www.politifact.com/florida/statements/2011/may/24/evan-longoria/rays-3b-evan-longorias-spectacular-barehanded-catc/ The evidence is primarily circumstantial; so far no one involved has explicitly admitted it was fake. In the video there are four Gillette logos visible -- two behind home plate and two on a roof facade over the third-base bleachers. But those logos aren't part of McKechnie Field in real life, Trevor Gooby, the Pirates' director of Florida operations, told a reporter for Patch.com. The logos were added digitally and included in the final video that was posted on YouTube. Already we know that some aspect of the video was altered. It would be a fallacy of generalization to then say that the catch itself was therefore also altered, but the evidence supports that theory. The video was uploaded by a person who lists their company as Gillette, and it was recorded following a 6-hour recording session for Gillette commercials featuring Evan, suggesting that it too is part of their marketing campaign. If you're still looking for evidence that this was not meant to be part of an actual newscast, there's the reporter and the video graphic identifying Longoria. There are no television station symbols or letters on the video, and the reporter is holding a microphone without a "flag" that identifies the station where the reporter works. Perhaps even stranger, we could not find the video posted on any news site. (Surely, a TV station would love to claim the video as theirs.) The circumstances of the batting practice also seemed contrived, according to baseball reporter Topkin of the Times. Topkin noted several things that aren't typical during a batting practice session. There is no cage surrounding the batter to catch foul balls or stop pitches that aren't hit. There's also no screen protecting the batting practice pitcher. There are no coaches in the video hitting ground balls and no other fielders on the baseball diamond to track down any hits. Furthermore, the batter himself acts oblivious to the entire event. He never yells a warning when the ball heads straight for the interview and crew, and after the catch is made, he resumes a normal batting stance and ignores the congregation in spite of the cheers from the other supposed players on the field. On the physics side, while the initial hit and resulting trajectory is normal, a frame-by-frame review of the video shows abnormal ball movement right when the catch is made. The ball that's caught clearly comes from a different location and angle than the ball hit in their direction. You probably saw this in the original video, but this was slowed down to 1FPS to make it more clear. The ball clearly doesn't move for multiple frames, then changes direction, [that's] clear video tampering.
Finally, the words of Gillette spokesperson Norton addressing questions about the video were a cryptic refusal to claim the video was real or admit it was fake, which is very much in the spirit of viral marketing campaigns: The video was filmed while on location for a Gillette Fusion ProGlide commercial... We'll leave the 'is it real?' debate up to the viewers. So as I said, primarily circumstantial evidence along with a fairly solid theory on the frames surrounding the catch being doctored, as the Gillette logos in the park were. | {
"source": [
"https://skeptics.stackexchange.com/questions/3614",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/3080/"
]
} |
3,635 | We all know that volcanoes emit a tremendous amount of CO₂ when they erupt. I've often heard people argue that the amount of CO₂ an erupting volcano emits dwarfs the amount of CO₂ that humans emit in an entire year. Is this true? Update: I don't mean human biological emissions (alone). I mean all natural and human-produced CO₂ emissions. | No, that's not true. In fact, humans emit 100 times more CO₂ than volcanoes [ source ], so it's the exact opposite. For example, in 2008 humans emitted about 36 billion metric tons of CO₂. In that same year, the highest (!) estimates for all volcanoes combined (submarine volcanoes included) were just 270 million metric tons ( Gerlach, 2010 ). The claim to the contrary, for instance voiced here by Ian Plimer on ABC : Over the past 250 years, humans have added just one part of CO2 in 10,000 to the atmosphere. One volcanic cough can do this in a day. is an artful lie. This is exposed wonderfully in a comment to that article: Our emissions since [before the Industrial Revolution] have raised the level [from 280 ppm] to around 390 ppm, an increase in the CO₂ concentration of around 40%!! The increase of 110 ppm is 1.1 per 10,000 - roughly Ian's magic number. […] 1 in 10,000 of CO₂ [of 390 ppm] would add 0.039 ppm which a [volcanic] 'cough' could easily do. So Ian Plimer arrives at his assertion by comparing two different numbers: the overall increase of CO₂ in the atmosphere (110 ppm), and a relative percentage of the atmospheric concentration (0.039 ppm), and alleges that these numbers are the same. That's like saying that 10$ and 10% of 10$ (= 1$) are identical. | {
"source": [
"https://skeptics.stackexchange.com/questions/3635",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/3091/"
]
} |
3,665 | In TV shows or films, it seems that throwing any mains-powered electrical device (such as a television or radio) in a bathtub can kill the person in said bathtub. Can this happen? It appears to me that power should shut down almost instantly. | Very simply, it could, but with modern electrical safety systems, it's unlikely . A modern television is not earthed, as the casing is usually plastic, although some bigger sets have grounds. The reason for this is that the class Y capacitors used to filter EMI require an earth conductor . Bigger sets emit more EMI because they use more power, and thus can require an earth connector. In that case, earth must be connected to exposed metal parts, if any. If they have a direct ground, water is likely to short live to the earthed case and trip the RCD/GFCI within 30 ms , with a current as low as 30mA. A general rule is that you can feel 1mA, 10mA is painful and 100mA can stop the heart . If you can't get out, this leads to death... and it's not nice. If not, then it could be fatal if the bathtub is not correctly earthed, which, according to regulations, it must be . If the bathtub is earthed, the water will allow enough current to flow from live to earth to trip the RCD/GFCI. It would hurt, but you'd usually be able to get out. Modern household RCD/GFCI devices will trip at the current required to cause fibrillation - often at much lower currents, to make it safer. Note that RCD/GFCI devices don't measure earth current but instead the imbalance between live and neutral, so even without a proper mains earth for the bath they will work. All that is important is that the current has some way to avoid going from live to neutral. If that is through the plumbing, it could save your life, though any assailant throwing TVs at you is likely to have more devious plans in mind. | {
"source": [
"https://skeptics.stackexchange.com/questions/3665",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/3112/"
]
} |
3,692 | Does wearing a helmet while bicycling make an individual cyclist safer? That is, in the case of an accident, is one safer wearing a helmet? If yes, are the odds of an accident lower, higher or identical if one is wearing a helmet? If the odds of an accident are higher when wearing a helmet, does the added safety of a helmet in the case of an accident make the practice worthwhile? To show that this claim exists in the wild: Mikael Colville-Andersen casts doubt in a TEDxCopenhagen talk, saying they make things worse: "To my surprise, it didn't take me very long to figure out that the bicycle helmet doesn't have a very impressive track safety record, scientifically. The scientific community has been completely split for years on the subject 50/50 down the middle. If you look at it this way, if the bicycle helmet was a vaccine or a medicine there is no way it would be anywhere near getting approved by a ministry of health. There is simply not enough proof. [...] There are actually scientific studies that show your risk of brain injury is higher when you wear a helmet. You have a 14% greater chance of getting into an accident with a helmet on." While The League of American Bicyclists contradicts that in their Helmet use when Cycling : Helmets are safety devices which prevent or mitigate head injuries in a crash or fall, not substitutes for education which is aimed at the prevention of crashes and falls. | Take this study: Abstract Objectives. —To examine the protective effectiveness of bicycle helmets in 4 different age groups of bicyclists, in crashes involving motor vehicles, and by helmet type and certification standards. Research Design. —Prospective case-control study Setting. —Emergency departments (EDs) in 7 Seattle, Wash, area hospitals between March 1, 1992, and August 31, 1994. Participants. —Case subjects were all bicyclists treated in EDs for head injuries, all who were hospitalized, and all who died at the scene. Control subjects were bicyclists treated for nonhead injuries. Main Results. —There were 3390 injured bicyclists in the study; 29% of cases and 56% of controls were helmeted. Risk of head injury in helmeted vs unhelmeted cyclists adjusted for age and motor vehicle involvement indicate a protective effect of 69% to 74% for helmets for 3 different categories of head injury: any head injury (odds ratio [OR], 0.31; 95% confidence interval [CI], 0.26-0.37), brain injury (OR, 0.35; 95% CI, 0.25-0.48), or severe brain injury (OR, 0.26; 95% CI, 0.14-0.48). Adjusted ORs for each of 4 age groups (<6 y, 6-12 y, 13-19 y, and ≥20 years) indicate similar levels of helmet protection by age (OR range, 0.27-0.40). Helmets were equally effective in crashes involving motor vehicles (OR, 0. 95% CI, 0.20-0.48) and those not involving motor vehicles (OR, 0.32; 95% CI, 0.20-0.39). There was no effect modification by age or motor vehicle involvement (P=.7 and P=.3). No significant differences were found for the protective effect of hard-shell, thin-shell, or no-shell helmets (P=.5). Conclusions. —Bicycle helmets, regardless of type, provide substantial protection against head injuries for cyclists of all ages involved in crashes, including crashes involving motor vehicles. (My emphasis) The abstract can be found here: http://jama.ama-assn.org/content/276/24/1968.short In addition, this should be a basic physics question: The severity of head injuries clearly depends on the force of the impact. If that force is reduced in any way, this will mean less severe injuries.
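To make the quoted percentages concrete, here is a minimal Python sketch of how the abstract's "protective effect" figures relate to the reported odds ratios. The mapping - a protective effect of roughly (1 - OR) x 100% - is my reading of the numbers, not something the abstract states explicitly:

    # Odds ratios quoted in the abstract above (helmeted vs unhelmeted cyclists).
    odds_ratios = {
        "any head injury": 0.31,
        "brain injury": 0.35,
        "severe brain injury": 0.26,
    }
    for injury, odds_ratio in odds_ratios.items():
        # An OR below 1 means helmeted riders had lower odds of that injury;
        # 1 - OR expresses it as an approximate percentage reduction.
        protection = (1 - odds_ratio) * 100
        print(f"{injury}: OR {odds_ratio:.2f} -> ~{protection:.0f}% reduction")

Running this prints roughly 69%, 65% and 74%, which is in the same ballpark as the 69% to 74% protective-effect range quoted in the abstract.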
In a way, this question is similar to the question of whether there are studies regarding the usefulness of parachutes... EDIT There was the added question of increased odds of accident when wearing a helmet. This is the theory of risk homeostasis . According to that theory, each individual has a personal target level for the risk they take. If safety measures reduce the risk, behavior is adapted to bring it back to that level again. A discussion of this effect regarding bicycle helmets is found here: http://injuryprevention.bmj.com/content/7/2/89.full However, there is apparently no consensus and no study regarding the specific question of risk homeostasis for bicycle helmets. Subsequent studies proved to be inconclusive as well, and there are many more effects to take into account. | {
"source": [
"https://skeptics.stackexchange.com/questions/3692",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/408/"
]
} |
3,696 | A commonly taught adage is to never accept a ride from a stranger. This applies mostly to children and goes hand in hand with warnings about accepting gifts from strangers. Now, having grown older, I cannot think of anyone amongst my friends, associates or acquaintances who would ever willingly harm a child by giving them a lift and stealing them away. Is riding with strangers truly unsafe for children? To help define the question: How many child kidnappings (or other crimes) began with the child accepting a ride from a stranger? How does this compare to other kidnappings or other sources of danger for children? The bonus question here involves hitchhiking, which is essentially the same thing but over long distances and usually by people much older. | Here is a link that puts child kidnappings and disappearances in context. The figures are for Canada. The first thing to notice is that of the 60,000 or so missing children in a typical year, there are about 100 times as many runaways as there are kidnappings. Of those kidnappings, 80-90% are parental abductions. The number of kidnappings by someone other than a parent (not necessarily a stranger) is in the 30-60 range per year. Essentially that means your likelihood of having a child kidnapped by a non-parent is about the same as winning a million dollars on the lottery. Here is an exceptionally detailed study of kidnappings in Canada. | {
"source": [
"https://skeptics.stackexchange.com/questions/3696",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2433/"
]
} |
3,707 | Sea salt has been getting much more popular lately due to a perception that it tastes better than regular salt. Since it has negligible amounts of iodine, and tends to replace iodized salt in our diet, I understand that some iodine-deficiency-related diseases are on the rise in the U.S. Other than that, Wikipedia tells me that the health consequences of ingesting sea salt or regular salt are the same. So is there really a difference in the way they taste? Have any scientific taste-tests been done to see if people could tell the difference in flavor when sea salt is used in or on their food? | Cooks Illustrated did a non-peer-reviewed blind taste test back in 2002 (available here , but it's behind a paywall ). They compared nine different salts, including iodized table salt, non-iodized table salt, non-iodized kosher salt (of different brands and coarsenesses), and a bunch of different sea salts. They performed five different tests: Tests were divided into three categories: salt used at the table (we sprinkled each sample on roast beef), salt used in baking (we used a plain biscuit recipe), and salt dissolved in liquids (we tested each salt in spring water, chicken stock, and pasta cooking water). The tests did uncover "profound differences" in the types of salt used, especially in the beef tenderloin test, with large flaked sea salt winning by a large margin. Texture seemed to be important, as table salt (both iodized and non-iodized) won in the baking category due to their small crystals that evenly distribute in batter. None of the tasters could detect the difference between any of the salts when dissolved in liquids. | {
"source": [
"https://skeptics.stackexchange.com/questions/3707",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1314/"
]
} |
3,748 | I've read a number of articles suggesting that the USA has a Mars colony, including (if I recall correctly, but I can't find the link now) one where an official in China stated that they believed the USA had a secret military base on Mars, and another that indicates that the USA attempted to recruit the great-granddaughter of President Eisenhower for a Mars colony project. While the latter seems likely to be malarkey, it seems to be one of the more common references to a USA Mars colony project. I'm curious about whether anyone's put any serious thought into whether the USA could secretly create a Mars colony. Is there any evidence that the USA either has or plans a secret colony on Mars? Is it logistically possible that the USA could start a Mars colony and keep it secret? Edit 2020 It's worth noting the claims of Haim Eshed, reported in numerous news articles including NBC's "Former Israeli space security chief says extraterrestrials exist, and Trump knows about it" : A former Israeli space security chief has sent eyebrows shooting heavenward by saying that earthlings have been in contact with extraterrestrials from a "galactic federation." "The Unidentified Flying Objects have asked not to publish that they are here, humanity is not ready yet," Haim Eshed, former head of Israel's Defense Ministry's space directorate, told Israel's Yediot Aharonot newspaper. Eshed said cooperation agreements had been signed between species, including an "underground base in the depths of Mars" where there are American astronauts and alien representatives. | Of course there is a secret colony on Mars. The same people who kept the secret that we faked the moon landing are also keeping the secret that we went to Mars and established a colony... Okay, in all seriousness, please select a launch that was supposed to have sent anything off to Mars that included people or the equipment to support those people. Here is a list of all past NASA launches for you to choose from . I'm sure that whatever mission you choose, I will be able to knock a hole through any conspiracy theory so thoroughly that even the 9/11 troofers will think anyone who believes this is nuts. The logistics of launching a manned mission to Mars would be so large that it would be impossible to hide it. Conservative estimates place the price-tag at $1 TRILLION , and hiding that sort of spending would be nigh on impossible (and keep in mind that most estimates of any government program are usually way under the real cost). There are many ideas for a Mars mission , and if anyone got there, it would be a coup of such historic proportions that no one would want to keep it secret. Much like with the moon landing hoax insanity, if we hadn't got there, the Russians would have been all over it. If the US had managed to get to Mars, it would be front-page news all over the world, and would be used in every possible manner to showcase the US in a positive light. As Oddthinking said, "extraordinary claims requires extraordinary evidence", and I have seen none! | {
"source": [
"https://skeptics.stackexchange.com/questions/3748",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1792/"
]
} |
3,805 | There are many claims made about the source of fluoride in our water, some examples are: Intentional dosing of municipal water supplies with fluoride waste products from industry Increased cancer risk from using phosphate waste to fluoridate drinking water Most of the Queenslanders [...]are drinking water fluoridated with imported Chinese industry waste products Is the fluoride added to the water supply produced as an industrial waste product? And does it, due to this origin, contain any harmful components in significant amounts? | As a former resident of Queensland, I looked with interest at the third link provided in the question. The claims are extreme, so I believe it is valuable to address their claims directly, line by line. WHAT DO THEY ACTUALLY PUT IN THE WATER? The three Fluoride chemicals that could be added to Queensland water supplies for fluoridation are Hydrofluorosilicic Acid, Sodium Silicofluoride or Sodium Fluoride. I am going to take their word for that. Hydrofluorosilicic Acid and Sodium Silicofluoride are collectively known as the Silicofluorides and are the chemicals used most in other Australian states fluoridation schemes. I am going to take their word for that. The two Silicofluorides chemicals used, are waste products of Phosphate fertilizer manufacture. One of the fundamental concepts of Chemistry - one of the most important ideas that have advanced science - is that everything is made of atoms. It doesn't matter, chemically, where the atoms come from, they still react the same way. So, from a health perspective, that they are waste products of another process is irrelevant. [Reference: Year 8 high school science class.] From a political standpoint that may be relevant, but that's not being argued here. From an emotional standpoint, we associate "waste" with "bad". If we replaced the emotive term "waste" with "recycled", suddenly it sounds positive! They are industrial grade, not pharmaceutical grade products and can contain small residues of toxic heavy metals such as cadmium, mercury or lead. The introduction of the phrase "pharmaceutical grade" here is a red herring. Most items we ingest are not pharmaceutical grade. The sugar you put in your coffee isn't pharmaceutical grade, and nor is the coffee itself. Why should the water be? The standard here should be "food grade", and the water coming from the tap (certainly in first world countries) is rigorously monitored and controlled. (I'd include a reference here, but it is dependent on your local government, so I can't give a universal answer. I have examined the regulations for a number of states here in Australia, and there are a huge number of pollutants tested for, including heavy metals.) Once the chemicals coming in are dissolved to 1 ppm (see other answer), the "small residues" are going to be diluted even further, making the issue of industrial versus food grade inputs irrelevant. It is the output that matters. The two Silicofluoride compounds used DO NOT EVEN OCCUR IN NATURE, yet fluoridation promoters call fluoride "NATURAL". I will take their word for the fact that the compounds don't occur in nature. Not only is the "natural" argument irrelevant (as they later point out themselves), but the reagents used are irrelevant, as the fluoride is no longer attached to the rest of the compound once it is in solution. [Reference: Year 11 high school Chemistry class] I would, however, like citations for where fluoridation promoters call it natural.
Are they referring to the silicofluorides or to the idea of fluoride being dissolved in fresh water? No toxicology studies have ever been performed on the Silicofluoride used in water fluoridation schemes. The only toxicology studies ever done have been done on Pharmaceutical
grade Sodium Fluoride as is used in toothpaste. Toxicology studies have been done on the fluoride dissolved in the drinking water (see other answer), which is where it is relevant. Currently less than 5% of Queensland's population drinks fluoridated water. I'll take their word for it, but irrelevant to the argument (except to explain why there is a motivation to start fluoridating water.) Sodium Fluoride is a waste product of Aluminium smelting and is the fluoridation chemical used in Queensland in Dalby, Mareeba, Moranbah and Townsville/ Thuringowah. Again, the source of the chemical is irrelevant from a chemical/health perspective. Freedom of Information reveals that water supply of Bamaga is fluoridated with a Silicofluoride and that Sodium Fluoride used in other Queensland areas is imported from China. It would appear that most of the Queenslanders that are currently drinking fluoridated water are drinking water fluoridated with imported Chinese industry waste products, probably sourced as a waste product of the Chinese Aluminium smelting industry. Certainly the source country is irrelevant for health effects. It is only relevant to trigger emotive patriotic and political concerns. Similarly, it doesn't matter how the information was obtained - citing "Freedom of Information" strikes an emotive chord that the government may be trying to otherwise hide something. I would like to see a cite of the request and the resulting data, to ensure we aren't being exploited by people putting in FoI requests where a regular request (or even web search!) would get the same information. Water from rivers, creeks or dams does contain small amounts of natural fluoride. Levels of fluoride in SE Qld surface waters are usually only about 0.1 parts per million, or nine times less the amount of the Fluoride that Queensland Government plans on adding to Brisbane's water supply. Okay, I'll take their word for that. Fluoride occurs naturally in water when water flows through or over rocks and abrades rocks that contain Fluorspar, or Calcium Fluoride (Ca F2). Calcium fluoride is very insoluble. Water that contains natural Fluoride from abraded Fluorspar containing rocks also contains Calcium which can offer some protection from Fluoride. Fluoride binds with Calcium readily and Calcium is given as a treatment for Fluoride poisoning. Most of this sounds plausible, and I confirmed that Calcium Fluoride is very insoluble on Wikipedia , so no disagreements here. Note: they are straying awfully close to the "natural is good" fallacy that they themselves later attack. Calcium Fluoride (the natural form of Fluoride) is not permitted to be added to any Australian water supply. I would like a cite for that. I note that it has been approved for food by the EU . If there is such a restriction, is it just to avoid unnecessary mining? (5 Billion kg mined annually [Holleman, A. F.; Wiberg, E. "Inorganic Chemistry" Academic Press: San Diego, 2001. ISBN 0-12-352651-5., via Wikipedia ]) Or concerns that there is a cost of extracting the Fluoride with concentrated Sulphuric Acid ( Wikipedia ) before adding it to the water, thus defeating the purpose of using a "natural" source. Groundwater as in bore water or well water can contain very high levels of "natural" fluoride and in parts of China, India and the Rift Valley, natural Fluoride has led to devastating health effects such as crippling Skeletal Fluorosis for millions of people. Arsenic, Lead and Mercury are also "natural". Natural does not necessarily mean good or desirable.
I believe all of this to be true. If anyone proposed to set fluoride levels to the point they could trigger skeletal fluorosis , that would be terrible. Fortunately, I have seen no proposals to exceed the World Health Organization recommended maximum fluoride value at which fluorosis should be minimal [ref: Fawell J, Bailey K, Chilton J, Dahi E, Fewtrell L, Magara Y. Fluoride in Drinking-water [PDF]. World Health Organization; 2006. ISBN 92-4-156319-2. Guidelines and standards. p. 37–9. via Wikipedia ]. The Silicofluoride compounds used for water fluoridation are very acidic and addition to water often entails addition of other chemicals such as soda ash to neutralize the acidity to prevent corrosion of water reticulation equipment. Appendix one of the 1999 NHMRC Review of water fluoridation was a questionnaire for Councils which fluoridate and included a request for any evidence for Fluoride incompatibilities, such as enhanced corrosion or breakdown of gaskets or seals, in the water distribution network. Okay. Someone asked a question. And? The Queensland Government has said they would pay the setting up costs of fluoridation, but will not be paying for any recurrent and ongoing costs. Any Fluoride-caused corrosion problems in water treatment plants or water reticulation systems would be to the future cost of Councils and ratepayers. The Queensland Government is funded by tax-payers. The local council is funded by tax-payers. This isn't a health argument, it is an argument about which bucket of tax-payer money should be used within a political system, and therefore is not subject to scientific scrutiny. In conclusion: there are a number of emotive arguments here, but no references, and some half-truths. I would look elsewhere for evidence that fluoridation contains harmful components. It may be right that the fluoride is extracted from the output of other industrial processes, but that is both irrelevant (from a consumer health and safety perspective) and claimed without evidence here. | {
"source": [
"https://skeptics.stackexchange.com/questions/3805",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/374/"
]
} |
4,081 | I ran across something odd today... I have an iPhone app that lists facts, trivia, and other useless information (it's called Cool Facts, downloaded from iTunes). So far, nothing I've found on it has been terribly inaccurate. However, flipping through it today, I came across this: To say the very least, I'm skeptical. So the question is: Was the name "Wendy" actually created by J.M. Barrie for Peter Pan ? Or, if this is a myth, does anyone know where it originated? | From The Straight Dope : All kidding aside, J. M. Barrie did
not invent the name Wendy for his 1904
play Peter Pan, the Boy Who Wouldn't
Grow Up (the book form of the story,
Peter and Wendy, was published in
1911). But we have absolute proof that there
were earlier Wendys, thanks to the
just-released 1880 U.S. Census and the
1881 British Census (available here ). These documents show that the name
Wendy, while not common, was indeed
used in both the U.S. and Great
Britain throughout the 1800s. I had no
trouble finding twenty females with
the first name Wendy in the United
States, the earliest being Wendy Gram
of Ohio (born in 1828). Using the search above you can see the results for yourself. | {
"source": [
"https://skeptics.stackexchange.com/questions/4081",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/486/"
]
} |
4,107 | If you search through the internets to find reasons why the television show Firefly was cancelled, you often find disparaging remarks about Fox and the Executives behind the cancellation. These can range from "Oh how stupid" to "Ulterior motives were behind the cancellation." One of the more thorough examples (emphasis original): They wanted to kill this show. I believe that, as surely as I do that the sun rises in the east. Had they really been behind the series, and wanted it to "go" somewhere, they would have first of all given it a decent time-slot, one in which it would have had a chance to find an audience—the nine-o'clock (Eastern) slot on Sunday nights, vacated by that overwrought piece of dreck The X-Files , would have been perfect. It is—was— not an eight o'clock primetime "kiddie" show. It was a serious drama with a fantastic setting. And it was simply without question the best show of its type ever made for television. So why did Fox kill Firefly so deliberately? Did they want to punish creator Joss Whedon for his "unexpected" successes with Buffy the Vampire Slayer and Angel ? Demonstrate to him conclusively that it is not the few genuinely creative people in Hollywood who hold the real power in the industry, but the men and women who hold the purse strings? Typical excuses I have heard: Someone had a problem with strong female characters The powers that be just didn't like Joss Whedon The cultural themes were too "out there" Executive backstabbing sabotaged the show for the purposes of making someone else look bad But the question at heart is this: Did something or someone specific target Firefly for cancellation aside from the reasons that typically get shows cancelled? Or, more bluntly, did an executive actively sabotage the show (or Joss Whedon) in such a way that resulted in Firefly getting cancelled? | Gail Berman , who served as the Fox Entertainment president at the time, was the one who pulled the plug. She served as executive producer on "Buffy" and "Angel" . In her own words: " Canceling Firefly was as difficult as
anything I'd ever been involved in
because Joss and I had been creative
partners at one time. I
worked with him very closely on this
particular show and when it didn't
perform [in the ratings], having to
cancel it was very difficult ." [ Source ] From Whedon.info : Scifi author Keith R.A. DeCandido
recently noted that Fox canceled
"Firefly" for the same reason why any network
cancels any TV show : it was not
making enough money to justify its
existence. " Firefly was an extremely expensive
show to make. It was
over $2 million an episode, which is a
ridiculous amount of money. It needed
to draw in more viewers than it got in
order for them to make it back on the
advertising ." (Keith R.A. DeCandido wrote the novelization of the movie " Serenity ") | {
"source": [
"https://skeptics.stackexchange.com/questions/4107",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2433/"
]
} |
4,108 | I was recently sent an article and am trying to evaluate some of the claims. [1] The claim of interest here is this one: In light of new evidence that has emerged clearing Dr Wakefield of the allegations that he fabricated study data involving MMR vaccines and symptoms of autism, Dr Wakefield is now publicly demanding a retraction from the British Medical Journal and author Brian Deer. Documents just made public reveal that another medical research team which included a senior pathologist independently documented evidence of a possible MMR vaccine - autism link 14 months before Dr Wakefield's paper first appears in The Lancet -- based on several of the same children appearing in Dr Wakefield's study. Essentially, the Lancet retracted Wakefield's original 1998 paper after finding that he had altered data. [2] The British Medical Journal discussed this fraud in detail. [3] The article above claims that Wakefield didn't tamper with the diagnoses because he couldn't have -- due to a researcher named Walker-Smith having discussed 7 of the 12 children's diagnoses 14 mos. prior to the publishing of the 1998 Lancet paper. [4] Since these "new documents" have been uncovered, Wakefield has demanded a retraction. [5] My questions are: Even if this is true, does it change the implications of the BMJ allegations? In other words, the BMJ accusation is that of incorrectly reporting various facts. For example, the 1998 paper reports children having contracted autism-like symptoms days after vaccination, while follow up investigation by journalist Brian Deer found that in many cases, it was actually months before the onset of symptoms. Thus, while perhaps it wasn't fraud, am I correct that the core content is unchanged -- the 1998 paper still featured incorrect data and thus its conclusions were ill-founded? What conclusions, if any, can be drawn by the fact that Walker-Smith is a co-author of the 1998 paper? In other words, I find that Natural News is treating this as though an independent research team unconnected with Wakefield verified what he wrote in his paper... but the very person who is being used to show that Wakefield didn't make up these findings was actually a co-author . Is anyone familiar enough to offer a cited summary of where this all stands? It's been difficult to track down exactly how this has played out. How many of the 12 children actually ended up with autism diagnoses or bowel disorders? Is there a summary of their symptoms and time until onset after vaccination? This is a very he-said she-said topic, I'm finding. [1] http://www.naturalnews.com/031117_BMJ_Dr_Andrew_Wakefield.html [2] http://www.thelancet.com/journals/lancet/article/PIIS0140673697110960/fulltext [3] http://www.bmj.com/content/342/bmj.c7452.full [4] http://www.vaccinesafetyfirst.com/pdf/BRIAN%20DEER%20IS%20THE%20LIAR%20.pdf [5] http://www.vaccinesafetyfirst.com/pdf/BMJ%20MUST%20RETRACT.pdf | Natural News' claims of fraud on the part of BMJ hinge on two things: BMJ is allegedly "largely" funded by the very vaccine makers they are allegedly "protecting" by unfairly attacking Dr. Wakefield's paper published in The Lancet in 1998. The BMJ's claims of fraud are disproved by research presented by Professor Walker-Smith and Dr. Amar Dhillon that "independently" presents identical data on 7 of the 12 children in Wakefield's paper. The first is simply laughable. 
Even if we assume that Natural News' claim that the BMJ is "largely funded" by vaccine manufacturers is true (and while I have no evidence, I suspect it is not), it's not relevant. The BMJ published a series of articles by journalist Brian Deer, who (apparently) originally uncovered the fraud in 2004 . Once the original evidence came to light, the GMC investigated Wakefield, his co-authors, and his paper, and concluded (wholly independently of the BMJ) that Dr. Wakefield et al. had engaged in fraud in their paper. It also hypocritically uses the specter of a "biased funding source" to attack the BMJ, while conveniently ignoring the fact that it was a "biased funding source" (to the tune of over £400,000 ) that originally led to the investigation of Dr. Wakefield in the first place! The second claim here... I'll go ahead and call it outright fraudulent. According to this article from The Sunday Times in 2006, Dr. Wakefield's research began 2 years before his paper was published in The Lancet in February of 1998 - roughly the first quarter of 1996. This puts his research beginning almost a full year before Walker-Smith and Dhillon presented their own "independent" findings in December of 1996 . Notice how weasel-worded Natural News and Dr. Wakefield's writings are, careful to always say that this was 2 years before The Lancet published their paper, carefully avoiding the fact that it was almost 1 year after their research for the paper began! The second claim further hinges on the alleged fact that Prof. Walker-Smith and Dr. Amar Dhillon independently verified Wakefield's research, when in fact there was nothing independent about it -- both individuals are co-authors on Wakefield's 1998 The Lancet paper (registration required to read the full text, but the authors list is there without it), and in fact Prof. Walker-Smith was himself investigated and determined to be guilty of Serious Professional Misconduct by the UK's General Medical Council in relation to -- coincidentally -- 7 of the 12 children in the 1998 The Lancet paper! (This was actually the very first Google result when searching for Walker-Smith's name; I unfortunately could find no additional information on Dhillon.) Given that Natural News is blatantly biased -- their "articles" on this conclude as ads for Dr. Wakefield's book -- and that they are clearly misrepresenting (at best) the facts, I don't think we need to consider Wakefield's paper to be any more valid now than we did before these "new documents" were released, especially since they fail in any way to refute any of the reasons for Wakefield's paper to have been retracted and he and many of his co-authors to be sanctioned by the GMC. | {
"source": [
"https://skeptics.stackexchange.com/questions/4108",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2459/"
]
} |
4,124 | I've been admonished not to ever shine a laser into the night sky as it might "dazzle pilots" and potentially bring down airplanes. This seems kind of illogical for a number of reasons: If a plane is flying perpendicular to the earth, a laser from the ground would have to hit it at a significant angle, much more akin to pointing a bit higher than the horizon rather than straight up, for the actual pilot of the plane to see the laser output. While lasers can be deadly accurate, I find it hard to believe that any old Joe Shmoe can track the cockpit of an airplane at +10,000 feet, possibly miles away, and this would at the very least have to be deliberate. Most commercial airplanes fly for the most part (save takeoff and landing) on autopilot! Let's say you actually succeeded in temporarily blinding the pilot. The plane just keeps flying, as it exists in the 21st century and uses a computer. Unless you have something greater than a 1000mW laser (not easy or legal to acquire these days), won't the distance be too great to actually cause significant problems? We were once playing on a military beach at night with a simple, store-bought green laser, shining it into the sky, when a scruffy old man approached us and reprimanded us for shining it into the sky, claiming that we might "dazzle a pilot." Is this just fallacy? | It is a real threat. Enough of a threat for the FAA to mail out a Pilot Safety Notice (Linked PDF). Let me explain why, as a counter to each of your points in the original question: First of all, planes bank and turn, so they aren't always perpendicular to the ground. Also, the Plexiglas material that aircraft windshields are made of further scatters and intensifies the dazzle effect on a pilot who happens to be sitting right up at the window. So you don't need to shine it directly at the pilot, just the cockpit. The greatest danger is not for high flying aircraft, but rather while aircraft are in critical phases of flight. That is not to say that you can't have an effect on higher flying aircraft. There is a property of lasers called divergence that will allow you to cover a fair area of the sky. Although, then the inverse square law takes over, so the power getting to a pilot at those distances normally wouldn't be a concern for most, although it may screw up night vision. You greatly overestimate the amount of time an aircraft is on autopilot. Generally, below 10,000 feet is where most professional pilots will take over from the autopilot in order to warm up for the landing phase. ( This is a TTP for pilots .) It actually tunes you in for greater Situational Awareness (SA), and will have you looking out of the cockpit more than when on autopilot, thus making you more susceptible to a laser dazzling effect. It only takes a small amount of light to screw up your vision for landing phases. This is especially dangerous during night operations. It is also intensified again by the scattering that you get from the Plexiglas as I mentioned in item 1. Add to that the fact that green light is the most "dazzling" wavelength . Specific effects as listed here are : Distraction and Startle : This occurs when an unexpected laser (or other bright light) distracts a pilot during a night time take-off or approach/landing. Glare and Disruption : This occurs as the intensity of the laser light increases such that it starts to interfere with vision; night vision starts to deteriorate. Temporary Flash blindness : This effect is similar to that experienced when looking at a bright camera flash.
There is no injury, but a portion of the visual field is temporarily knocked out. Sometimes there are ‘afterimages’. And you are lucky that the old guy didn't call the Military Police on you. They have a procedure for reporting these incidents (called a SAFIRE ). It has even happened with civilians. New law to combat louts dazzling pilots over Birmingham with laser pens (that's Birmingham in the UK). PILOTS flying over the skies of Birmingham are facing a greater threat of being dazzled by a laser pen than almost anywhere else in the country, it has emerged. The region is third in a ‘league of shame’ of hotspots for the crime according to a report by the UK Civil Aviation Authority. Now they have introduced a new law to target the reckless offenders putting the lives of those in airliners and helicopters at risk. Man arrested for trying to dazzle pilots with laser (Reuters) - A man appeared in court on Tuesday accused of trying to dazzle pilots with a laser beam as they were landing at France's second-busiest airport Paris Orly, aviation authorities said. "Several pilots complained and the man was arrested near the runway," a spokesman for the civil aviation authority said. Aussie laser-pointer dazzle attacks on airliners: Bad Australian politicians are demanding restrictions on the ownership of laser pointers in the land down under. The banning calls follow a series of widely-reported incidents in which individuals on the ground have attempted to dazzle pilots of commercial aircraft making approaches to landing. A particularly troublesome dazzling attack took place last Friday, involving at least four comparatively-powerful green laser pointers in the Bexley area of Sydney. Six passenger flights were affected, with air-traffic controllers having to re-route the planes. "The use of these laser pointers against aeroplanes is unbelievably stupid and cannot be tolerated," Australian Home Affairs Minister Bob Debus told the Sydney Morning Herald. | {
"source": [
"https://skeptics.stackexchange.com/questions/4124",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1836/"
]
} |
4,150 | I hear this very often - that men can do only one thing at a time, while women can do several things at once. If this is true, what are the qualities that make women capable of doing several things at once? A quick Google search returned an obviously popular book; however, I haven't read it and I'm not planning to. Plans may change if this gets interesting though. Is there any evidence that supports the idea/theory? | Edit: Now there's some evidence for this idea I thought I'd come back to this question, because I wasn't really satisfied with what the literature yielded back then and the paper that Peters mentioned (there had only been a press release) has come out. Stoet, O'Connor, Conner, & Laws (2013) looked at this and found some evidence for the idea. Quoting from their abstract Background There seems to be a common belief that women are better in
multi-tasking than men, but there is practically no scientific
research on this topic. Here, we tested whether women have better
multi-tasking skills than men. Methods In Experiment 1, we compared performance of 120 women and 120
men in a computer-based task-switching paradigm. In Experiment 2, we
compared a different group of 47 women and 47 men on
"paper-and-pencil" multi-tasking tests. Results In Experiment 1, both men and women performed more slowly when
two tasks were rapidly interleaved than when the two tasks were
performed separately. Importantly, this slow down was significantly
larger in the male participants (Cohen’s d = 0.27). In an everyday
multi-tasking scenario (Experiment 2), men and women did not differ
significantly at solving simple arithmetic problems, searching for
restaurants on a map, or answering general knowledge questions on the
phone, but women were significantly better at devising strategies for
locating a lost key (Cohen’s d = 0.49). Conclusions Women outperform men in these multi-tasking paradigms, but
the near lack of empirical studies on gender differences in
multitasking should caution against making strong generalisations.
Instead, we hope that other researchers will aim to replicate and
elaborate on our findings. --- end edit Old answer No, there is no such evidence. Apparently there didn't use to be much evidence against it either, but I found two recent studies by Noemi Peters ( 2010 , 2011 ). First I did a search on "sex
differences" multitasking and similar
terms, but I could only find a dodgy
study in support and not many
well-received publications in the
field anyway. Apparently Ms Peters found the
same dearth in the literature. The fun part:
I found her publications by looking at who had
cited the Pease book :-) I searched extensively for peer-reviewed scientific publications that
examine gender differences in multitasking ability, but the closest I
could find are Criss (2006) and Havel (2004), which are manuscripts that
are made available online at the website of the National
Undergraduate Research Clearinghouse. Both examined subjects who had
to perform some specified tasks while tallying keywords from a
song/story. None of them found gender differences in productivity when multitasking, but Criss (2006) found that women were better at
accuracy. Nonetheless, we do not know whether the findings can be
attributed to multitasking as none of them had a control
group . Besides, some British newspapers recently reported on an
experiment that supports the view that women are better (see Gray,
2010), but when I contacted the lead researcher, Professor Keith Laws,
it turned out that there is not even a working paper yet that I could
discuss here. Her evidence to the contrary wasn't published
in peer-reviewed journals, so if somebody
has something stronger, better pay attention to that instead. From the abstract of her dissertation: The view that women are
better at multitasking is widely held,
however there is no scientific
evidence supporting it . This
experiment examines whether there are
gender differences in multitasking
ability and in the inclination to
multitask. To this end, I conduct an
experiment with three treatments: one
where subjects have to execute two
tasks sequentially, one where subjects
are forced to multitask with the two
tasks, and one where they can choose
freely how to organize their work.
The results of the third treatment
indicate that there is no gender
difference in the inclination to
multitask . As far as multitasking
ability is concerned, I do find a gender difference but it is contrary
to the widely held beliefs : point
estimates indicate that men perform
better both under forced and voluntary
multitasking. This gender difference
reaches statistical significance in
case of voluntary multitasking. | {
"source": [
"https://skeptics.stackexchange.com/questions/4150",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/3271/"
]
} |
4,176 | Is there any evidence to support the claim that software patents stifle creativity and put many people at risk of legal action? If necessary refer to some background reading : Seriously think about it. Every time
you write code -- even a brand new
algorithm in a clean room
environment -- you could be infringing
a patent, somehow, somewhere. It's probably not fair to say that
software patents are 100% evil. But
from what I've read, I'd say they're
99 and 44/100ths percent evil. I'm not
sure what any of us can do about this,
but it's clear that the current
situation is untenable. Something has to be done, or else we truly are staring down a coming software patent apocalypse. | Yes. Patents stifle not creativity, but innovation (which I'm sure you meant). We argue that when innovation is
“sequential” (so that each successive
invention builds in an essential way
on its predecessors) and
“complementary” (so that each
potential innovator takes a different
research line), patent protection is
not as useful for encouraging
innovation as in a static setting.
Indeed, society and even inventors
themselves may be better off without
such protection. Furthermore, an
inventor's prospective profit may
actually be enhanced by competition
and imitation. http://onlinelibrary.wiley.com/doi/10.1111/j.1756-2171.2009.00081.x/full And not only in software. However, the recent proliferation of
intellectual property rights in
biomedical research suggests a
different tragedy, an “anticommons” in
which people underuse scarce resources
because too many owners can block each
other. Privatization of biomedical
research must be more carefully
deployed to sustain both upstream
research and downstream product
development. Otherwise, more
intellectual property rights may lead
paradoxically to fewer useful products
for improving human health. http://www.sciencemag.org/content/280/5364/698.short And finally, Bill Gates: If people had understood how patents
would be granted when most of today’s
ideas were invented and had taken out
patents, the industry would be at a
complete stand-still today. The
solution . . . is patent exchanges . .
. and patenting as much as we can. . .
. A future start-up with no patents of
its own will be forced to pay whatever
price the giants choose to impose.
That price might be high: Established
companies have an interest in
excluding future competitors. Fred Warshofsky, The Patent Wars 170-71 (NY: Wiley 1994). | {
"source": [
"https://skeptics.stackexchange.com/questions/4176",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/374/"
]
} |
4,182 | A while ago, someone studying a health-related subject (not necessarily medicine, but I forget which) claimed that "real" allergies could only be caused by protein-like substances. I was told this after I claimed I was allergic to kiwi fruit, which I was then told wasn't possible. Does medicine make some subtle distinction that differentiates between allergies in that narrow sense and, maybe, "intolerances" in a broader sense? | No: a common counterexample is the allergy to nickel. Nickel allergy is one of the most common causes of allergic contact dermatitis. [...] If you have nickel allergy, your body
reacts to nickel and possibly to other
metals, such as cobalt and palladium.
In other words, it's mistakenly
identified nickel as something that
could harm you. Once your body has
developed a reaction to a particular
agent (allergen) — in this case,
nickel — your immune system will
always be sensitive to it. That means
anytime you come into contact with
nickel, your immune system will
respond and produce an allergic
response. 1 and from New Zealand Dermatological Society Incorporated : Nickel allergy is one of the most
common causes of contact allergic
dermatitis. In affected individuals,
dermatitis (eczema) develops in places
where nickel-containing metal is
touching the skin. The most common
sites are the earlobes (from
earrings), the wrists (from a watch
strap) and the lower abdomen (from a
jeans stud); the affected areas become
intensely itchy and may become red and
blistered (acute dermatitis) or dry,
thickened and pigmented (chronic
dermatitis). 2 Nickel was named 'Allergen of the Year for 2008' by the The American Contact Dermatitis Society . 1 Mayo Clinic: Nickel allergy . 2 DermNet NZ: Nickel allergy . | {
"source": [
"https://skeptics.stackexchange.com/questions/4182",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1183/"
]
} |
4,189 | I read here about human asexuality, and I cannot figure out if it's real or if people are just making stuff up. Have studies been done about this phenomenon, and what explains it? | Yes, but so far only hypotheses about the reasons for asexuality are available. Experiments show that some male gerbils refuse to mate with females, indicating that epigenetic factors in the prenatal period can produce differing sexual orientation/behaviour: A study on Mongolian gerbils showed
that part of a population of male
gerbil fetuses that developed between
two female fetuses refused to mate,
but instead spent almost 50% more time
taking care of the young than male
gerbils who as fetuses were positioned
between two other males. They were
also about 30% more likely to stay
with a nest when the mother had left.
This suggests that, although not
perpetuating their own genes, they
helped perpetuate their sisters'
genes, which has evolution benefits
for at least half that family's genes. For humans, one has to distinguish between an asexual lifestyle and sexual excitability/reduced libido, as some self-described asexuals masturbate and evidently can experience orgasm; these are better called auto-sexual. Currently, from a scientific point of view, there is no clear definition, but the above-mentioned properties would obviously be crucial. These articles shed some light on the classification of several definitions and genuine causes. There have been very few studies about
asexuality in humans, most of which
were about the stereotype that
disabled people are made asexual as a
result of their condition. One of the
only studies that looks at asexuality
as a possible orientation was actually
a reexamination by Anthony F. Bogaert
of a survey of 18,000 British about
general sexuality and STDs. 1.05% of
the respondents to the survey reported
"I have never felt sexually attracted
to anyone at all," very close to the
1.11% who responded they were homosexual or bisexual, although more
women tended to be the former than the
later, and more men tended to be the
later than the former. Bogaert noted
this asexual group to have poorer
health, shorter stature, less body
weight, higher attendance at religious
services, lower socio-economic status,
and asexual women had a later onset of
menarche, all when compared to sexual
people. Although these are only
correlations, they may help form later
hypothesis about the cause of
asexuality, and whether asexuality is
a valid orientation at all. Bogaert
suggests some of his own. Perhaps the
factors affecting height growth and
weight gain also affected a region of
the brain vital to sexuality, or
education or other resources dependent
on socio-economic status are somehow
vital in sexual development, or maybe
asexuals had fewer "sexual
conditioning" experiences growing up
(i.e. masturbation) which might also
explain the high proportion of women
and religious (both groups are less
likely to masturbate). Youth, however,
was not correlated with asexuality,
indicating these individuals were not
merely "late bloomers;" asexuals
actually tended to be older. Major
limitations to the study, besides
being merely correlative and not
actually about asexuality, include its
high non-response bias (30%) and its
face-to-face style of interviewing
(which may have pressured individuals
to alter their answers). However, the
study does contain enough correlative
evidence to warrant future research in the area. (6) So phenomenological asexuality seems to be more of a female "property", making an epigenetic explanation more plausible than a purely genetic cause, as a genetic cause would be expected to occur equally in both sexes. Differences in human brain structure also indicate that asexuality is not caused purely by psychological/social development: Since scientists have already noted
that the brain of homosexual men is
structurally different from that of
heterosexual men (cell structure of
gay mens' hypothalamus more closely
resembles that of a heterosexual
female's), that the asexual brain may
too be structurally different should
not be too easily dismissed. The
existence of animal displays of
asexuality run contradictory any
suggestions that asexuality is a
problem caused by psychological issues
such as fear of commitment, or
conscious/unconscious repression of
sexuality, as animals are presumed to
be incapable of both, although this
rests upon the assumption that
asexuality has the same cause in
humans and animals There is also a link between hormone production and libido, chemical castration can force a reduction in libido; some countries use it for pedophiles therapy . Speculative reasoning: From an evolutionary point of view one has to ask how likely a pure genetic heredity of a general asexual property is, as humans mainly bear single not several babies and the development help similar to the mentioned gerbil case cannot play a role. Summary Asexuality as a mammal phenomenon exists, but currently its not clear how much genetic, epigenetic and post-birth development factors actually contribute to this phenomenon. But current knowledge emphasize factors influencing fundamental brain structure rather than psychologigal/social reasons. Special cases like genetic caused Asperger, Autism reducing will of physical closeness to other humans show set of difficulties defining and reasoning asexuality on humans. | {
"source": [
"https://skeptics.stackexchange.com/questions/4189",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1229/"
]
} |
4,209 | I've always heard that you should change your oil every 3,000 miles (5,000 km). That little sticker you get on your windshield after an oil change agrees. Growing up, my parents told me the same thing. I doubt that you need to change your oil that often to maintain a healthy engine. I suspect it's a ploy by the companies to increase profits. Is it really necessary to change your oil that frequently to get the most life out of your vehicle? | No. Wiki on this myth. California's efforts to debunk this myth HERE Synopsis: follow the manual's recommended oil change schedule, not the 3,000-mile recommendation that has become commonplace advice. To translate that into some figures, I looked around for publicly available service manuals (just a few, as I don't want to take all my time with this...): 2002 Mazda Protege ( LINK ): 6 months or 7,500 miles, whichever comes first, Sec. 8-4; 2006 Volvo, all models ( LINK ): 7,500 miles; 2011 Ford Explorer ( LINK ): when the light comes on (up to 10,000 miles or 1 year), pgs. 417, 420; 2008 Cadillac CTS ( LINK ): up to a year, Sec. 6-4; 2000 Oldsmobile Alero ( LINK ): whenever the light comes on, typically between 3,000-7,500 miles, but never longer than 7,500 miles or 1 year, Sec. 7-6. Edit: I thought it might be helpful to know typical driving distances per year, since that came up in the comments. They are listed HERE by the US Dept. of Transportation (current as of 4/2011). The average for all age groups across both genders is ~13,500 miles/year. This would equate to 4-5 oil changes per year under the 3,000-mile recommendation vs. 1-2 under the typical manufacturer's recommendations above. As one last add-in, some in the comments brought up idle time. I don't know where that figures in. I'm assuming this question has to do with general use, however, not extreme cases of little/no usage.
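To translate the averages above into changes per year, here is a minimal Python sketch (a back-of-the-envelope check; the mileage and intervals are the figures quoted above, not new data):

```python
MILES_PER_YEAR = 13_500  # average annual mileage cited above

for label, interval_miles in [("3,000-mile sticker advice", 3_000),
                              ("typical manufacturer interval", 7_500)]:
    print(f"{label}: ~{MILES_PER_YEAR / interval_miles:.1f} oil changes/year")

# 3,000-mile sticker advice: ~4.5 oil changes/year
# typical manufacturer interval: ~1.8 oil changes/year
```
| {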
"source": [
"https://skeptics.stackexchange.com/questions/4209",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/3073/"
]
} |
4,290 | I'm extremely skeptical of the idea of magnetic water softeners (strong magnets attached to pipes), but desperately wish it were true because I hate lugging 40# bags of salt out to my well house in the hot Texas sun. I'd love to see some objective research results on the subject from someone who isn't selling a magnetic water softening system. For the purposes of this question, "work" is defined as changing the properties of water treated with the system to: Substantially improve the effectiveness of soap products using the output water. Minimize scale buildup on fixtures, in pipes, and on dishes. The reason I'm being so specific is that I've seen some defenders of this technology claim you get the benefits of soft water using their systems, but because of the way it works it doesn't show any difference on standard water hardness tests. That is, it is pseudo-soft water, but acts like soft water for all practical purposes. Just the fact that they have a miracle solution that involves magnets and is resistant to empirical testing makes me extremely skeptical. | No. There have been a few studies on the efficacy of magnetic water softening systems. This one (PDF) from the Lawrence Livermore National Laboratory compares chemical and magnetic systems against a control. The table of results for scale buildup is pretty compelling [results table omitted]: the Polyphosphate chemical process was effective, and the magnetic one was not. The Army Corps of Engineers also conducted a study on three magnetic water softening devices which found: The results of this study do not indicate any clear advantage for any of the three devices tested versus a control for the inhibition of mineral scale formation or the corrosion of copper. | {
"source": [
"https://skeptics.stackexchange.com/questions/4290",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1227/"
]
} |
4,348 | From the Register , talking about the recent CERN antimatter experiment. I'm not concerned with the experiment itself, but rather this claim This, in turn, would help us understand how come our universe is asymmetrical, home to vastly more matter than anti-matter. I'm very skeptical of this claim because it seems to be making a definitive statement about something that appears unmeasurable. Is there legitimate scientific backing to this statement? | Matter/antimatter annihilation produces gamma rays at specific frequencies. That means we can detect regions of space where matter and antimatter are interacting.
The logic showing matter-antimatter asymmetry thus goes something like this: (1) Obviously, our local area of the Universe (solar system, Milky Way) consists of matter. (2) We can't tell if distant galaxies consist of matter or antimatter; spectra etc. are all the same. (3) The Universe could consist of domains of matter and antimatter, with net baryon asymmetry. (4) If matter/antimatter domains are in contact, gamma rays are produced at the boundary from annihilation. (5) The cosmic gamma ray background indicates any such domains must be at least ~Gpc in size. (6) Voids between domains would show up in the CMB. (list from The Origin of Matter-Antimatter Asymmetry - pdf) The gamma ray background doesn't reveal domain borders. WMAP does not show voids between large domains. --
So if there's a lot of antimatter out there, it's not in contact with matter anywhere that we can see, and it's not separated from normal matter by cosmic voids either. That doesn't leave much room for antimatter in the observable universe. | {
"source": [
"https://skeptics.stackexchange.com/questions/4348",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/34/"
]
} |
4,373 | Blackle is a search engine that claims to save energy because it uses a black background. Is there any evidence to back up their claim that a website using a black background will save energy, and if so, how much energy will be saved? | Blackle actually cite a real reference to back up their claims. Credit to them! On their About page they quote a line from Energy Use and Power Levels in New Monitors and Personal Computers , Roberson et al, Environmental Energy Technologies Division,
Ernest Orlando Lawrence Berkeley National Laboratory, University of California. The quote is: "Image displayed is primarily a function of the user's color settings and desktop graphics, as well as the color and size of open application windows; a given monitor requires more power to display a white (or light) screen than a black (or dark) screen." That line does actually appear in the report, and is backed by measured power data [results table omitted]. The report goes on to conclude: Among the few LCD monitors in the table, the power used to display a white screen is indistinguishable from power used to display the desktop. Thus, it appears that display color is a significant determinant of on power for CRTs, but not for LCDs. Clearly, in LCD technology terms, 2002 is a long time ago. I have no knowledge of any power-saving innovations in the meantime.
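To give a feel for the magnitude, here is a rough, hypothetical calculation in Python; the wattage delta and usage hours are illustrative assumptions, not figures from the report:

```python
EXTRA_WATTS = 20     # assumed white-vs-black power delta for a CRT (illustrative)
HOURS_PER_DAY = 2    # assumed daily time spent on the search page (illustrative)

kwh_per_year = EXTRA_WATTS * HOURS_PER_DAY * 365 / 1000
print(f"~{kwh_per_year:.1f} kWh/year per CRT user")  # ~14.6 kWh/year

# For an LCD, the report found no meaningful white-vs-black delta,
# so the same calculation gives roughly zero savings.
```
| {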
"source": [
"https://skeptics.stackexchange.com/questions/4373",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/-1/"
]
} |
4,398 | I've seen numerous claims that circumcision reduces HIV risk, both on TV and online . Have there been any studies to verify if circumcision does or doesn't reduce HIV risk in a statistically significant manner? | I would have to respectfully disagree with Russell's answer and say: No. Circumcision does not reduce HIV risk . The three controlled intervention trials suffer from some VERY major design flaws, which cast more than reasonable doubt on the supposed 'benefit' of circumcision in males. A Cochrane review of circumcision questions the validity of previously performed studies for the following reasons: performance bias, attrition bias, and selection bias. They identified 14 cross-sectional studies with inconsistent findings - 4 were statistically significant for a benefit from circumcision, 2 were statistically significant for harm from circumcision. They also mentioned study heterogeneity. They summarize with (bolded emphasis mine): In assessing the quality of the
observational studies we identified 10
potentially important confounders that
studies would need to ensure were
either balanced between circumcised
and uncircumcised groups or, if
unbalanced, that were adequately
adjusted for (see Box 2). Many studies
either did not measure these variables
or, if reported, were either not
balanced between groups or not
adjusted for. It is important to note
that observational studies, unlike
RCTs, can only adjust for known
confounders, and only then if they are
measured without error. The effect of
unknown confounders may well be
operating in either direction within
and across all of the included
studies. The studies from high-risk
groups included in this review do
report a powerful protective effect of
circumcision, measured by both
unadjusted and adjusted odds ratios.
More mixed results were reported for
the general population. As all the
observed results could be explained by
likely confounding, RCTs are essential
before circumcision is implemented as
a public health intervention. Implementation of circumcision will
encounter cost, both financial and in
terms of potential personal harm; no
adverse effects are reported in this
review only because none of the
observational studies investigated
them. Feasibility issues of
implementation are beyond the scope of
this review but need to be carefully
considered. If those clinical trials in Africa are flawed, how can one justify using them as the basis for a policy? There is a real risk of risk compensation reducing the 'benefit' of circumcision. The various pro-circumcision studies all cite the need for 'other' forms of prevention, i.e. condoms - which in Africa aren't as available (or used) as they are elsewhere in the world. Further Reading: Circumcision status and HIV infection among MSM : Reanalysis of VAXGen VAX004 HIV vaccine clinical trial data, with conclusion: "Among men who reported unprotected insertive anal sex with HIV-positive partners, being uncircumcised did not confer a statistically significant increase in HIV infection risk. Additional studies with more incident HIV infections or that include a larger proportion of uncircumcised men may provide a more definitive result." Circumcision status and HIV/STI amongst MSM : Study with conclusion "Our findings suggest that male circumcision would not be likely to have a significant impact on HIV or sexually transmitted infections acquisition among MSM in Seattle." Study of male to female transmission (by the same researcher that produced the Uganda RCT!). An RCT (aborted early due to "futility") with conclusion: "Circumcision of HIV-infected men did not reduce HIV transmission to female partners over 24 months; longer-term effects could not be assessed." Case-Controlled Study of US Navy Men with conclusion: "[male circumcision] is not associated with HIV or STI prevention in this U.S. military population." Neonatal Circumcision does not reduce HIV/AIDS CDC Fact Sheet: Male Circumcision and Risk for HIV Transmission and Other Health Conditions: Implications for the United States , as cited by Russell. As mentioned above, this is countered by the Cochrane study . The above fact sheet cites this source (#1) , but that was debunked above. The above fact sheet cites this source (#3) re: foreskin tearing aiding HIV infection. This is discussed here and an alternative solution has been proposed . The above fact sheet cites this source (#4) , which is countered here , where it is determined to be "unlikely to have a substantial public health impact in reducing acquisition of most STIs in homosexual men" here . This Lab Study suggested that "Circumcision likely reduces risk of HIV-1 acquisition in men by decreasing HIV-1 target cells", but is countered by this letter . I'll be happy to counter the other sources at a later point; it's 10:30 pm here and I'm knee-deep in setting OEL limits. | {
"source": [
"https://skeptics.stackexchange.com/questions/4398",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2836/"
]
} |
4,407 | At about the 11:40 point in the class day 2009 lecture by Robert Sapolsky the claim is made that humans are the only species that engage in non-reproductive sex. Is there solid evidence for or against the idea that only humans have non-reproductive sex? (Oh and it's a fun lecture.) | Bonobos are an example. From Discovering Animal Behaviour : Sex is key to the social life of the
Bonobo. They largely use sex as a means to alleviate conflicts or resolve them. When ill feelings begin to form between Bonobos - everything stemming from territorial issues to competition for food - their first reaction is to smooth it over with sexual contact. From Bonobo.org : Bonobos seem to ascribe to the 1960s
hippie credo, " make love, not war ."
They make a lot of love, and do so in
every conceivable fashion. Sex in bonobo
society transcends reproduction , as it
does in humans. It serves as a way of
bonding, exchanging energy and sharing
pleasure. More Sources: Bonobo Sex and Society PBS - Chimps and Bonobos African Wildlife Foundation - Bonobos | {
"source": [
"https://skeptics.stackexchange.com/questions/4407",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/34/"
]
} |
4,456 | The Bechdel Test is named for the author of a comic that popularized the following three "rules" that must be fulfilled before a movie passes the test: The film must have at least two women in it. The two women must talk to each other during the film. At least one of these discussions must be about something besides a man. The premise of the comic (and the proponents of the rule) is that a striking majority of films fail this test. The implication is that the film industry has a strong gender bias toward male characters or male-centric plots such that the women are only really there to talk about the men. The relevant questions are these: Do a strong majority of modern films regularly fail this test? (Say, at least 75%.) Does flipping the gender of the test result in a drastically different outcome? (Say, a difference of 33% or more.) These percentages are arbitrary, but they provide a starting point. A clarification of the last question: If the female Bechdel test fails 75% but the male Bechdel test fails 25% of the time, the difference would be 50%. To help restrict the data set if a larger one is not practical, feel free to concentrate on extremely popular or critically acclaimed films from the previous decade (2000-2010). This, again, only serves to provide a starting point. (An alternative would be the decade surrounding the comic's printing: 1980-1990.) | According to this site , which uses a community effort to rank movies, 50% of all movies pass the test, and only 10% fail on all three points. So no, a strong majority does not fail the test. However, the validity of the measure itself is questioned; for instance, tvtropes notes: a fair number of top-notch works have legitimate reasons for including
no women (e.g. ones set in a men's
prison or on a World War Two military
submarine or back when only men were
on juries or with no conversations at
all, or with only one character). A movie can easily pass the Bechdel Test and still be incredibly
misogynistic. it's also possible for a story to fail the test and still be strongly
feminist in other ways It would appear that the only thing the Bechdel test is good at measuring is whether or not something passes the Bechdel test . If it misidentifies mysogenist movies as feminist and vice-versa then it cannot be said to be a valid measure or critique of the industry. | {
"source": [
"https://skeptics.stackexchange.com/questions/4456",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2433/"
]
} |
4,498 | Wikipedia's Effectiveness of torture for interrogation says, Torture has been used throughout history for the purpose of obtaining information in interrogation. Some arguments say that it is an effective way of making someone divulge vital information whilst others say it is [violent, horrific and] useless. It links to a New York Times article, Interrogations’ Effectiveness May Prove Elusive , which includes opinions from both sides of the debate. So, completely disregarding moral aspects and other consequences, is torture useful in interrogation ? Is it feasible to torture accurate information out of people? Is it feasible to determine whether the information is likely accurate without outside confirmation? Also related would be whether other interrogation techniques are better at extracting accurate information and confirming its accuracy. | Short answer : Your friends who think torture is effective at getting reliable information are wrong . New Edit/info : Since this is in the news (December 2014), the Senate Select Committee on Intelligence has just recently issued a report (PDF) on torture activities that the US engaged in (the link is the 500 page version). While the majority of details concern specific activities and practices, there are several conclusions that came from this report. One of the chief findings regarding the effectiveness of torture was that it wasn't, and any instances of it being reported as having played a role were fabrications by the CIA to justify their continued program (findings and conclusions items #1 and #2 as well as #10 stating the dissemination of inaccurate information). Finding and conclusion #8 would even indicate that these actions complicated and impeded US security. Army Field Manual 34-52 Chapter 1 says : “Experience indicates that the use of force is not necessary to gain the cooperation of sources for interrogation. Therefore, the use of force is a poor technique, as it yields unreliable results, may damage subsequent collection efforts, and can induce the source to say whatever he thinks the interrogator wants to hear.” The C.I.A.’s 1963 interrogation manual stated : Intense pain is quite likely to produce false confessions, concocted as a means of escaping from distress. A time-consuming delay results, while investigation is conducted and the admissions are proven untrue. During this respite the interrogatee can pull himself together. He may even use the time to think up new, more complex ‘admissions’ that take still longer to disprove. The act of torturing can even interfere with a subject's ability to tell the truth . Solid scientific evidence on how repeated and extreme stress and pain affect memory and executive functions (such as planning or forming intentions) suggests these techniques are unlikely to do anything other than the opposite of that intended by coercive or 'enhanced' interrogation. This Newsweek article also links to the Trinity College Institute of Neuroscience in Dublin that has a paper in the journal Trends in Cognitive Science . It will cost you nearly $40 for the paper itself though . In specifically dealing with the post 9/11 world, a work entitled A utilitarian argument against torture interrogation of terrorists states in its abstract: Drawing from criminology, organizational theory, social psychology, the historical record, and my interviews with military professionals, I assess the potential of an official U.S. program of torture interrogation from a practical perspective. 
The central element of program design is a sound causal model relating input to output. I explore three principal models of how torture interrogation leads to truth: the animal instinct model, the cognitive failure model, and the data processing model. These models show why torture interrogation fails overall as a counterterrorist tactic. Anyone remember the Star Trek episode that dealt with this issue? And this was before Waterboarding was part of our lexicon. Furthermore, the Society for the Psychological Study of Social Issues states: ...there is no evidence that torture is an effective means of gathering reliable information. Many survivors of torture report that they would have said anything to "make the torture stop" (Mayer, 2005; McCoy, 2005). Those who make the claim that "torture works" offer as evidence only unverifiable anecdotal accounts. LiveScience sums it up very well in the title of an article from October 2007: Torture Has a Long History ... of Not Working. I went through the USAF SERE School, and I can tell you that even though we weren't "tortured" we were placed under numerous stressors, and we would do as many deceitful things to get out of those situations as possible (i.e. tell them what they wanted as opposed to the truth). And we were even taught how to evade the torture by supplying plausible lies, and then "recover" from anything that may have been detrimental to our position as a POW. EDIT TO ADD: Someone asked if there are any methods for getting information quickly and reliably. The answer is, "It depends." There are many, many techniques out there (Good Cop-Bad Cop, surprise, sympathy, etc.). All of those really depend on the state of mind of the subject. One really needs to get to know the subject before you can start to whittle away at them and find what you want/need. And even then, it is wildly variable and depends a great deal on psychology. The link to the Army Field Manual 34-52 mentions some specialized training required: The interrogator requires specialized training in international regulations, security, and neurolinguistics. Neurolinguistics is a behavioral communications model and a set of procedures that improve communication skills. The interrogator should read and react to nonverbal communications. An interrogator can best adapt himself to the source's personality and control his own reactions when he has an understanding of basic psychological factors, traits, attitudes, drives, motivations, and inhibitions. Also, keep in mind that HUMINT can be gathered much more reliably via other methods than direct interrogation. The recent example of the courier that led to the raid on Osama bin Laden was all HUMINT gathered via tailing and observation. DuckMaestro is interested in data, and every source I find says that interrogation isn't even a science, but rather an art... How is one supposed to get data on that? A scholarly paper on Police Interrogation techniques even highlights the art nature more than anything (PDF File). The most effective technique that has some backing by studies seems to be the Reid Technique of investigative interviewing , which seems to be a recap from the Army Field Manual. Also, some folks may be interested in reading about Hanns Scharff , considered one of the most successful interrogators of WW II. He has been highly praised for the success of his techniques, in particular because he never used physical means to obtain the required information. WHY USE IT? So why do people use it, or promote it?
While probably beyond the scope of the answer, I wanted to address this with a couple of thoughts. First of all, human beings are animals. There is a visceral need to hurt your enemy. If you have captured an enemy, it may seem callous to hurt him for the sake of hurting him, so "enhanced interrogation" is a nice rationalization . And as long as you are told to do so by an authority figure, many people will comply (as also highlighted by the original Milgram Experiment ). Also, many of the proponents for torture have a vested interest in ensuring that it isn't deemed illegal. They would face prosecution should their actions be deemed illegal! That is self-preservation. I will add, torture IS effective at intimidation , and keeping people "in line" under an authoritative regime. In that respect, there is a great deal of historical evidence (recent history like Saddam, Pinochet, Iran; or older history like the Inquisitions or Roman methods). In that sense, it is a very effective tool, but generally not for the stated purpose of getting reliable information. But it will get a lot of false confessions that can be used for propaganda and other purposes. Not only that, some people will indeed give information ( as cited in this article ), however the overwhelming evidence is again that it may not be reliable, and what have you sacrificed in order to obtain that information? INFORMATION BEYOND JUST INITIAL REFUTATION : I just found a HUGE list of quotes as well. Let me go through them and get a few more for you. This is a long list of policy quotes, and people involved in the intelligence fields, so it really won't have a lot of actual research citations to back it up, since researching torture is highly unethical. Although, I think the Stanford Study may be about as close as you can get. My apologies for some of the references, they aren't always the most impartial, or reliable, so anything below this should probably be taken with a grain of salt. And of course, should you want to read it all, the inescapable conclusion is that torture does not work as a reliable interrogation technique, and never has. According to the Washington Post, the CIA’s top spy – Michael Sulick, head of the CIA’s National Clandestine Service – said that the spy agency has seen no fall-off in intelligence since waterboarding was banned by the Obama administration. “I don’t think we’ve suffered at all from an intelligence standpoint.” The CIA’s own Inspector General wrote that waterboarding was not “efficacious” in producing information. A 30-year veteran of CIA’s operations directorate who rose to the most senior managerial ranks (Milton Bearden) says (as quoted by senior CIA agent and Presidential briefer Ray McGovern) : It is irresponsible for any administration not to tell a credible story that would convince critics at home and abroad that this torture has served some useful purpose. This is not just because the old hands overwhelmingly believe that torture doesn’t work — it doesn’t — but also because they know that torture creates more terrorists and fosters more acts of terror than it could possibly neutralize. A former high-level CIA officer (Philip Giraldi) states : Many governments that have routinely tortured to obtain information have abandoned the practice when they discovered that other approaches actually worked better for extracting information. Israel prohibited torturing Palestinian terrorist suspects in 1999. 
Even the German Gestapo stopped torturing French resistance captives when it determined that treating prisoners well actually produced more and better intelligence. A retired C.I.A. officer who oversaw the interrogation of a high-level detainee in 2002 (Glenn L. Carle) says : [Coercive techniques] didn’t provide useful, meaningful, trustworthy information…Everyone was deeply concerned and most felt it was un-American and did not work.” A former top Air Force interrogator who led the team that tracked down Abu Musab al-Zarqawi, who has conducted hundreds of interrogations of high ranking Al Qaida members and supervising more than one thousand, and wrote a book called How to Break a Terrorist writes : As the senior interrogator in Iraq for a task force charged with hunting down Abu Musab Al Zarqawi, the former Al Qaida leader and mass murderer, I listened time and time again to captured foreign fighters cite the torture and abuse at Abu Ghraib and Guantanamo as their main reason for coming to Iraq to fight. Consider that 90 percent of the suicide bombers in Iraq are these foreign fighters and you can easily conclude that we have lost hundreds, if not thousands, of American lives because of our policy of torture and abuse. But that’s only the past. Somewhere in the world there are other young Muslims who have joined Al Qaida because we tortured and abused prisoners. These men will certainly carry out future attacks against Americans, either in Iraq, Afghanistan, or possibly even here. And that’s not to mention numerous other Muslims who support Al Qaida, either financially or in other ways, because they are outraged that the United States tortured and abused Muslim prisoners. In addition, torture and abuse has made us less safe because detainees are less likely to cooperate during interrogations if they don’t trust us. I know from having conducted hundreds of interrogations of high ranking Al Qaida members and supervising more than one thousand, that when a captured Al Qaida member sees us live up to our stated principles they are more willing to negotiate and cooperate with us. When we torture or abuse them, it hardens their resolve and reaffirms why they picked up arms. He also says : [Torture is] extremely ineffective, and it’s counter-productive to what we’re trying to accomplish. When we torture somebody, it hardens their resolve … The information that you get is unreliable. … And even if you do get reliable information, you’re able to stop a terrorist attack, al Qaeda’s then going to use the fact that we torture people to recruit new members. And he repeats : I learned in Iraq that the No. 1 reason foreign fighters flocked there to fight were the abuses carried out at Abu Ghraib and Guantanamo. He said last month : They don’t want to talk about the long term consequences that cost the lives of Americans…. [The way the U.S. treated its prisoners] was al-Qaeda’s number-one recruiting tool and brought in thousands of foreign fighters who killed American soldiers. The FBI interrogators who actually interviewed some of the 9/11 suspects say torture didn’t work. Another FBI interrogator of 9/11 suspects said : I was in the middle of this, and it’s not true that these [aggressive] techniques were effective. A third former FBI interrogator — who interrogated Al Qaeda suspects — says categorically that torture does not help collect intelligence . On the other hand he says that torture actually turns people into terrorists. 
The FBI warned military interrogators in 2003 that enhanced interrogation techniques are "of questionable effectiveness" and cited a "lack of evidence of [enhanced techniques'] success." When long-time FBI director Mueller was asked whether any attacks on America had been disrupted thanks to intelligence obtained through "enhanced techniques", he responded "I don't believe that has been the case." The Senate Armed Services Committee unanimously found that torture doesn't work, stating: The administration's policies concerning [torture] and the resulting controversies damaged our ability to collect accurate intelligence that could save lives, strengthened the hand of our enemies, and compromised our moral authority. The military agency which actually provided advice on harsh interrogation techniques for use against terrorism suspects warned the Pentagon in 2002 that those techniques would produce "unreliable information." General Petraeus says that torture is unnecessary, hurts our national security and violates our American values. Retired 4-star General Barry McCaffrey - who Schwarzkopf called the hero of Desert Storm - agrees. The number 2 terrorism expert for the State Department says torture doesn't work, and just creates more terrorists. Former Navy Judge Advocate General Admiral John Hutson says: Fundamentally, those kinds of techniques are ineffective. If the goal is to gain actionable intelligence, and it is, and if that's important, and it is, then we have to use the techniques that are most effective. Torture is the technique of choice of the lazy, stupid and pseudo-tough. Army Colonel Stuart Herrington - a military intelligence specialist who interrogated generals under the command of Saddam Hussein and evaluated US detention operations at Guantánamo - notes that the process of obtaining information is hampered, not helped, by practices such as "slapping someone in the face and stripping them naked". Herrington and other former US military interrogators say: We know from experience that it is very difficult to elicit information from a detainee who has been abused. The abuse often only strengthens their resolve and makes it that much harder for an interrogator to find a way to elicit useful information. Major General Thomas Romig, former Army JAG, said: If you torture somebody, they'll tell you anything. I don't know anybody that is good at interrogation, has done it a lot, that will say that that's an effective means of getting information. ... So I don't think it's effective. Brigadier General David R. Irvine, retired Army Reserve strategic intelligence officer who taught prisoner interrogation and military law for 18 years with the Sixth Army Intelligence School, says torture doesn't work. The head of all U.S. intelligence said: The bottom line is these techniques have hurt our image around the world ... The damage they have done to our interests far outweighed whatever benefit they gave us and they are not essential to our national security. Former counter-terrorism czar Richard A. Clarke says that America's indefinite detention without trial and abuse of prisoners is a leading Al Qaeda recruiting tool. A former U.S. interrogator and counterintelligence agent, and Afghanistan veteran said, Torture puts our troops in danger, torture makes our troops less safe, torture creates terrorists. It's used so widely as a propaganda tool now in Afghanistan. All too often, detainees have pamphlets on them, depicting what happened at Guantanamo.
The first head of the Department of Homeland Security – Tom Ridge – says we were wrong to torture. The former British intelligence chairman says that waterboarding didn’t stop terror plots. A spokesman for the National Security Council ( Tommy Vietor) says : The bottom line is this: If we had some kind of smoking-gun intelligence from waterboarding in 2003, we would have taken out Osama bin Laden in 2003. The Marines weren’t keen on torture , either. As Vanity Fair reports : In researching this article, I spoke to numerous counterterrorist officials from agencies on both sides of the Atlantic. Their conclusion is unanimous: not only have coercive methods failed to generate significant and actionable intelligence, they have also caused the squandering of resources on a massive scale through false leads, chimerical plots, and unnecessary safety alerts…Here, they say, far from exposing a deadly plot, all torture did was lead to more torture of his supposed accomplices while also providing some misleading “information” that boosted the administration’s argument for invading Iraq. An Army psychologist – Major Paul Burney, Army’s Behavior Science Consulting Team psychologist – said (page 78 & 83) : It was stressed to me time and time again that psychological investigations have proven that harsh interrogations do not work. At best it will get you information that a prisoner thinks you want to hear to make the interrogation stop, but that information is strongly likely to be false. Interrogation techniques that rely on physical or adverse consequences are likely to garner inaccurate information and create an increased level of resistance…There is no evidence that the level of fear or discomfort evoked by a given technique has any consistent correlation to the volume or quality of information obtained. An expert on resisting torture – Terrence Russell, JPRA’s manager for research and development and a SERE specialist – said ( page 209 ): History has shown us that physical pressures are not effective for compelling an individual to give information or to do something’ and are not effective for gaining accurate, actionable intelligence. Okay, I think that's enough for this answer. I'll try to come back to this from time to time to add references and answers to questions. Keep in mind this is a charged subject, but the main science says it isn't reliable. And even if it were reliable, the ethics would cloud the issue beyond this site's charter, I think. | {
"source": [
"https://skeptics.stackexchange.com/questions/4498",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/39/"
]
} |
4,501 | Does Van Eck phreaking perform as described, i.e. allow a person to observe what is being displayed on a given computer screen (notably an LCD) from a distance without having any physical connection to the machine being monitored, and without the knowledge of the person being observed? | Yes, it does. A VGA or keyboard cable has the side effect of acting as an antenna. Both the eavesdropping and countermeasure techniques are widely known as TEMPEST (which was a codename used by the NSA). It's described in detail, with numerous references, here . Example from the above source: It is standard to use TEMPEST-protected terminals in the military ( a NATO standard requirement ), banks, embassies, government installations, etc. The buildings themselves are usually also TEMPEST protected. TEMPEST protection of hardware is basically shielding equipment and cables with metal, which acts as a Faraday cage. This also has another side effect: it gives some protection against EMP attack . BTW, the question gives away the answer: Van Eck's paper was published and peer reviewed. | {
"source": [
"https://skeptics.stackexchange.com/questions/4501",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1792/"
]
} |
4,508 | I was once watching a slideshow about the new IPv6, and it mentioned that it is large enough for every grain of sand on earth to be IP addressable. Is there any grain of truth behind this? (no pun intended) | Estimating the number of grains of sand on Earth is difficult. This source suggests 7.5x10^18 grains (7.5 quintillion), but only includes beaches (deserts, under-sea sand and other sources not included). This source suggests 10^20 to 10^24 grains (up to a septillion grains of sand). The number of addresses IPv6 could possibly address is 2^128 (including reserved addresses), or about 3.4x10^38 (340 undecillion). Even if you remove the reserved addresses you're still left with far more IPs than grains. In fact, assuming the highest estimate of grains of sand - around 10^24 - only 294 femtopercent (yes, femto, 10^-15) of the space would be used if every grain were allocated an IP. You could allocate 340 trillion planets with the same number of grains of sand before you even came close to filling up the address space. After all that, you'd still have 2.8x10^35 (280 decillion) addresses free.
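To double-check that arithmetic, here is a minimal Python sketch (the grain count is the high-end estimate quoted above, not a measurement):

```python
GRAINS = 10**24  # high-end estimate of sand grains on Earth
ADDRS = 2**128   # total IPv6 address space

print(f"{ADDRS:.4e}")           # ~3.4028e+38 addresses
print(f"{GRAINS / ADDRS:.3e}")  # ~2.939e-15 of the space used (~294 femtopercent)
print(f"{ADDRS // GRAINS:.3e}") # ~3.403e+14, i.e. ~340 trillion sand-covered planets
print(f"{ADDRS - 340 * 10**12 * GRAINS:.2e}")  # ~2.82e+35 left after 340 trillion of them
```
| {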
"source": [
"https://skeptics.stackexchange.com/questions/4508",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/686/"
]
} |
4,521 | I've read in a few places including in comments in a recent article that Man vs. Wild is staged and he has every necessity covered. Is Man vs. Wild staged or is Bear Grylls actually in dangerous situations when we see him hanging from rocks above rapids etc.? Example comment: This fraud is a fairy-floss adventurer
who always has a full production crew
along on his "solo" adventures. Not
much chance of him getting into real
strife as even his cook could save
him. And, BTW, his real name is
Humphrey! | The guy was caught sleeping in a hotel , fer Pete's sake. From The New York Times : But as Mark Weinert, who said he served as a consultant on the show, told The Times of London, “If you really believe everything happens the way it is shown on TV, you are being a little bit naive.” More details from The New York Post: According to Weinert, while filming in California’s Sierra Nevada mountains — an episode in which Grylls, 33, is seen biting off the head of a snake for breakfast — Grylls actually spent some nights with the show’s crew in a lodge outfitted with television, stone fireplaces, hot tubs and Internet access. This is in direct contrast with Grylls' stated position: Meant to counter that disbelief is a statement by Mr. Grylls at the beginning of each show, saying he undertakes his adventures carrying only a flint, a knife and maybe some water, and that a camera crew following his journey through the wilderness would not aid him in any way. [Same article] | {
"source": [
"https://skeptics.stackexchange.com/questions/4521",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/374/"
]
} |
4,532 | Related: Is there any explanation for a near-death experience? That question is about vague, light-in-the-tunnel near death experiences (NDEs). This question is about very specific claims I've heard advanced by Dr. Gary Habermas, a Christian apologist, who uses them in debates to argue for the existence of an afterlife/soul. See his website for more. Here are three examples of the extraordinary claims made by Dr. Habermas: A debate involving Arif Ahmed: There are some [NDE] cases that are so evidential -- this doesn't make them true, this doesn't force them -- but they've been written up in from (sic) 10 to 20 different peer-reviewed medical journals have covered these cases. In Part III of a debate posted on Habermas' website , he says: I know of a case where a guy who had no activity, who was clinically dead. [...] They resuscitated him at the hospital, the guy says, "Hey, I was watching you do all this," and he explained things. [...] He said "I noticed there was a number on top of your ambulance. This is the number." In this video , Habermas presents the story of a girl who was underwater for 19 minutes, taken to the hospital with no eye movement and no brain activity, and put in the ICU. Three days later she spontaneously awoke and claimed to have watched the doctors working on her, was able to describe the ER (that she was no longer in), said an angel allowed her to look into her home, and reported details from home such as a song that came on the radio and what her mom made for dinner. Habermas reports that the story was written up in the Journal of Pediatrics and another journal. My questions: Are there many of these "evidential cases" about NDEs written up in peer-reviewed medical journals as Habermas claims? Are there examples where specific things someone reports to have seen were shown to be true or false? In other words, are there any specific cases describing a definitive resolution with evidence vs. word-of-mouth stories? EDIT: As a fun and related counterexample, here is a fantastic clip from James Randi in which he describes his own out-of-body experience in which he saw vivid, specific details about the room: the color of the bedspread and where the cat was lying. When he recounted the event upon waking, others pointed out that the bedspread was in the laundry and that the cat had been outside all night long. | Edit: I've looked into this and am updating it to the full answer. Here goes... Re. Habermas and his claims in particular I emailed the source of these claims, Dr. Habermas, to try and obtain the "list of over one-hundred evidential cases" he refers to in his debates/talks. He did not have such a list (I can provide the exact email if requested). He suggested checking out his book, Beyond Death , and other articles/interviews/etc. for more sources. I followed up with a request for a reference to the girl who was underwater for 19 minutes (mentioned in the question, with a video link). He provided a reference to the journal in which the article appeared, Current Problems in Pediatrics and Adolescent Health Care . I tracked down the article on Dr. Morse's (the author's) SITE , available for download HERE .
The mention of that case is, indeed, very short and disappointing, which surprised me if Habermas' account of the same girl is correct; compare the amazing story in my question by Habermas with Morse's account here: I reported the first pediatric NDE, a 7-year-old girl who was without spontaneous heartbeat for 19 minutes and had fixed and dilated pupils. She recovered to give a detailed description of her own resuscitation including hearing pieces of conversations in the emergency room, accurately describing her own resuscitation with details such as nasal intubation and being placed in a CT scanner. This was followed by a spiritual journey with a spirit guide through a dark tunnel to a heavenly realm and a decision to return to consciousness. No mention of seeing her mother/brothers at their home some distance away and recalling details which were all verified to be true. The fact that she recounted details about what was happening to her own body is far less impressive, at least to me. The paper cited for this short summary is HERE and I don't have access, so I admit the possibility that far more details are present in the original paper. I will note that Morse's site (linked above) is absolutely filled with references to religion (Jesus/Christianity in particular), a spirit/soul, etc. While these details don't establish anything by themselves, I'm simply noting that he may have a particular interest in these experiences leaning in one particular direction. Lastly, I'll note that Morse believes he can successfully remote view . The religious motivations are one thing, but belief in remote viewing abilities is another. HERE Morse presents a document showing how he successfully remote viewed (cough). HERE , Morse performs a remote viewing live on video. I noted that his adjective list includes about everything one can imagine in describing what he's "seeing," many of them seeming like antonyms: lights, darks, flat, etched... Near Death Experiences in general: To close up, I'll list some interesting material I found on NDEs in general. THIS is an absolutely outstanding summary (fairly up to date as well) of NDE studies by Keith Augustine, including specific cases, references, and all. Just wonderful. Notable statements include: HERE is a section showing that non-Western NDEs have almost none of the features of "prototypical Western NDEs," decreasing the probability that an NDE is a snapshot into an objective, universal post-death reality seen by the NDEr. HERE , Augustine presents essentially what I was looking for in a section called, "Veridical Paranormal Perception During OBEs?" He takes several cases and shows that there need be nothing "paranormal" about patients' recollection of details. He also states that no conclusive studies have cleared the air about whether any details have been 1) impossible to know without "leaving" the body and 2) verified conclusively: But at the end of the day, we are left with no compelling evidence that NDErs have actually been able to obtain information from remote locations, and we have clear evidence that NDErs sometimes have false perceptions of the physical world during their experiences. THIS paper examines near death experiences in cardiac arrest patients and makes this statement: It is not clear whether the experiences of patients, who report that they have 'left their bodies' and viewed their own resuscitation procedures are veridical or are hallucinations.
Some patients do appear to have obtained information which they could not have obtained during unconsciousness. If this is so, it would suggest that some element of human consciousness is capable of separating from the body and obtaining information at a distance. However, it is also possible that the information that they report may have been gained from ordinary sensory sources. In this study, no out of body experiences occurred. The authors know of no prospective studies which have helped clarify this point. Not necessarily related, but THIS study shows that Catholics, Muslims, and atheists experience NDEs at approximately the same rate of occurrence (NDEs per survived cardiac arrest patient). So, it seems that at least two investigators familiar with the area of inquiry (Augustine's full article displaying an extremely long list of sources) have concluded that there are no known instances in which a patient knew of details that were later verified to be true and could not be learned by any method other than leaving the body. As an entertaining piece of dessert, I recalled James Randi's own Out of Body Experience, found HERE . He "floated" out of his body one night and recounted vivid details to his step-son the next morning (color of bed spread, where his cat was on the bed, and what the cat's eyes looked like), only to find that the specific bedspread was in the laundry and that the cat had been outside all night. Not an NDE, but in listening to him, it sounds like such a "typical" OBE, and it's just so fantastic that Randi of all people experienced it and then learned it was not real. | {
"source": [
"https://skeptics.stackexchange.com/questions/4532",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2459/"
]
} |
4,559 | Historian A. Roger Ekirch contends that in ancient times, people naturally had two sleeping periods at night: the first starts shortly after sundown and ends in a short break, where people were semi-conscious and would speak to each other about their dreams, or pray, or talk, or have sex. Wikipedia summary here . The second sleep would then end at dawn. Here's a good NYT article by him. The issue is, it seems like this is a pretty major claim: that the ancients had multiple sleep periods, yet there doesn't seem to be a lot of coverage of this theory, nor any academic critiques as far as I can find. Is this a legitimate theory, or does it have any possible problems? | The evidence is quite overwhelming. See, for example, the Sleep Research section on my website . There, among other items, you will find a direct link to my article, "Sleep We Have Lost," in which I first published this discovery: there is every reason to believe that segmented sleep, such as many wild animals still exhibit, had long been the natural pattern of our slumber before the modern age, with a provenance as old as humankind. [...] For the term "first sleep," I have discovered sixty-three references within a total of fifty-eight different sources from the period 1300–1800. [...] I have also found references to segmented sleep in twelve works of American fiction published during the first half of the nineteenth century.
(Full reference: "Sleep We Have Lost: Pre-industrial Slumber in the British Isles," American Historical Review, CV, no. 2 (April 2001), 343-387.) A fuller version of this research can be found in the 2005 book, At Day's Close: Night in Times Past (N.Y., W.W. Norton). I hope shortly to publish a fresh article that charts in considerable detail the transformation, which occurred in western cultures during the Industrial Revolution, from segmented slumber, the dominant pattern of sleep since time immemorial, to the pattern of consolidated sleep to which we currently aspire, if we don't always enjoy it. | {
"source": [
"https://skeptics.stackexchange.com/questions/4559",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/3567/"
]
} |
4,561 | A suggested reason why doctors get paid so much more in the US as opposed to other developed countries is that the American Medical Association ( AMA ) artificially limits the physician supply in order to drive up salaries. I found this article which blames the AMA, but gives as its only source Milton Friedman's book from 1962. A more recent article dates from 1986. Lew Rockwell also blames the AMA, but he also doesn't cite too many sources nor go into specifics. So, does the AMA limit doctor certifications in order to increase salaries? EDIT: At Cos's suggestion, I would like to remark that it is unlikely that you will find an interview with the head of the AMA saying "we want to screw over new doctors so existing ones get paid more." So the standard of evidence is something like: Has the AMA (since 1962) had policies (or taken action) to restrict the expansion of existing medical schools or discourage the creation of new medical schools? Are doctors in short supply? If so, can this be explained independently of the AMA policies? | This USA today article from 2005 confirms that the AMA and other organizations were indeed actively seeking to limit the number of new physicians being trained to prevent a projected surplus. For the past quarter-century, the American Medical Association and other industry groups have predicted a glut of doctors and worked to limit the number of new physicians. In 1994, the Journal of the American Medical Association predicted a surplus of 165,000 doctors by 2000. However once the looming shortage became apparent, these efforts were reversed. For example the American Association of Medical Colleges (AAMC) set the goal of increasing medical school enrollment by 30% from 2002 levels by 2015. Unfortunately they are already behind on this goal . More importantly, medical school itself is not the rate-limiting step in training new physicians. As a recent, excellent article in the Seattle Times points out, In order to become practicing physicians, graduates must complete at least three years of residency training, usually in large teaching hospitals. Without more residency slots, the number of physicians entering the workforce cannot increase. (If the number of U.S. medical school graduates increased, but the cap were left in place, graduates of U.S. medical schools, who have preference for residency slots, would replace graduates of foreign schools, but that would have no net impact on total physician supply.) The article goes on: The logjam in residency openings stems from the 1997 Balanced Budget Act. At that time, the number of residency slots funded by Medicare (the principal source of residency funding) was capped at around 100,000, and that cap has remained in place ever since. The article also includes a fairly in-depth account of the mid-00's reversal of fears from surplus to shortage which I won't bother to blockquote here. It's worth reading if you're really interested. In summary, while this claim may have had some truth in the past, it is certainly not true now as the major professional organizations are actively lobbying to expand medical education. Unfortunately at the moment the major limiting factor in that expansion is federal health spending, which in the current political environment is a hard sell even for the powerful AMA lobby. | {
"source": [
"https://skeptics.stackexchange.com/questions/4561",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1094/"
]
} |
4,588 | I have a friend who told me that one of the reasons why he participates actively in his religion is because praying helps him get through tough times in his life, mainly because he can believe that God will take care of his problems. I was wondering if humans in general benefit from religion in any positive psychological way. | Yes. Levin & Chatters, Journal of Aging and Health -- "Religion, Health, and Psychological Well-Being in Older Adults" (1998): These findings provide mixed support for the three study hypotheses, with Hypothesis 1 (an association between organizational religiosity and health status) faring best. In general, results suggest that (a) religious involvement is moderately and significantly associated with health status and psychological well-being; (b) these associations withstand controlling for effects of key sociodemographic constructs such as age, race, and gender; (c) these associations emerge across a variety of indicators; and (d) results are present, though inconsistent, across samples drawn from three large-scale national probability surveys conducted in the 1970s and 1980s. ( SOURCE ) This is a meta-analysis of several surveys from the 70s and 80s, so the data may not be representative of society today. There are some caveats provided in the discussion, as well, such as that one set of data did favor the correlation between religiosity and health, but not religiosity and psychological well-being. Abdel-Khalek, Mental Health, Religion, & Culture -- "Happiness, health, and religiosity: Significant relations" (2006): Based on the self-rating scales, the current data provide strong evidence that, among a large sample of Kuwaiti Muslim undergraduate students, religious people are happier. ...the main predictor of happiness was mental health. So, the self-rating of religiosity came as a predictor of happiness, but to a lesser degree. ( SOURCE ) Ross, Journal for the Scientific Study of Religion -- "Religion and Psychological Distress" (1990): The positive emotional function of religion has been well accepted, but the evidence has not been conclusive. Furthermore, research has rarely made explicit comparisons to persons who claim to have no religion. Using a representative sample of Illinois residents (and controlling for sociodemographics and willingness to express feelings), I found that the stronger a person's religious belief, the lower the level of psychological distress. ( SOURCE ) This is an old study, and featured in a journal with a title that may reveal an agenda. I can't be sure, but at least want to mention it. I would be thrilled to see a footnote for the statement that "the positive emotional function of religion has been well accepted..." That might provide many more sources for this claim. Levin, Social Science & Medicine -- "Religion and health: Is there an association, is it valid, and is it causal?" (2002): This paper reviews evidence for a relationship between religion and health. Hundreds of epidemiologic studies have reported statistically significant, salutary effects of religious indicators on morbidity and mortality. 
However, this does not necessarily imply that religion influences health; three questions must first be answered: “Is there an association?”, “Is it valid?”, and, “Is it causal?” Evidence presented in this paper suggests that the answers to these respective questions are “yes,” “probably,” and “maybe.” ( SOURCE ) Chida, Steptoe, Powell, Psychotherapy and Psychosomatics -- "Religiosity/Spirituality and Mortality" (2008): The results of the meta-analyses showed that religiosity/spirituality was associated with reduced mortality in healthy population studies... but not in diseased population studies. ( SOURCE ) I realize this is not a psychological benefit, but still found it interesting, and it appears (from the comments) that your original question may have included requests for examining correlation between religion and other benefits as well. Davidson et al., Psychosomatic Medicine -- "Alterations in Brain and Immune Function Produced by Mindfulness Meditation" (2003) We report for the first time significant increases in left-sided anterior activation, a pattern previously associated with positive affect, in the meditators compared with the nonmeditators. We also found significant increases in antibody titers to influenza vaccine among subjects in the meditation compared with those in the wait-list control group. Finally, the magnitude of increase in left-sided activation predicted the magnitude of antibody titer rise to the vaccine. ( SOURCE ) I was actually forwarded this study a few weeks ago and list it here to show that while the literature suggests that religion does have a positive effect on health and psychological states, so does mindfulness/meditation. Davidson also gives a talk entitled, "Be Happy Like a Monk" -- one can watch it or request the transcript HERE , if interested. Just for some counter-points, I thought I'd list these: Effects of participation in Alcoholics Anonymous (AA) ( LINK ): Affiliation with AA after treatment was related to maintenance of self-efficacy and motivation, as well as to increased active coping efforts. I offer this simply to illustrate that (as Levin's paper about causality also aims at), it might not be merely religion that improves psychological conditions, but the aspects of social bonding, support, being accepted, having an important goal, a common purpose, etc. Paul, Journal of Religion & Society -- "Cross-National Correlations of Quantifiable Societal Health with Popular Religiosity and Secularism in the Prosperous Democracies" (2005): In general, higher rates of belief in and worship of a creator correlate with higher rates of homicide, juvenile and early adult mortality, STD infection rates, teen pregnancy, and abortion in the prosperous democracies (Figures 1-9). The most theistic prosperous democracy, the U.S., is exceptional, but not in the manner Franklin predicted. The United States is almost always the most dysfunctional of the developed democracies, sometimes spectacularly so, and almost always scores poorly. I add this, as these individual effects of religiosity may be measurable, but perhaps religiosity on the societal level, or at least various forms of it, are not correlated in the same manner, at least with external/societal benefits. As always, correlation does not equal causation. While there is evidence supporting the claim examined here, the fact that it doesn't seem to matter which religion it is, that AA may have such benefits, and that individual benefits doesn't scale as a society makes me want to look further. 
I'd be particularly interested in the Levin paper above, for the abstract also contains this: Third, alternative explanations for observed associations between religion and health are described. Finding this and other papers looking at similar things might be quite interesting, unless the only question of interest here is purely about religion/health/well-being. Personally, I'd like to know the underlying mechanisms as well, as the benefits of religion might be reducible to lower factors that could prove useful. | {
"source": [
"https://skeptics.stackexchange.com/questions/4588",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/-1/"
]
} |
4,611 | I received this Mercola announcement for the movie, Burzynski about Stanislaw Burzynski 's (allegedly) amazing cure for cancer, which uses antineoplastons . The Mercola article claims some fairly hefty things, such as: Burzynski, the Movie is the story of a medical doctor and Ph.D biochemist named Dr. Stanislaw Burzynski who won the largest, and possibly the most convoluted and intriguing legal battle against the Food and Drug Administration in American history. and You will learn that not only did the US Federal government spend 14 years actively suppressing a cancer treatment that had a FAR greater success rate than any other treatment available, they also spent well over $60 million of US taxpayer dollars trying to put the inventor of the treatment in jail in order to steal his patents and either suppress or cash in on his discovery. But Wikipedia says: Burzynski had appealed the limitations on his advertising [due to a cease and desist order from the FDA] on the grounds of free speech, but the appeal court upheld the decision, stating that "Burzynski's commercial speech does not concern a lawful activity." ...The 2010 film Burzynski directed by Eric Merola, documents Burzynski's efforts to gain FDA approval for the therapy. [emphasis mine] My resultant questions are: Is there any evidence for his "cure"? In order for the FDA to suppress a cure that had a "FAR greater success rate than any other treatment available," well, it has to be shown to be successful at all. Is Mercola correct in reporting that Burzynski won the most important battle against the FDA in American history? Wikipedia seems to indicate that the FDA ruling still stands. I suspect that the new, shiny developments in this industry that one doesn't hear about until Mercola advertises it were probably never much to begin with. Also, they unsurprisingly tend to involve aspects of a "big government conspiracy," and the bundled accusation that the only thing the US government cares about is keeping its citizens diseased and broke. Nevertheless, I'm still interested in facts others can find, as well as making this a reference for other googlers. I will say that he has quite the list of publications , but it's difficult to find reviews of Burzynski's work that aren't published by him. | From 2008: http://www.cancer.gov/cancertopics/pdq/cam/antineoplastons/patient/page1 My government firewall strips all formatting, so I can't do what is required to make this answer look pretty, but the basic gist is (again, suggest you go to the Cancer.gov site for the links, and emphasis mine): Antineoplastons are chemical compounds that are found normally in urine and blood. For use in medical research, antineoplastons can be made from chemicals in a laboratory. (See Question 1.) Antineoplaston therapy was developed by Dr. S. R. Burzynski, who proposed the use of antineoplastons as a possible cancer treatment in 1976. (See Question 2.) No randomized, controlled trials showing the effectiveness of antineoplastons have been published in peer-reviewed scientific journals. (See Question 6.) Nonrandomized clinical trials are ongoing at Dr. Burzynski’s clinic to study the effect of antineoplastons on cancer. (See Question 6.) Antineoplastons have caused mild side effects and some serious nervous system problems. (See Question 7.) Antineoplastons are not approved by the U. S. Food and Drug Administration for the prevention or treatment of any disease. (See Question 8.) 
EDIT TO ADD: You have to ask, if this is a cure, and the FDA is suppressing it, why isn't it being used in areas where the FDA has no authority? Things that make you go "Hmm?" Edit from Asker: This was a great answer! I'm simply adding links to two more resources I found on my own to add to it for anyone else who stumbles across this (hope the answerer does not mind!). HERE is quite a long writeup/interview with Burzynski by the Houston Press. In 1998, Paul Goldberg, editor of The Cancer Letter, a D.C.-based newsletter covering cancer research and drug approval, investigated Burzynski's claims up to that point. He asked three renowned and independent researchers to examine Burzynski's scientific protocols — all three said they could not make sense of the data, saying it did not resemble any commonly accepted models. Ten years later, Goldberg and two of those doctors don't feel any differently. Henry Friedman, a neuro-oncologist at the Duke University Medical Center, was one of the independent doctors who reviewed the data for Goldberg. [He said,] "Despite thousands of patients treated with the antineoplastons, no one has yet shown in a convincing fashion, [through] the rigorous requirements for peer review, that the therapy works"... HERE is a summary about Burzynski from QuackWatch. Burzynski has never demonstrated that A-2.1 (PA) or "soluble A-10" (PA and PAG) are effective against cancer or that tumor cells from patients treated with these antineoplastons have been "normalized." Tests of antineoplastons at the National Cancer Institute have never been positive. The drug company Sigma-Tau Pharmaceuticals could not duplicate Burzynski's claims for AS-2.1 and A-10. The Japanese National Cancer Institute has reported that antineoplastons did not work in their studies. No Burzynski coauthors have endorsed his use of antineoplastons in cancer patients. | {
"source": [
"https://skeptics.stackexchange.com/questions/4611",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2459/"
]
} |
4,630 | Reddit is all abuzz about DRM, piracy and all related topics. I don't really know much about the details in the technology here, but people were throwing around statements like these (all pulled from the same thread ): DRM would only affect pirates if everyone had to crack their game, but no, anyone can easily play these games because someone has already cracked the DRM, so it's no use at all, NO pirates have to deal with it. My problem with DRM is that I don't see any evidence that it actually does anything to prevent piracy. [...] DRM only seems to end up inconveniencing legitimate paying customers. In some cases it's bad enough that after purchasing a game I will just go ahead and illegally download it because in doing so I'm getting a superior version. And other similar comments can be quickly found. The basic gist seems to be that DRM is so easily bypassed by pirates that it may as well not be there. So, assuming that the purpose of DRM is to prevent pirating a game, does it work? | Short answer: No. Long answer: You need to look at the definition of "effective". Does it accomplish the purpose of preventing people from pirating the game it protects? Sometimes . And even then, usually only briefly. As a case study, let's look at Assassin's Creed 2 by Ubisoft. According to press releases at the time they had instituted a radical new DRM method that required the player to be constantly connected to the internet to make sure their account was and remained verified, and losing connection to the authentication servers for more than thirty seconds dropped you out of the game. Ubisoft hailed it as the ultimate anti-piracy solution and smugly dismissed everyone who complained about how it's ridiculous for a single-player game to require an internet connection or how it massively inconvenienced anyone with a spotty connection as 'whining would-be pirates'. There was a (mostly) working crack almost before the game was good and well released , and a fully functioning one within a month . The crack was thorough enough that Ubisoft's next game using the same DRM was fully cracked within days. Total "effectiveness" of this DRM scheme: One month of possibly limited piracy. Negative consequences: Severely inconvenienced paying customers who needed permanent internet connections just to be able to play a single-player game; massive inconvenience when the release day instant load took the authentication servers down (Ubisoft swears up and down that it was a DDoS by said whining pirates, disregarding the fact that said pirates were the only ones at that point that could play the game); enraging a nontrivial chunk of their former customer base who have subsequently declined to buy any new Ubisoft games and instead take their money to companies that don't accuse them all of thievery. Was this "effective"? Not unless the intended purpose was to lose business. Don't get me wrong. I understand the publishers' dilemma -- they really would like to get paid for the games they design, create and/or publish. And I have no objection to paying them, and if they want some kind of reassurance that the people who play their games did pay for them, that's entirely reasonable. And I like digital distribution -- it means not having to spend time traveling to town and searching five different game stores only to discover that the game is completely sold out -- and I understand that said digital distribution means that asking for a word in the manual doesn't really work anymore. 
And far too many pirates try to justify themselves with this kind of circular logic . On the other hand, "reasonable" efforts to check whether a game hasn't been pirated do not, in my opinion, extend to the electronic equivalent of a body cavity search every time you make a purchase at the supermarket - another notorious example is the StarForce "Copy Protection Scheme" that amounted to the equivalent of a rootkit . (You may note that the various mouthpieces defending StarForce also all insist that everyone complaining about it are 'obviously from international piracy groups' that really are only fearmongering because of the way the awesome system completely thwarts them. It's a recurring refrain) And the above doesn't cover the long-term problem of what happens when, say, your old install CD gets too scratchy for the CD check DRM to recognize it (happened to me with three separate games; I wound up downloading a crack for one of them, purchasing the second via Steam when it was on sale and getting the third when it appeared on GOG.com . Funny anecdote, but the game I'd downloaded the crack for always had some kind of issue with the in-game cutscenes not playing, which was cleared up by the crack as well. Score one for the pirates.) XKCD pointed out that basic problem very succinctly , and it doesn't apply to just games. Of course, there are DRM solutions that are more acceptable -- stepping away from the purity pedestal insisting that no DRM is ever "acceptable" -- such as Steam , which not only checks just once during install, and lets you play entirely offline if that's what you want, but also serves big heaps of added value such as automatic updating, keeping track of what you bought so you can reinstall it at any time, an easy to access storefront, regular sales and a social network to sweeten the pot. Given the way Gabe Newell is rolling in cash at the moment, you'd almost have to suspect that maybe the whole "let's not treat our customers like criminal scum" approach has something going for it. Incidentally, he had a few things to say about DRM as well . Additionally, many "indie games" don't bother with DRM at all, or at most use a serial key -- the additional cost to develop or license a DRM solution would eat unacceptably into their profits. Again, given the popularity of a lot of these, the lack of DRM doesn't seem to have hurt their business model much. So to summarize: In my opinion - and that of several fairly big names in the game industry as well - overly restrictive DRM does far more harm than good, especially when customers realize that they can either pay money for the game and suffer the DRM's inconveniences, or download the pirated version, play the game for free and have a better gaming experience because the DRM isn't getting on their nerves. On the other hand, nonrestrictive DRM that adds significant value to the playing experience makes for repeat customers. | {
"source": [
"https://skeptics.stackexchange.com/questions/4630",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2433/"
]
} |
4,714 | About a year ago I heard that burning your food can lead to an increased risk of getting cancer. The explanation was that the burned portion of the food was carcinogenic. The Carbon that would be produced in burning of the food was also involved in this somehow although its relevance wasn't clearly explained. I was dubious of the claim until I posed the question to some chemist friends of mine who, to my surprise, were not as dismissive of the idea as I was. Is burning your food something that can increase the likelihood of getting cancer and, if so, it the increase enough to be concerned about? Update This question has hit the headlines again (at least in the UK early in 2017). For example, The Independent reports: Over-cooked potatoes and burnt toast could cause cancer, new research suggests The headlines have emerged because the Food Standards Agency has issued new advice: The Food Standards Agency (FSA) has issued a public warning over the risks of acrylamide - a chemical compound that forms in some foods when they are cooked at high temperatures (above 120C). A new campaign tells people how they can cut their risk, including opting for a gold colour - rather than darker brown - when frying, roasting, baking, grilling, or toasting. The warnings are not just about burnt food but about well-cooked food and specifically mention acrylamide as the guilty party. update 2018 The story has reappeared (again) in the UK where a major supermarket has been criticised by several bodies for selling "well fired" bread: The Sun , for example, reports: Experts claim that the company should warn people of the blackened edges of the bread, as they may contain a cancer-causing chemical. The story appears to be based on the same Food standards Agency advice that triggered some of the previous stories. | Acrylamide (C 3 H 5 NO) is a chemical compound produced when starchy foods are burnt . It is also in coffee, prunes and olives amongst other foods, and is inhaled from cigarette smoking . Ingestion of acrylamide has been linked to a number of health concerns, including postmenopausal
endometrial and ovarian cancer in women, neuropathy, and male fertility issues at extreme doses. (Direct exposure to acrylamide causes problems too, but that's out of scope here.) For balance, acrylamide has been ruled out of causing several other concerns, such as breast cancer and cancer through exposure at work. The Wikipedia page on acrylamide provides a more comprehensive list of the research and various governments' strategies for dealing with it. Note: None of this answer addresses the seriousness of the dosages found on a typical piece of burnt beef. The issue may be too small for serious concern, when placed in the context of other dietary concerns. @Matt Black's answer addresses this shortcoming - please consider it for an upvote. | {
"source": [
"https://skeptics.stackexchange.com/questions/4714",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/3159/"
]
} |
4,724 | I've heard a lot of hype lately about Non-Celiac Gluten Sensitivity and, seeing people buy plenty of "Gluten-Free" goodies at the local grocery store, I am wondering: Is Non-Celiac Gluten Sensitivity an actual illness or just another trendy diet? | Because the condition is so variable and widely thought to be underdiagnosed, it's a little of both, actually. And the term "non-celiac sensitivity" usually refers to sub-clinical cases of Celiac where the symptoms are present, but not of such a nature that the patient ever seeks medical attention regarding them. First, what is Celiac? Celiac disease (CD) is an immune-mediated disease of the intestines that is triggered by the ingestion of gluten in genetically susceptible individuals. Gluten is the major protein component of wheat, rye, and barley. source (medscape link) Basically, the condition can be explained as: The susceptible person ingests gluten. The body's immunologic response goes haywire, because it identifies the protein in the wheat gluten (usually gliadin) as "foreign" and attacks. The intestinal mucosa is damaged in the cross-fire. The damaged mucosa is no longer able to function properly and as a result will not properly absorb nutrients until it is repaired. Current research is indicating a possible genetic link: Genetic predisposition plays a key role in CD and considerable progress has been made recently in identifying genes that are responsible for CD predisposition. It is well known that CD is strongly associated with specific HLA class II genes known as HLA-DQ2 and HLA-DQ8 located on chromosome 6p21. source (medscape link) The signs and symptoms a person with this condition will manifest are dependent on many things, such as the length of intestine involved, and age. In infants Celiac can be life-threatening or cause permanent sequelae such as growth and developmental delays, weakness and muscle wasting. In older children gastrointestinal symptoms are often seen, such as abdominal pain, diarrhea, dyspepsia, flatulence and weight loss. In adults the condition is less likely to be life-threatening, and symptoms are often vague or non-specific, such as impaired fertility, fatigue, depression, anemia, and sometimes short stature. Adults are more likely to manifest these symptoms than the digestive symptoms displayed in younger patients with celiac. The symptoms exhibit this wide range and variety of presentations because they often co-present with the symptoms of malabsorption caused by the damage done, which is dependent on the amount of intestine the disease has damaged. Also relevant is the fact that up to 40% of patients who have been diagnosed serologically (in the US, serologic tests are usually the IgA endomysial antibody test and the IgA tTG antibody test, which have >90% sensitivity and >95% specificity for celiac, so they are fairly reliable; there are also other tests available, though) for celiac have the "silent" form of the condition, in which there are little to no symptoms present. The reason for this remains unclear (this comes directly from Current Medical Diagnosis & Treatment, the one on my shelf, sorry). Those who may have a "silent" or sub-clinical (sometimes referred to as "Non-Celiac sensitivity" popularly, but not by the medical community) case of Celiac will of course benefit from reducing, if not eliminating, gluten from the diet. However, true elimination of gluten exposure is extremely difficult, as it is in practically everything, not merely food products, but many medications and some glues, such as one used in certain brands of cigarettes (I know, yet another reason not to smoke). Even though attempts at self diagnosis are almost always unwise even for trained medical professionals, if a person perhaps has a family history of the disease or a strong reason to suspect they might be experiencing a reaction to gluten, certainly a trial of a gluten-free diet for a period of time is a relatively safe way to see if improvement occurs. However, there are certain comorbidities with celiac disease that should be looked into by a physician, such as diabetes, osteoporosis, and increased risk for certain cancers, if you think you actually do have celiac. And now for the fads... The main "selling point" for the gluten-free diet is the reported under-diagnosing of celiac (at least in the US). As Harvard Health reminds us: Celiac specialists say the disease isn't diagnosed as often as it should be. As a result, many people suffer with it for years, often after getting other — and incorrect — diagnoses and useless treatments. This leaves much room for people to peddle the gluten-free diet, especially if those selling the plans and products can convince you that you are in that 40% mentioned above who may have "silent" or sub-clinical celiac. Also, since there are virtually no documented side effects of an otherwise healthy person adopting a gluten-free diet, there's the added selling point of the "what's the harm?" argument. However, just because there aren't many documented adverse effects does not mean there are benefits, and even most of the advocates for the diet make vague and weak arguments as to the benefits of a gluten-free diet for a healthy individual, sometimes relying on the "common sense" argument that people adhering to a gluten-free diet would also be avoiding many of the most common fried and fattening foods. Basing your diet off of the gluten-free phenomenon can be genuinely healthy and may benefit your cholesterol levels, digestion, and energy level. You don't have to worry about the little things like soy sauce and malt flavorings, but if you avoid the major red flags in the gluten-free diet, you just might start to feel healthier. For example, you would have to avoid everything that's fried because of the breading, which would allow you to avoid the oil and fat, as well. source Another argument which falls on the side of those selling the gluten-free diet is that there has been at least one study indicating that the higher cost and lower availability of gluten-free products is one of the chief factors for non-compliance in diagnosed celiac patients, and an increase in the popularity and availability of gluten-free products may help this. Conclusions: There is limited availability of gluten-free foods and they are generally more expensive than their standard counterparts. This may impact on compliance to a gluten-free diet, with potential nutritional and clinical consequences, together with an increased risk of complications. There has been some study done into how gluten metabolism affects the natural bacterial activity in the intestines, such as this: The activity of the intestinal microbiota is modified by gluten intake in the diet. The incorporation of gluten in the diet increases the activity of a gluten proteolytic activity in the faeces. However, there is no proven practical application for this finding, as far as I know. And there have been no documented instances of benefits for non-celiac patients adopting the gluten-free diet, even though there has been some mention of a weak correlation with lower cholesterol levels. In the end, it's not likely to harm you. But then again, it's probably not likely to help you, unless you actually are in that 40% of "silent" celiac cases.
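A quick aside on those serologic figures (>90% sensitivity, >95% specificity): how much a positive test actually tells you also depends on how common celiac is in the tested population. A minimal Python sketch of the Bayes arithmetic, where the ~1% prevalence is purely my illustrative assumption, not a figure from the sources above:

# Positive predictive value via Bayes' theorem.
# sensitivity/specificity are the figures quoted above;
# the 1% prevalence is an illustrative assumption, not sourced data.
sensitivity = 0.90   # P(test positive | celiac)
specificity = 0.95   # P(test negative | no celiac)
prevalence = 0.01    # assumed population prevalence

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
print(true_pos / (true_pos + false_pos))   # ~0.15

In other words, under these assumptions roughly 85% of positives in blanket screening would be false positives, which is one reason such tests are more informative in people with symptoms or a family history than as a general screen.
| {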
"source": [
"https://skeptics.stackexchange.com/questions/4724",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/3759/"
]
} |
4,740 | Someone who is opposed to energy saving and climate change (I did not realise they existed until I met him) has told me that public transport (buses, trains and so on) are less efficient than individual cars. I found it difficult to believe. A bus is very big, but it can hold many people. A car could carry maybe 5 people maximum and usually, when commuting, only one or two people will come. A bus could hold 30 or more. That means that an average car doing 35 mpg would only be better if a bus did about 1.2 mpg, which I find difficult to believe. I don't know about trains, though. This is also assuming the bus is full, which it might not be, but probably is most of the time, especially during morning commutes, in my experience. Maybe not all day, though. Is this guy lying to make a point, or does he actually have a point? | Yes , public buses appear to be worse than cars, at least based on the data I found for the US.
(At present and on average. See the "Caveats" section for a discussion about this.) Edit: the answer from DJClayworth appears to be based on the same information, I just noticed. bbc.co.uk just links to the overview for the Dept. of Energy report, but doesn't break things down. Update: I used data from the National Transit Database ( LINK ) to calculate BTUs/passenger-mile for every US city. I was initially unsure of how to do this, but I just wasn't looking at the right documents. The data I used is available in the bundle of Excel files titled, "RY 2009 Data Tables - Complete Set (Self-extracting xls)" ( LINK ). The necessary files are "T17_Energy_Consumption.xls" and "T19_Op_Stats_Service.xls." From these, I was able to compare fuel used to passenger-miles traveled for all buses in the country. The results are as follows: The Excel files above list all public transportation by type, so I sorted and pulled out all bus listings, then summed the listed passenger-miles for each state. I also used the conversion factors mentioned below to convert all of the fuel listings into BTUs and then simply divided BTUs by passenger-miles to find the rates. I used the value of 3,400 BTUs/passenger-mile as listed below and inserted "Car" into the chart. It appears 6th in the list; in other words, only five US states have achieved an efficiency higher than a car for transporting individuals. See below for discussion about BTUs. I left the rest of the answer mostly as it was -- it jibes fairly well with this data, other than the bus average coming out to about 5,000 BTUs/passenger-mile with the National Transit Database data vs. the Department of Energy value below of 4,300. Data/Analysis Data to attempt to answer this question may reside in the Transportation Energy Data Book, published by the US Department of Energy (DoE). We will be pulling from Edition 29, published in June 2010 (available HERE ). Table 2.12 is shown here ( LINK ): To highlight the pertinent values for 2008: Cars: 137 million cars traveled 1.6 trillion miles for a total of 2.6 trillion passenger-miles, and consumed 8.8 quadrillion BTUs of energy to do so. Buses: 67,000 buses traveled 2.4 billion miles for a total of 21.9 billion passenger-miles and consumed 95 trillion BTUs of energy to do so. Passenger-miles are a summation of (passengers_i * miles_i) over all trips (1 passenger-mile = 1 passenger travelling 1 mile, 2 passengers travelling 0.5 miles, etc.). The key value is BTU/passenger-mile: Cars: 3,437 BTUs/passenger-mile. Buses: 4,348 BTUs/passenger-mile. This means that, on average, it is taking buses more energy than it takes cars to transport a given number of individuals a given amount of distance. By using BTUs, fuel types are normalized by converting to energy per volume. See Table 2.5 in Chapter 2 ( LINK ) for a breakdown of fuel type usage by vehicle type. Also, Appendix A ( LINK ) provides the conversion factors for different fuels to BTUs. BTU Discussion The British Thermal Unit is a unit of energy. Analogs would be the calorie and the joule. Thus, the Department of Energy has taken each fuel type and translated it to an energy per volume output. Thus, if we know that a gallon of fuel X outputs Y BTUs, and we know the average number of BTUs required to move vehicle type Z a given distance, we have an efficiency value for each type of vehicle that has been normalized from fuel type to energy.
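To make the arithmetic behind those two key values explicit, here is a minimal Python sketch reproducing them from the 2008 figures above. The only liberty taken is using a slightly less rounded 2.56 trillion passenger-miles for cars, back-derived so the quoted 3,437 BTUs/passenger-mile comes out; everything else is straight from Table 2.12:

# Recompute BTUs per passenger-mile from the 2008 DoE figures quoted above.
car_btus = 8.8e15      # 8.8 quadrillion BTUs consumed by cars
car_pmi = 2.56e12      # ~2.6 trillion passenger-miles (back-derived rounding)
bus_btus = 95e12       # 95 trillion BTUs consumed by buses
bus_pmi = 21.9e9       # 21.9 billion passenger-miles

print(car_btus / car_pmi)   # ~3,437 BTUs/passenger-mile
print(bus_btus / bus_pmi)   # ~4,338 (quoted as 4,348, from unrounded inputs)

# Rough break-even: at 9.2 passengers/vehicle on average, buses would need
# about 9.2 * (4348 / 3437), i.e. ~11.6 passengers/vehicle, to match cars.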
Then we can analyze the energy required to transport a vehicle with its typical passenger load (9.2 passengers per vehicle for buses, and 1.57 passengers per vehicle for cars) and determine energy per "passenger-mile." We want a lower value here, since lower BTUs/passenger-mile means that it takes less energy to transport a given number of passengers a given distance. See Table 11.11 in Chapter 11 ( LINK ) for a breakdown of emissions by fuel type. Buses primarily use diesel, while cars primarily use gasoline; the emissions for these two aren't all that different, with diesel at 10,000 grams/gallon emitted and gasoline at 8,800 grams/gallon. Caveats First off, this is only US data. I have no idea how the rest of the world compares. Second, this is a snapshot. If buses were to increase their average "load factor" (persons/vehicle) such that their passenger-miles rose to roughly 30,000 million (30 billion), they would be more efficient than cars on average. If buses reduced their BTU consumption, this would also help a great deal (I was blown away by the fact that buses require almost 8x the BTUs to travel 1 vehicle-mile compared to cars and 7x compared to trucks). Now, this is the data for all buses in all cities in operation. Thus, there are probably some cities doing quite well, while others are doing horribly. This paper abstract seems to confirm the same (emphasis mine): The simulation results show that substitution of bus for car travel generally decreases the overall costs, particularly the costs of congestion, but increases exhaust emission costs if bus load factors are insufficiently high. In order to reduce exhaust emission costs from car to bus transfer at given load factors, the most effective policy option is to encourage the reduction of particulate emissions from bus engines. In terms of the overall costs, increasing bus load factors by relatively modest amounts can lead to substantial reductions in these overall costs. ( SOURCE ) So, it seems to depend on the usage efficiency of the vehicle. I also note that the rail figures in Table 2.12 show that "rail" travel does better than cars, so whatever circumstances surround that mode may be of interest. Lastly, as @Ian suggested, this comparison doesn't take into account the travel by individuals to the public transport departure location. This could be neutral in the case of walking, but it could be optimistic if folks are driving cars to a pickup location. | {
"source": [
"https://skeptics.stackexchange.com/questions/4740",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1626/"
]
} |
4,765 | Some time ago, the barefoot running movement started, claiming that it is more natural and reduces chronic pain. Has there been some research on it since then? A lot of people claim that it helped them, but it may be confirmation bias (only those with good experiences wrote about it, the others thought they had bad technique or there is something wrong with their feet or they started too quickly...) | The answer, basically, is yes . Though "healthier" is not a claim that is testable, there is a great deal of research on barefoot running that shows that if you run barefoot you will be injured less, and that there are other benefits. Research papers on barefoot running include: "Running Related Injury Prevention Through Barefoot Adaptations" by Steven E. Robbins and Adel M. Hanna of the Human Performance Group at Concordia University in Montréal, Québec found "an extremely low rate of running-related injury in barefoot populations", and concludes that the "modern running shoe appears responsible for the high injury frequency associated with running". This article was published by The American College of Sports Medicine's official journal, "Medicine & Science in Sports & Exercise". Another study, published in the "International Journal of Sport Medicine" titled "Mechanical Comparison of Barefoot and Shod Running" done by a group of scientists at the University of Saint-Etienne, France and the Department of Preventive and Rehabilitative Sports Medicine, University of Freiburg, Germany suggests that there is greater muscle activation throughout the body when running barefoot, and that the impact of each step is lesser. "Barefoot Running" which was presented at the 3rd International Sports Science Days in 2004 by Michael Warburton, and summarized the state of the research on the effects barefoot running: Running in shoes appears to increase the risk of ankle sprains, either by decreasing awareness of foot position or by increasing the twisting torque on the ankle during a stumble. Running in shoes appears to increase the risk of plantar fasciitis and other chronic injuries of the lower limb by modifying the transfer of shock to muscles and supporting structures. Running in bare feet reduces oxygen consumption by a few percent. Competitive running performance should therefore improve by a similar amount, but there has been no published research comparing the effect of barefoot and shod running on simulated or real competitive running performance. Research is needed to establish why runners choose not to run barefoot. Concern about puncture wounds, bruising, thermal injury, and overuse injury during the adaptation period are possibilities. Running shoes play an important protective role on some courses, in extreme weather conditions, and with certain pathologies of the lower limb. | {
"source": [
"https://skeptics.stackexchange.com/questions/4765",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/3098/"
]
} |
4,771 | There are conflicting reports of whether it is possible to safely recharge alkaline batteries (not lithium) that are not sold as rechargeable. The packaging for batteries regularly has cautions against recharging. However, there are also claims that they can be successfully recharged (e.g. Ref , Ref , Ref ). One of the recharger manufacturers claims: Our Alkaline battery charger will recharge standard alkaline batteries up to 20 times depending on the type and quality of batteries used. It is unclear whether the battery manufacturers' claims are just trying to sell more batteries. Can single-use, alkaline batteries be safely and successfully recharged in standard battery chargers? | So far, this appears to be a Yes , provided one has the right tools. I was first going to look at the chemistry involved, but figured it would be easier to just look at one of the top hits for chargers claiming to do this and examine its patent. One such device is the Battery Xtender . I searched for patents containing "alkaline battery recharger" and came across a few. I wanted to try to figure out which one was responsible for the "patented technology" claim in the link above. I simply searched the applicant's name and "battery xtender" and got it on the first hit: JD Pfeiffer of Quebec, Canada, owns the trademark to Battery Xtender, and his patent application was granted under US Patent #5,543,702 ( LINK ). You can read the patent, which contains circuit diagrams and descriptions of preferred current supplies and test methods to determine when charging is completed. Now... does it work? I can't be sure without testing one, but one of the requirements for a granted US Patent is utility -- it has to be useful ( LINK ). A charger that doesn't charge is not useful. Assuming the US Patent Office is doing their job, the data provided by the applicant with the application showed that this device was useful! You do have to jump through some hoops for this -- charging alkaline batteries requires that they not be depleted. Here's a page from their manual ( LINK ): So, note that instead of using an alkaline battery until it's dead, one would cut it off at about 1/5 to 1/6 of its normal life and recharge it. I don't know how long it takes to recharge, but this might amount to having an extra set or two of batteries if one is to keep cycling like this. Wiki has an article HERE about this which references pulsed current for charging; the whole article is quite vague and un-referenced, though. I didn't see anything about pulsing in my read of the Pfeiffer patent. So, with the right type of charger, one does appear to be able to recharge alkaline batteries intended for single use if recharged frequently and before draining very much. I wish there were more available test data from this device from external users/organizations. So far, nothing like that. Here's the best I was able to find. Perhaps some independent, controlled tests/reviews will come about down the road.
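As a rough sanity check on what that regime could yield, here is a minimal back-of-the-envelope Python sketch. Both inputs are claims, not measurements: the 20-recharge limit is the manufacturer's figure quoted in the question, and the one-fifth usable fraction is my reading of the manual excerpt above:

# Optimistic upper bound on service life from shallow-cycle recharging.
usable_fraction = 1 / 5   # use ~1/5 of normal life per cycle, then recharge
recharges = 20            # manufacturer's claimed maximum

total = 1 + recharges * usable_fraction   # in units of one battery's life
print(total)   # 5.0 -> at best ~5x a single cell's service life

That is an upper bound; real cells lose capacity with each recharge, so the practical gain would be smaller.
| {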
"source": [
"https://skeptics.stackexchange.com/questions/4771",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1850/"
]
} |
4,796 | Leaving aside same-sex couples, is abuse in heterosexual couples more commonly initiated by the husband? I am mostly interested in physical violence that is non-reciprocal. In other words, in a couple where only one spouse beats the other, is it significantly more common for the man to beat the woman? This belief seems to stem from the idea that men are more violent and are physically stronger. "Wife-beater" is a commonly heard accusation; I cannot recall hearing any stories of "husband-beaters". I have also seen claims by men's rights groups suggesting that women are more often the abuser in such a relationship. Given the contended point, is there a statistical difference in the initiating gender? | Statistics don't always agree on this issue. According to this study: Almost 24% of all relationships had some violence, and half (49.7%) of those were reciprocally violent. In nonreciprocally violent relationships, women were the perpetrators in more than 70% of the cases. Reciprocity was associated with more frequent violence among women (adjusted odds ratio [AOR]=2.3; 95% confidence interval [CI]=1.9, 2.8), but not men (AOR=1.26; 95% CI=0.9, 1.7). Regarding injury, men were more likely to inflict injury than were women (AOR=1.3; 95% CI=1.1, 1.5), and reciprocal intimate partner violence was associated with greater injury than was nonreciprocal intimate partner violence regardless of the gender of the perpetrator (AOR=4.4; 95% CI=3.6, 5.5). Methodology looks rock solid to me: We analyzed data on young US adults aged 18 to 28 years from the 2001 National Longitudinal Study of Adolescent Health, which contained information about partner violence and injury reported by 11,370 respondents on 18,761 heterosexual relationships. The idea that men are stronger, I think, is obviously true - which leads to women being more likely to suffer injuries. However, according to this study and others, women can be plenty aggressive as well, and it often seems as if society simply ignores this fact. Although apparently there's no shortage of physical violence by women against men, it is true that you won't see this often in the mainstream media. But it does happen, occasionally. Quote from the study discussed in the Guardian article: For the year preceding the survey, and excluding stalking, 5.6% of women and 4.1% of men reported having suffered non-sexual partner abuse (any abuse, threat, or force from a partner or ex-partner), a proportion of male victims of about 42%. Of these, 2.7% of women and 2.0% of men reported suffering actual force [assault or violence], a proportion of male victims of about 43%, which was designated as ‘severe’ in the case of 1.8% of women and 1.6% of men, a proportion of male victims of about 47%. These proportions are slightly higher than those found by Study 276 some four years earlier. Such proportions of male victims are almost double those found by the BCS of 2004/05 (23% based on numbers of incidents) and that of 2005/06 (20%). This suggests either a significant level of under-reporting especially by male victims of domestic abuse to these routine annual surveys or that basing the proportion on the numbers of incidents distorts the actual prevalence of male victims. To summarize: existing research doesn't necessarily support the stereotype of an abusive male partner; violence seems to be a problem for both genders. | {
"source": [
"https://skeptics.stackexchange.com/questions/4796",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2433/"
]
} |
4,821 | Related: Is there any evidence to support the benefits of lunar planting? ( LINK ) A couple years ago, I first ran across something called "moon wood," described as follows ( SOURCE ): Moonwood has recently received a good revival among the guitar community and scientists... Moonspruce is simply a name for spruce that was harvested and handled according to a century-old tradition from the Alp-regions in Europe. Carpenters and luthiers had recognized that wood that was cut under certain conditions differs from wood that is not cut using the old traditional way of handling. [It is] Cut an according tree within the last quarter of waning moon (end of waning moon phase) in the wintertime after the growing period of the tree has stopped (low sap flow). THIS site also gives some characteristics and claims evidence supports these desirable qualities: The amalgam of the data material... and the use of specific, statistic-based analysis allow for a determination of significant...lunar oriented components in the variability of moisture loss, shrinkage and relative weight. It was determined that the division waxing...and waning...points to significant, across-the-board differences in shrinkage, but final results are still outstanding. ...The inclusion of the reference weight (taken before felling) of each individual tree, independent of time, shows that this factor plays a superordinate role, but confirms the significance of the lunar models tested. There are also allegations that Stradivarius used "moon woods" to accomplish the legendary sound of his violins (mentioned in the source above as well as HERE as "legend has it" in a forum). There are guitars sold today boasting "moon harvested spruce" as a selling point ( EXAMPLE ). Questions Is there any evidence to support the superior qualities (either inherent qualities such as moisture retention, shrinkage, stiffness, etc. or end-product qualities such as sustain, complexity of tone, frequency profile, resonance, etc.) of moon-harvested woods used in musical instruments? Is there any evidence to support the claim that this is an "age-old" tradition used for many centuries? I ask as sources above discuss this as sort of following the wisdom of the "ancient experts" -- I'm wondering if it was completely fabricated more recently and just stuck because people tend to be fascinated with "the moon" and celestial influences on things. | | {
"source": [
"https://skeptics.stackexchange.com/questions/4821",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2459/"
]
} |
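A quick arithmetic check of the answer above: the "proportion of male victims" figures in the quoted survey are simple ratios of the reported prevalence rates. A minimal Python sketch (the grouping labels are mine, and it assumes roughly comparable numbers of male and female respondents, which the survey's phrasing implies):

```python
# Re-derive the "proportion of male victims" from the quoted prevalence
# rates. Each pair is (percent of women reporting, percent of men reporting);
# the male share is men / (men + women).

figures = {
    "non-sexual partner abuse": (5.6, 4.1),
    "actual force": (2.7, 2.0),
    "severe force": (1.8, 1.6),
}

for label, (women, men) in figures.items():
    male_share = men / (men + women) * 100
    print(f"{label}: {male_share:.0f}% male victims")
```

The printed shares (42%, 43%, and 47%) match the proportions stated in the survey text.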
4,825 | It's long floated around my ears that Disney movies have "adult content" secretly/intentionally "embedded" in their various movies. I've heard examples from Lion King, Aladdin, and the Little Mermaid. There may be others. HERE is another example of such a claim: I've known for awhile that some of Disney's children movies contain sexual scenes &/or words in them. Last night I thought, could these same movies be putting sexual messages in children's subconscious? The Little Mermaid: The priest has an errection during the ceremony...On the cover of this movie there is a penis. If you have a copy, one of the first 200, you can see it. ( VIDEO ) Aladdin: When he's at Jasmin's window & trying to calm her tiger down he says "Good teenagers take off their clothes." ( SOURCE ) The Lion King: When Simba jumps in the dust to run after his father the word "sex" can be seen in the dust. ( VIDEO ) Have these, and other claims like them, shown to be conclusively false? | This is all taken from Snopes - Disney Films : The Lion King Status: UNDETERMINED The generally accepted explanation is
that the letters were slipped in by a special effects group to form the abbreviation "S-F-X". The Little Mermaid Source Status: FALSE The plain truth is that the resemblance between the castle spire and a penis was purely accidental... Aladdin Status: FALSE Listen for yourself: Wav Audio Whatever is being said, to the casual listener the resulting phrase can certainly sound like "Good teenagers, take off your clothes", although the phrase is clearly the combination of two different voices speaking in two different tones. The image of a topless woman in The Rescuers has been confirmed though: Status: TRUE Unlike most rumors of risqué words or images hidden in Disney's animated films, this one is clearly true, and the images were undeniably purposely inserted into the movie. More: YouTube - Subliminal Messages in Disney Movies Straight Dope - Do Disney movies contain subliminal erotica? Aie Salas - Disney's Most Outrageous Message Moviefone - Tangled
"source": [
"https://skeptics.stackexchange.com/questions/4825",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2459/"
]
} |
4,840 | Is this video fake? http://www.youtube.com/watch?v=FyHYbsXt05k Most of the links regarding this video have no information besides the video itself and are very recent. How should I proceed to get more information about this expedition? | The description of the video claims that "Tribe in Papua New Guinea meets white man for the first time. Filmed in 1976. They have never seen modern civilization, or any modern technology." First of all, it is not from 1976. The date is incorrect on many videos because, most likely, someone misread the following disclaimer at the bottom of the original upload: Copyright Disclaimer under Section 107 of the Copyright Act 1976 All rights reserved to the owners the same. Finding the original upload also gives us a lot more information to go with, from its description box: The white man in the video is the film director Jean-Pierre Dutilleux. The tribe in question is the Toulambis. The video is allegedly filmed in 1998 (which, again, is false). Using that information, I was able to find The hunt for authenticity, an article published in the peer-reviewed journal The Journal of Pacific History, in which the author claims the video is fake, but not in the way you would expect. To quote the abstract: Living neither as cavemen nor as colonized subjects, the Ankave-Anga (Papua-New Guinea) are sufficiently isolated for journalists to have seen them as a "lost tribe", even though their "contact" with the outside world dated from the 1950s. Nonetheless, decades of interactions with the state, church and marketplace have not deeply altered their society. Australian archives and accounts of life "before the white man came", even though they refute journalistic dreams of authenticity, paradoxically portray places and times that history can hardly explain. Unfortunately, there is no English version of the article freely available online. There is, however, a French version of the article which can be read here. According to the article, it is apparently largely documented that Jean-Pierre Dutilleux was not the first white man to meet the Toulambis. From the 2004 English translation: Although they intermarry with the groups from the two other valleys (who are at a distance of one or two days' walk), are initiated at the same time as them and visit them regularly, they are sufficiently isolated for every European who passes through to feel compelled to take photographs of them. Prior to their appearance in the Stone Age in Paris-Match, they had allowed at least three anthropologists to photograph them: Jadran Mimica in 1979, myself in 1985 and Pascale Bonnemère in 1987. Situated downstream of the trade route that has historically provided the rest of the Ankave tribal group with steel tools ... the Yoye Amara/'Toulambi' abandoned their stone adzes at least 50 years ago. 'At least', because our informant Idzadze Erauye, who was born around 1945, had never seen any stone adzes in use; or again, because Witi Dzadze, Erwanguye Patse and Idzi Erauye (all about 60 years old in 1990) were very young initiates when the first steel blades arrived. The length of time that has passed since this move 'from stone to steel' is confirmed by a patrol officer who crossed the southern part of Ankave territory in August 1950. Though 'worn almost paper thin', metal tools were rare and were used communally, but they were well known, particularly to the 'Toulambi' who traded them. The Australian colonial archives also indicate that 'Toulambi' territory was
visited by at least six government patrols between 1929 and 1972. Interestingly, Jean-Pierre Dutilleux is also cited in the article, defending himself: "If the Toulambis are actors, we should give them a César Award." In either case, if you are fluent in French and are curious to see the whole documentary, it can be purchased online here for about three euros. | {
"source": [
"https://skeptics.stackexchange.com/questions/4840",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2769/"
]
} |
4,845 | I always thought pilots were paid well. I am currently watching the movie "Capitalism, A love Story," and Michael Moore interviews some pilots, in which the pilots say they make around $19,000 - $25,000 per year as their starting salary flying commercial airline passenger flights. Some say they even worked second jobs. Do they make such little money? Are there any concrete data for this? | The pay of pilots covers a huge spectrum. Experienced pilots for major airlines are well paid. However the pilots of regional airlines and commuter lines are sometimes paid much much less. This is partly because there is a glut of pilots right now, and also because pilots use these jobs as a way to gain the experience they need to be hired by the major airlines. This site (thanks erekalper) gives ranges of salaries. Note that some of them (hello United) go down to $21,000! Delta goes down to $35,000 and SkyWest to $45,000, neither a lot of money for someone who has had to undergo years of training and qualification and who literally has your life in their hands. The difference from airline to airline is also huge. This Wall Street Journal article (thanks fred) points out that the starting salary for US Airways is $21,000. Since they haven't been hiring for years there are no pilots on 'starting salary', but that's what they would get. The pilots of the Colgan Air plane that crashed into Buffalo were on about $24,000. So in short: Yes, it is perfectly possible that the guy piloting your plane could be earning a poverty wage . Make of that what you will. Non-airline commercial piloting jobs are paid even less, because they are the stepping stone to the junior airline positions (there is, incidentally, a licensing difference between an 'airline pilot' and a 'commercial pilot' - any reference to 'airline pilots' should exclude the guy flying a Piper Cub, even for money). (Technically some of the salaries in the first survey could be for a flight engineer, but flight engineers are a vanishing breed in these days of two-person cockpits.) Washington Post Professional Blog EDIT: jwenting says that the low figures for United may be because pilots working their regional subsidiaries list "United" as their employer. Looking for confirmation of that. And of course it still means there are low-paid pilots out there. | {
"source": [
"https://skeptics.stackexchange.com/questions/4845",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2079/"
]
} |
4,873 | Everybody keeps telling me don't put knives in dishwashers. And if I ask them why, nobody seems to have an answer or an argument to support his/her advice. So I thought I'll shoot the question to stackexchange: Do knives get dull in dishwashers? | Yes, knives get dull in the dishwasher, through rubbing against other items. When I bought a set of nice Wüsthof-Trident kitchen knives, I heard this rumour too. I also heard that cutting onto a plate can dull them. As this was about 25 years ago, in the days before I had heard of the web, let alone a StackExchange, I wrote to my knives' manufacturer, Wüsthof, asking for advice. They kindly sent me a brochure which has this section (my emphasis): Caring for Fine Knives The best of edges will quickly dull if it strikes metal, glass or formica. A soft wood or plastic cutting board makes the best surface. And if a slip occurs, a proper cutting board is safer for the user. Knives should be used only for the purpose intended. Never use good cutlery to cut string or paper - it is an outrage to cut bones or metal with a good blade. Blades should never be heated in a flame or in an oven. Elevated temperatures will destroy the temper of the steel. After use, knives should not be allowed to soak in water. The best practice is to hand wash and dry them immediately. This is especially true if they have been used on fruit or salty foods, which may cause some staining, even of stainless steel blades. Although Wusthof-Trident knives can be cycled in a dishwasher, it is not recommended. High water pressure will dull the cutting edges by knocking them against the rack and against other objects. Fine knives should be carefully stored in their own block, or special vinyl "roll" produced by Wusthof for this purpose. Source: Wüsthof brochure, vintage ~1994. (I still have it, and can scan it in if required.) I avoided putting them in the dishwasher, until I had a partner who refused to treat them with such respect. (To quote Wüsthof, it was an "outrage"!) For domestic harmony reasons, I got over it. | {
"source": [
"https://skeptics.stackexchange.com/questions/4873",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/3014/"
]
} |
4,876 | Are Windex and other commercial window cleaners better at cleaning windows than vinegar and water (and other household chemicals)? For example care2.com suggests the following is the best window cleaner: Make a great all-purpose window cleaner by combining 1/4 cup vinegar, 1/2 teaspoon liquid soap or detergent, and 2 cups of water in a spray bottle. Shake to blend and spray on your windows! Do commercial window cleaners offer any advantage over vinegar + water (+ dish soap), the best window cleaner suggested by some websites? | Yes, knives get dull in the dishwasher, through rubbing against other items. When I bought a set of nice Wüsthof-Trident kitchen knives, I heard this rumour too. I also heard that cutting onto a plate can dull them. As this was about 25 years ago, in the days before I had heard of the web, let alone a StackExchange, I wrote to my knives' manufacturer, Wüsthof, asking for advice. They kindly sent me a brochure which has this section (my emphasis): Caring for Fine Knives The best of edges will quickly dull if it strikes metal, glass or formica. A soft wood or plastic cutting board makes the best surface. And if a slip occurs, a proper cutting board is safer for the user. Knives should be used only for the purpose intended. Never use good cutlery to cut string or paper - it is an outrage to cut bones or metal with a good blade. Blades should never be heated in a flame or in an oven. Elevated temperatures will destroy the temper of the steel. After use, knives should not be allowed to soak in water. The best practice is to hand wash and dry them immediately. This is especially true if they have been used on fruit or salty foods, which may cause some staining, even of stainless steel blades. Although Wusthof-Trident knives can be cycled in a dishwasher, it is not recommended. High water pressure will dull the cutting edges by knocking them against the rack and against other objects. Fine knives should be carefully stored in their own block, or special vinyl "roll" produced by Wusthof for this purpose. Source: Wüsthof brochure, vintage ~1994. (I still have it, and can scan it in if required.) I avoided putting them in the dishwasher, until I had a partner who refused to treat them with such respect. (To quote Wüsthof, it was an "outrage"!) For domestic harmony reasons, I got over it. | {
"source": [
"https://skeptics.stackexchange.com/questions/4876",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1792/"
]
} |
4,909 | I've always heard that way-back-when, the simplest way to rewind a car's odometer was to drive backwards (at least in some models). Is this true? Did those manufacturers miss such an obvious gaming method? | Yes, older cars used mechanical odometers , which go forward or backwards, depending on which way the gears are turned. Modern cars use electronic odometers. I couldn't find anything indicating over what time period this changed. It was well before my time behind the wheel, though. I also found a January 1961 article from Popular Science Magazine on the prevalence of odometer rollback fraud. (Page 59, if the link doesn't jump right to it) | {
"source": [
"https://skeptics.stackexchange.com/questions/4909",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/3097/"
]
} |
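The answer above rests on how a mechanical odometer behaves: a chain of digit wheels driven by a gear train, which rolls backwards just as readily as forwards. A toy model in Python (purely illustrative; the class and method names are invented, and a real unit is gears, not an integer):

```python
class MechanicalOdometer:
    """Toy model of a gear-driven odometer: a fixed number of digit
    wheels that roll over in either direction, like the pre-electronic
    units described in the answer above."""

    def __init__(self, digits=6):
        self.modulus = 10 ** digits  # six wheels -> rolls over at 999999
        self.reading = 0

    def turn(self, miles):
        # Positive miles = driving forward, negative = driving (or spinning
        # the cable) in reverse. Modular arithmetic reproduces both the
        # roll-over and the roll-back behaviour.
        self.reading = (self.reading + miles) % self.modulus

odo = MechanicalOdometer()
odo.turn(50_000)    # drive 50,000 miles forward
odo.turn(-20_000)   # run the mechanism backwards by 20,000 miles
print(odo.reading)  # 30000 -- the rollback fraud the 1961 article describes
```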
4,923 | I've been sent an email saying that every generation thought that kids were too stupid or unruly, with this text as evidence: "Our Earth is degenerate in these later
days; there are signs that the world is speedily coming to an end; bribery and corruption are common; children no longer obey their parents; every man wants to write a book and the end of the world is evidently approaching."
From an Assyrian clay tablet, circa 2800 BC. Is this attribution accurate? | In this answer, I address some moderately pedantic issues that make the answer to the question "No", but I do not address the substantive question of whether that quote is more than 90 years old. There are several versions of the quotation floating around. So, the first step is to try to find the earliest mention. Here, I am leveraging off the work of Jim L, who addressed this same question on a competing Q&A site. (He concluded that the quote was invalid, but that site has differing standards of evidence than here.) He found two cites that I have been unable to beat: A November 1922 State of Connecticut Public Document 13: Report of the State Librarian (p 93): A tablet (Assyrian) 2800 B. C. says: "Our earth is degenerate in these latter days; there are signs that the world is speedily coming to an end; bribery and corruption are common; children no longer obey their parents; every man wants to write a book, and the end of the world is evidently approaching." Tablet preserved in Constantinople A 1923/1924 book, Nineteenth century evolution and after; a study of personal forces affecting the social process, in the light of the life-sciences and religion, by Marshall Dawson, which contains the quote (p 76): An Assyrian tablet, dating from 2800 B. C., preserved in Constantinople, says: "Our earth is degenerate in these latter days; there are signs that the world is speedily coming to an end; bribery and corruption are common; children no longer obey their parents; every man wants to write a book, and the end of the world is evidently approaching." Dawson provides no further reference. From here, two points stick out. The first is "write a book"? This is written on tablets! What is the Assyrian word for book? I'm going to give the benefit of the doubt here to Dawson - perhaps the translation was very literal, and the word "book" had a different meaning. The second is that it is said to be an Assyrian tablet from 2800 BC. There is some debate about whether Assyria existed in 2800 B.C. (for example, a poorly cited Wikipedia entry suggests it was formed when the Akkadian Empire fell circa 2080 BC, while also suggesting it was a part of the earlier Akkadian Empire.) Whether or not it existed as a geographical place, there is another question of whether it existed as a language. Jim L (above) claims, without substantial evidence, that the Assyrian language hadn't been developed by then. Andrew George explains that some earlier Akkadian works were ascribed to Assyrian: Because the first substantial discoveries of written Akkadian were made in the ruins of Assyrian cities, Akkadian was known to its first decipherers as Assyrian. Reference: George, Andrew (2007) "Babylonian and Assyrian: a history of Akkadian". In: Postgate, J. N., (ed.), Languages of Iraq, Ancient and Modern. London: British School of Archaeology in Iraq, pp. 31-71. [Note: I had some difficulty displaying that link in my browser, but found it worked by downloading it and opening it directly] In his time-chart (Table 2), George shows Old Assyrian didn't develop until 2000 BC. By the same chart, it seems suggestive that even Akkadian writings didn't exist in 2800 BC. Certainly, some forms of writing, such as Sumerian, did, but I haven't found any references that claim to have samples of Akkadian writing prior to 2500 BC. In summary: I have not shown whether or not this is a quote from an ancient work.
I've shown that the quote and its provenance have survived largely intact since the 1920s at least. In particular, it has been traced far further back than Sir Isaac Asimov's book (as suggested by others here). However, I have shown it was not both Assyrian and from 2800 BC. It may have been in Akkadian, a related language, from 2800 BC, but that is earlier than any references I found, so I find it unlikely. It might have been Sumerian. IMHO, given the dubious provenance of the source, a more likely scenario is that it is either a true quote, oddly translated, from a much later date, or invented in the early 20th century. | {
"source": [
"https://skeptics.stackexchange.com/questions/4923",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/3855/"
]
} |
4,954 | Related Just how inaccurate are vaccine myths? Is it dangerous to have several vaccines at the same time? I was passed along an article reporting that a new study found that despite having the highest number of vaccines in its recommended schedule, the United States is ranked 34th in infant mortality rates (IMR) in the world. Mercola presents this here . The report abstract is here and the full text is here . From the conclusion: A closer inspection of correlations between vaccine doses, biochemical or synergistic toxicity, and IMRs, is essential. All nations—rich and poor, advanced and developing—have an obligation to determine whether their immunization schedules are achieving their desired goals. Here's their plots of number of vaccines in a country's schedule with infant mortality rate. In other words, I take this to mean that they are suggesting that vaccine toxicity has a causal relationship to infant deaths, or that the vaccines are innefective, despite a high number in one's scheduled dosage recommendations. I immediately wondered what contributes to infant mortality rate. Is it just vaccine-preventable illnesses and the paper is suggesting incorrectly that vaccines are ineffective? Do deaths during delivery count... and can those even possibly be related to vaccination schedules? They mention that there are 130 categories of infant deaths: Many nations adhere to an agreed upon International Classification of Diseases (ICD) for grouping infant deaths into 130 categories. Among the 34 nations analyzed, those that require the most vaccines tend to have the worst IMRs. Thus, we must ask important questions: is it possible that some nations are requiring too many vaccines for their infants and the additional vaccines are a toxic burden on their health? They only discuss SIDS as a potential vaccine side effect; I would be curious to know what the other 130 categories are and whether or not they have a possibility of being related to vaccines. They mention some limitations here... This analysis did not adjust for vaccine composition, national vaccine coverage rates, variations in the infant mortality rates among minority races, preterm births, differences in how some nations report live births, or the potential for ecological bias. A few comments about each of these factors are included below This followed by a discussion of why they don't think these categories would sway the results [much]. My Questions: Is this paper's methodology/conclusion sound? Is there a valid concern here about a potential causal relationship between vaccines and infant mortality rates? Are there studies that have alternative explanations to why the US infant mortality is high compared to similarly developed nations? | The first author Neil Z Miller is the director of the Thinktwice Global Vaccine Institute , which is decidedly anti-vaccination as a short look at their website will confirm. He also published a series of books on vaccination. This does not mean the paper is necessarily biased, but it is an undisclosed conflict of interest which is not a good sign. They are also not providing any evidence for causation, the linear regression graphs in your question is essentially the whole analysis the authors performed. There is an excellent deconstruction of this specific study by David Gorkski on the Scienced Based Medicine blog . I recommend to read the whole article, I will only summarize a few points here. 
David Gorski also notes the conflict of interest that I observed when I researched the first author. One aspect he points out is that the authors used only the data for one year (2009) and only for countries with IMR lower than the United States.

Miller and Goldman only looked at one year's data. There are many years worth of data available; if such a relationship between IMR and vaccine doses is real, it will be robust, showing up in multiple analyses from multiple years' data. Moreover, the authors took great pains to look at only the United States and the 33 nations with better infant mortality rates than the U.S. There is no statistical rationale for doing this, nor is there a scientific rationale. Again, if this is a true correlation, it will be robust enough to show up in comparisons of more nations than just the U.S. and nations with more favorable infant mortality rates. Basically, the choice of data analyzed leaves a strong suspicion of cherry picking.

When I saw the graph on the right with the grouped data I was suspicious, as I could not see any reason to arbitrarily group the data. It looked like a cheap way to make the plot look better, not like an analysis that would actually provide more insight. David Gorski shares my suspicion and notes:

More dubiously, for some reason the authors, not content with a weak and not particularly convincing linear relationship in the raw data, decided to do a little creative data manipulation and divide the nations into five groups based on number of vaccine doses, take the means of each of these groups, and then regraph the data. Not surprisingly, the data look a lot cleaner, which was no doubt why this was done, as it was a completely extraneous analysis. As a rule of thumb, this sort of analysis will almost always produce a much nicer-looking linear graph, as opposed to the “star chart” in Figure 1.

As pointed out by Catharina from the Just the Vax blog, the paper also contains an error regarding the German vaccination schedule. The German Childhood Vaccination Schedule additionally recommends Hepatitis B from birth on, as well as MMR and Chickenpox vaccinations starting at 11 months. There are other studies that examined the association of SIDS (sudden infant death syndrome) and vaccinations; a meta analysis concluded that vaccinations help to prevent SIDS: Immunisations are associated with a
halving of the risk of SIDS. There are biological reasons why this association may be causal, but other factors, such as the healthy vaccinee effect, may be important. Immunisations should be part of the SIDS prevention campaigns.

US infant mortality rate There is a report from the CDC addressing the high infant mortality rate in the United States: Behind International Rankings of Infant Mortality: How the United States Compares with Europe. Infant mortality rates for preterm (less than 37 weeks of gestation)
infants are lower in the United States than in most European countries; however, infant mortality rates for infants born at 37 weeks of gestation or more are higher in the United States than in most European countries.

One in 8 births in the United States were born preterm, compared with 1 in 18 births in Ireland and Finland.

If the United States had Sweden's distribution of births by gestational age, nearly 8,000 infant deaths would be averted each year and the U.S. infant mortality rate would be one-third lower.

The main cause of the United States' high infant mortality rate when compared with Europe is the very high percentage of preterm births in the United States.

The conclusion is that the higher rate of preterm infants explains a large part of the higher infant mortality rate, but not the whole discrepancy between Europe and the United States. The following figure shows the IMR comparison if you exclude births earlier than 22 weeks: the US rate is significantly lower, but still higher than in most European countries.

However, infant mortality rates for infants born at 37 weeks of gestation or more are generally higher in the United States than in European countries.

The report does not speculate what the source of the remaining difference between Europe and US infant mortality rate could be. Conclusion The whole paper looks more like a fishing expedition to me than a thorough and objective analysis. They used an arbitrarily limited subset of the available data and did not correct for any potential confounding factors. This looks suspiciously like they played around with the data until they found the correlation they searched for, especially given the known bias of the authors. | {
"source": [
"https://skeptics.stackexchange.com/questions/4954",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2459/"
]
} |
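Gorski's objection to the grouped plot - that averaging nations into five bins will almost always manufacture a cleaner-looking regression line - is easy to demonstrate. A sketch with entirely synthetic numbers (invented only to show the statistical effect, not to model the real IMR data):

```python
import random

random.seed(1)

# Synthetic "34 nations": x = vaccine doses, y = an outcome with a weak,
# noisy linear relationship.
xs = [random.uniform(12, 26) for _ in range(34)]
ys = [0.1 * x + random.gauss(0, 1.5) for x in xs]

def r_squared(x, y):
    # Squared Pearson correlation, i.e. the R^2 of a simple regression.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy ** 2 / (sxx * syy)

print(f"R^2 on the raw 34 points: {r_squared(xs, ys):.2f}")

# Now do what the paper did: sort by dose, split into five groups, and
# regress on the five group means. Averaging within bins cancels most of
# the noise, so the binned R^2 comes out far higher even though no new
# information was added.
pairs = sorted(zip(xs, ys))
size = len(pairs) // 5
groups = [pairs[i * size:(i + 1) * size] for i in range(4)]
groups.append(pairs[4 * size:])
gx = [sum(p[0] for p in g) / len(g) for g in groups]
gy = [sum(p[1] for p in g) / len(g) for g in groups]
print(f"R^2 on the 5 group means: {r_squared(gx, gy):.2f}")
```

Running this with different seeds shows the same pattern: the binned fit looks dramatically tidier than the raw scatter, which is exactly why the grouped figure in the paper is uninformative.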
4,984 | I've read the odds of dying as around 1 : 250 000 each day, which presumably is based on the global death rate. I use this when people buy lottery tickets to point out they have more chance of dying than winning the money. This forum thread is one example how the odds are worked out, but there are also no doubt plenty of others. Is this a good way to base it? Is it actually measurable at all? And does this chance increase when you drive a car, fly, cycle etc.? | This is based on numerous flawed assumptions. It should not be calculated in such a simplistic way, because: the chance of dying in a given time is not uniformly distributed among people; there is a great variety of factors (genetic, behavioral and environmental). For example, there are numerous wars going on right now, greatly increasing the chances of death in these zones. The chance of dying of one single person is not uniformly distributed in time; the Gompertz–Makeham law of mortality applies. The total death rate takes into account infant deaths, child deaths, etc. For calculating the life expectancy of a person who has already lived X years, you should only take into account deaths of people aged X or older. [ Image Source ] Note that the vertical scale in the graph above is logarithmic. | {
"source": [
"https://skeptics.stackexchange.com/questions/4984",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1033/"
]
} |
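The Gompertz–Makeham law cited in the answer above models the yearly death hazard as an age-independent term plus an exponentially growing term, lambda(age) = A * exp(B * age) + C. A small sketch (the parameter values are illustrative placeholders I chose for plausible magnitudes, not values fitted to any real life table):

```python
import math

def gompertz_makeham(age, A=1e-4, B=0.085, C=2e-4):
    """Yearly death hazard lambda(age) = A * exp(B * age) + C.

    A, B and C are illustrative placeholders, not fitted values; the
    point is the shape: a constant background risk C plus a risk that
    grows exponentially with age.
    """
    return A * math.exp(B * age) + C

for age in (10, 30, 50, 70, 90):
    daily = gompertz_makeham(age) / 365
    print(f"age {age}: roughly 1 in {1 / daily:,.0f} per day")

# With these placeholder parameters a 30-year-old comes out near the
# quoted 1-in-250,000-per-day figure, while a 90-year-old is closer to
# 1 in 2,000 -- about two orders of magnitude apart, which is why the
# hazard graph needs a logarithmic vertical scale.
```

This makes the answer's objection concrete: a single "daily odds of dying" figure averages over risks that differ by a factor of a hundred or more across ages.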
5,061 | I've been sent chain emails about pornography industry revenues and was quite flabbergasted by some of the figures. The version I recalled hearing was that pornography brings in more revenue than all US sports franchises combined. I looked around for some quotes so I could ask about this here, and it turns out that there are several related stats, so I included them all: Bigger than all sports franchises in the US combined: The sex industry is HUGE - $57 Billion Worldwide, $12+ Billion in the United States. It is larger than all the sport franchises put together. ( SOURCE ) Adult entertainment model Jasmine Mai told the BBC: "The adult industry is bigger than every professional sport combined." ( SOURCE ) Bigger than all top technology companies combined: The pornography industry is larger than the revenues of the top technology companies combined: Microsoft, Google, Amazon, eBay, Yahoo!, Apple, Netflix and EarthLink ( SOURCE ) Bigger than top television broadcasting channels combined: US porn revenue exceeds the combined revenues of ABC, CBS, and NBC. ( SOURCE ) Question: Does the pornography industry have higher earnings than the respective combined earnings of US sports franchises, top tech companies, and top broadcasting networks (ABC/NBC/CBS)? | The sex industry is HUGE - $57 Billion Worldwide, $12+ Billion in the United States. It is larger than all the sport franchises put together. The NFL had 7.2 Billion in gross revenue in 2010 [ Source ] The MLB had 6.6 Billion in revenue in 2009 [ Source ] The NHL projected about 2.7 Billion in revenue for the 2009-2010 season [ SOURCE ] These totals would exceed the "$12+ Billion in the United States" cited by the source without including Nascar (I can find claims of around 3 billion with no source, but that seems reasonable), Golf, Tennis, other racing (I would love to see statistics of all the revenue from the local tracks), or any of the less followed sports. Then there is that one sport with the usually white and black truncated icosahedron shaped ball that a handful of people follow around the world... ($216B worldwide) [ SOURCE ] that's almost 4x the estimate from the post. Bigger than all top technology
companies combined: Apple had 65 Billion in sales in 2010 [ SOURCE ] Microsoft had 62 Billion [ Source ] Google had 29 Billion [ SOURCE ] So I think that, compared to the 57 billion, that's busted. Even if it has doubled since 2005, these 3 still beat it. (No pun intended) Bigger than top television
broadcasting channels combined: Comcast(NBC) had revenues of almost 38 billion [ SOURCE ] Disney(ABC) had revenues of 38 billion [ SOURCE ] Viacom(CBS) had revenues of 14 billion in 2008 (for ease, since it gets messy with mergers and sales) [ SOURCE ] While Disney is more than just ABC, and Comcast is more than just NBC, Viacom is just a broadcast company. It alone exceeded the 12 billion quoted above for the US. | {
"source": [
"https://skeptics.stackexchange.com/questions/5061",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2459/"
]
} |
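The debunking in the answer above is plain addition of the cited revenue figures against the claimed porn-industry totals. A quick Python check (figures in billions of US dollars, taken from the sources quoted in the answer; note the parent-company figures overstate the networks alone, as the answer itself points out):

```python
claimed_us_porn = 12      # billions, from the chain email
claimed_world_porn = 57   # billions, from the chain email

big3_us_sports = {"NFL": 7.2, "MLB": 6.6, "NHL": 2.7}
tech = {"Apple": 65, "Microsoft": 62, "Google": 29}
broadcasters = {"Comcast/NBC": 38, "Disney/ABC": 38, "Viacom/CBS": 14}

for name, group in [("US sports (3 leagues only)", big3_us_sports),
                    ("top tech (3 of the 8 companies)", tech),
                    ("broadcast parent companies", broadcasters)]:
    total = sum(group.values())
    print(f"{name}: ${total:.1f}B vs claimed "
          f"${claimed_us_porn}B (US) / ${claimed_world_porn}B (worldwide)")
```

Just three leagues sum to $16.5B, already above the $12B US claim, and three tech companies sum to $156B, nearly triple the $57B worldwide claim.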
5,121 | Perhaps I have been reading the wrong kind of scurrilous literature, but I have seen it asserted that the Empress Catherine the Great of Russia died from injuries received by committing unnatural acts with a stallion. Is there any truth in this assertion? | This claim doesn't seem to have any factual underpinning; none of the books about Catherine the Great support it. While I haven't read any of them, I did read the reviews on Amazon, and none mentioned such an event; some reviewers specifically point out that there are no records of Catherine the Great having equine sex partners. In addition, a number of other on-line resources discuss this issue, and all come to the same conclusion, that this rumor is false. From europeanhistory.about.com : Alexander's book goes on to explain
(in paragraphs rarely quoted) how Catherine was laid in her bed as doctors tried to save her body and priests made rites to save her soul. Throughout she was racked with pain, her convulsing appearance causing great distress to her consorts. It was over twelve hours after Zotov found her, well past nine o'clock at night, that Catherine finally died of natural causes, in bed and surrounded by friends and carers.

From The Straight Dope:

The simple answer to your question is no, the rumor is not true. However, that won't stop us from repeating the rumor, to wit: that Catherine the Great, empress of Russia in the latter part of the 18th century, was crushed to death when attendants lost their grip on ropes supporting a horse that was being lowered on her for, ah, sexual purposes. This is without doubt the most outrageous story I heard during my entire college career, which is when you usually come across these historical tidbits. The boring truth is this: Catherine the Great died of a stroke while sitting on the commode in the palace at St. Petersburg. Another less commonly circulated rumor has it that Catherine was so grossly fat (true in itself) that she broke the commode and died of blood loss from resultant injuries, but this is regarded as a fabrication also.

From Snopes.com:

Catherine the Great actually expired alone and of natural causes. On the morning of 5 November 1796, Catherine arose, drank coffee, and sat down to write. About three hours later her chamberlain, curious that he had not been summoned as usual, found her barely conscious on the floor of a closet adjacent to her bedroom. As her servant summoned help, Catherine lapsed into unconsciousness from which she never awakened and died at 9:45 PM the next day. An autopsy conducted the next day determined the cause of death to be a cerebral hemorrhage.

I think it is safe to conclude that there is not a single speck of truth in the assertion. | {
"source": [
"https://skeptics.stackexchange.com/questions/5121",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1639/"
]
} |
5,159 | (I've asked this before but people had troubling separating the question from just the gender pay gap. This is a way I was suggested to ask it, so I am giving it a try) No one can deny the gender pay gap exists, but is it evidence of discrimination? I would like to limit my question to western countries, as the question becomes irrelevant or too complex when looking at countries where gender equality is not a consideration. According to many parties, yes. The gender pay gap has many contributing factors and causes, of which discrimination is only one. According to various groups of activists or protestors the primary reason for the gender pay gap is pay discrimination on the basis of gender. Given the more simple explanations such as differences in hours worked(men tend to work more hours) and types of jobs(men tend to do more laborious intensive or dangerous jobs) does it still make sense to consider gender pay discrimination a significant factor? My understanding is that the gender pay gap is only apparently when looking at the entire set of women compared to the entire set of men, regardless of jobs or hours worked. When you look at any specific field men and women earn equivalent wages, based on experience or skills and not gender. Are women with equivalent skills and experience as men being paid less for the same position on a wide enough scale to contribute to the pay gap? Part of the reason for this would be the anti-discrimination legislation, simply not making it worth it to discriminate with pay based on gender. Which is not to deny that discrimination happens, but to say that it happens on a large enough scale to result in the gender pay gap is surely incorrect? So, is the gender pay gap evidence of discrimination, or simply evidence of a pay gap? | Like anything else, it's a combination of factors. The London School of Economics did a pretty in-depth study and found: The main cause of this is that many women continue to take breaks from paid employment when they have children. The problem is not that women are choosing one career – such as hairdressing – rather than another – such as plumbing. It is that they are continuing to choose family over career at some point in their life. However, the same study also goes on to say that: While career breaks clearly have an impact,
my research with Joanna Swaffield finds that most of the gender gap in wage growth among young workers cannot be explained by differences in labour market attachment. For example, we estimate that a woman who has worked full-time ever since leaving full-time education can still expect to be paid 12% less than an equivalent man after 10 years.

The cause appears to be a combination of factors. This study cites the following as the cause:

One way of seeing this is in the evidence that women are much less likely to become managers...Some recent research (see Babcock and Laschever) suggests that systematic differences in personality are responsible – for example, that women are intrinsically less competitive than men, tend to be less self-confident and less effective in negotiation. This might be because of intrinsic differences between men and women or because of gender stereotyping within the education system.

The report ends with the remark "that it is now not so easy to identify the remaining causes of the gender pay gap." So to summarize this study, the main causes seem to be breaks from employment and personality differences. The study also mentions that the pay gap has been decreasing in recent years, although it does not mention if this is due to less focus on staying home with the kids or less discrimination. However, Ian Watson (published in the Australian Journal of Labour Economics) takes a slightly different opinion on this. The abstract of his study is that:

The results show that female managers earned on average about 27 per cent less than their male counterparts and the decompositions suggest that somewhere between 65 and 90 per cent of this earnings gap cannot be explained by recourse to a large range of demographic and labour market variables. A major part of the earnings gap is simply due to women managers being female.

You can read the statistics in the paper, but the results are:

The extent to which discrimination accounts for the gender pay gap varies between 65 per cent and 94 per cent, depending on the approach one takes. The higher figure comes from using the Oaxaca method, while the lower figure comes from the Blinder method. These decomposition results are shown in summary form in table 5 and with a more detailed breakdown in table 6.

The U.S. Government Accountability Office did another study and determined that discrimination is indeed a factor:

In 2003, GAO found that women, on average, earned 80 percent of what men earned in 2000 and workplace discrimination may be one contributing factor.

There has been a meta analysis done on various studies and this analysis found:

The results show that data restrictions – i.e. the limitation of the analysis to new entrants, never-marrieds, or one narrow occupation only – have the biggest impact on the resulting gender wage gap. Moreover, we are able to show what effect a misspecification of the underlying wage equation – like the frequent use of potential experience – has on the calculated gender wage gap. Over time, raw wage differentials worldwide have fallen substantially; however, most of this decrease is due to better labor market endowments of females. ... Our results show that data restrictions have the biggest impact on the resulting gender wage gap...For example, in the fixed effects regressions we find that studies where work experience is missing seriously overestimate the unexplained gender wage gap.

However, the study does still say there is some discrimination, but it is not as dramatic as others make it out to be. The resulting decrease in the pay gap is due to training and some decrease in discrimination.

From the 1960s to the 1990s, raw wage differentials worldwide have fallen substantially from around 65 to only 30%. The bulk of this decline, however, must be attributed to better labor market endowments of females which came about by better education, training, and work attachment...The ratio of what women would earn absent of discrimination relative to their actual wages decreased approximately by 0.17% annually. This indicates that a continuous, even if moderate, equalization between the sexes is taking place.

One thing to note as well is that the pay gap is higher in the public sector than the private sector. This tends to show there is an element of discrimination, as public sector jobs tend to have pretty strict promotion/pay increase rubrics. In conclusion, it's caused by a number of factors, but it would be incorrect to make the claim that discrimination is not one of them. However, it's certainly not the only cause and it may or may not be the greatest cause. In terms of the pay gap being evidence of discrimination, it certainly is, as most of the statistical studies say the gap is unexplained by other factors. The gap may not be caused solely by discrimination, but it does show evidence of some unexplained factor causing pay differential, which is usually attributed to discrimination when the other causes are explained. Causes such as time off for family and personality are included in the studies, but there is a statistically significant gap left unexplained that can be reasonably filled with discrimination. According to the studies I've posted, the gap decreases in the public sector and in large companies with pay scales. This points towards the pay gap being evidence of discrimination. Hence, the pay gap is evidence of an unexplained discrimination of women in terms of their compensation. | {
"source": [
"https://skeptics.stackexchange.com/questions/5159",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/3332/"
]
} |
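The Oaxaca and Blinder methods named in Watson's results are variants of the same wage-decomposition idea: fit separate log-wage regressions for men and women, then split the mean gap into a part explained by characteristics and an unexplained residual, often read as discrimination. A minimal sketch of the Oaxaca-style decomposition on synthetic data (an illustration of the technique only, not a reproduction of Watson's analysis; all numbers below are invented):

```python
import random

random.seed(0)

def ols(x, y):
    """One-variable OLS: returns intercept a and slope b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Synthetic workers: x = years of experience, y = log hourly wage.
# Women get slightly less experience AND a lower return on it.
exp_m = [random.uniform(0, 20) for _ in range(500)]
exp_f = [random.uniform(0, 17) for _ in range(500)]
wage_m = [2.3 + 0.030 * x + random.gauss(0, 0.2) for x in exp_m]
wage_f = [2.2 + 0.025 * x + random.gauss(0, 0.2) for x in exp_f]

a_m, b_m = ols(exp_m, wage_m)
a_f, b_f = ols(exp_f, wage_f)
mean = lambda v: sum(v) / len(v)

# Exact identity: gap = explained + unexplained, using male coefficients
# as the non-discriminatory reference (the Oaxaca convention).
gap = mean(wage_m) - mean(wage_f)
explained = b_m * (mean(exp_m) - mean(exp_f))          # endowment part
unexplained = (a_m - a_f) + mean(exp_f) * (b_m - b_f)  # residual part
print(f"total log-wage gap:      {gap:.3f}")
print(f"explained by experience: {explained:.3f}")
print(f"unexplained residual:    {unexplained:.3f}")
```

The Blinder variant differs mainly in which group's coefficients serve as the reference, which is why the two methods bracket a range (65-94 per cent) rather than giving a single number.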
5,199 | It's a widely held stereotype that Russians drink a lot of alcohol. Is there any research supporting or refuting this belief? Possible metrics I can think of (though you're welcome to add more) can be: Average volume of alcoholic drinks consumed in a year per capita. Ideally somehow normalized for alcohol content. % of population medically classified as suffering from alcoholism Mortality directly or indirectly attributable to alcohol (this one is tricky to get right I guess) I'm fine with studies assuming either of definitions of "russian" that you want to pick - ethnically russian people; population of current Russian Federation, population of former USSR. Don't care if the study includes/excludes expatriates/emigrants, but would ESPECIALLY be interesting to see a study covering differences between those and people living in Russia. | According to existing evidence, it is true that Russians drink a lot of alcohol. 1. Adult Per Capita Consumption The World Health Organization 2011 Global status report
on alcohol and health provides us with statistics in regard to average alcohol consumption per year, for people aged 15 and above, in liters of pure alcohol. Russia ranks fourth, with 15.76 liters (of which 6.88 liters are consumed in the form of "spirits"), more than double the world average of 6.13. 2. Alcoholism The report provides no information specifically about alcoholism prevalence in Russia, but using The Global Information System on Alcohol and Health ( http://www.who.int/globalatlas/alcohol ) shows Russia as having the highest rate of males aged between 18 and 65 who are dependent on alcohol: 17.61%. 3. Mortality The report tells us that Russia has one of the highest proportions of alcohol-attributable mortality, but doesn't give precise numbers - most of the data in this report is given by WHO subregion. According to Wolfram Alpha, 8327 deaths per year occur due to alcohol use disorders - 0.35% of the total, much higher than the world probability of 0.16%. However, this figure doesn't seem to be accurate according to the WHO report - which, although it doesn't give exact numbers, does say: By far the highest proportion of alcohol-attributable mortality is in
the Russian Federation and neighbouring countries, where every fifth death among men and 6% of deaths among women are attributable to the harmful use of alcohol.

From the map above, we can estimate the minimum number of alcohol-attributable deaths at 10% of Russia's 2010 deaths (2,028,516), which gives us ~200,000, and the maximum (1/5 of the total) at ~400,000. In 2004, 3.8% of all global deaths were attributable to alcohol, 6.2% for men and 1.1% for women. As for the ethnic identity of the drinkers, from this study:

Ethnic identity of drinkers cannot be established on the basis of available state statistics and, to the best of my knowledge, neither state statisticians nor academic analysts have ever looked at ethnic differentials in per capita consumption of alcohol. These differentials are, however, significant and cannot be disregarded in any serious analysis of the alcohol situation in the country. According to my rough estimates people of the Muslim culture consume on a per capita basis slightly less than half of the alcohol consumed by Slavs and other ethnic groups in Russia. As a result, regions of Russia in which Muslims constitute a significant part of the population show lower incidence of alcohol-related mortality and morbidity and socially disruptive alcohol abuse.

And:

The high-risk groups are mainly adult male Slavs (Russians, Ukrainians, and Belarusians) and the main explanation of alcohol abuse is not only the relatively high level of overall consumption of alcohol, but the high share of alcohol consumed in the form of vodka and samogon, as can be seen in Table 8-3. Drinking vodka results in faster intoxication, more frequent violence, and more serious somatic effects, particularly accidents of different types and fatal alcohol poisonings (see Section 5 below), than drinking wine or beer. A second, equally important factor is the mode of drinking prevalent among Slavs, which characteristically consists of "drinking binges" - the intermittent consumption of large quantities of alcohol in a relatively short period of time and often without accompanying meals. It should be noted that a small group of Russian alcohol specialists have long suggested that total alcohol prohibition is fruitless and that the most promising policy would be to educate the public in "civilized" drinking. This position was never popular in the Soviet Union and its proponents had been all but silenced during Gorbachev's anti-drinking campaign.

In conclusion, it would appear that the widely held beliefs hold true - Russians do drink a lot, much more than the world average. It appears to be a huge problem for the country, and for many of the countries that were part of the USSR. Medvedev called Russia's drinking problem a "national disaster". | {
"source": [
"https://skeptics.stackexchange.com/questions/5199",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1044/"
]
} |
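The ~200,000 to ~400,000 range in the answer above is back-of-the-envelope arithmetic on the WHO map category (at least 10% of deaths alcohol-attributable) and the quoted "every fifth death among men" upper bound. Reproducing it:

```python
russia_deaths_2010 = 2_028_516  # total deaths, as cited in the answer

# Lower bound: the WHO map places Russia in the ">= 10%" category.
# Upper bound: the quoted text says "every fifth death among men".
low = russia_deaths_2010 * 0.10
high = russia_deaths_2010 * 0.20
print(f"alcohol-attributable deaths: ~{low:,.0f} to ~{high:,.0f} per year")
# prints ~202,852 to ~405,703 -- the ~200,000-400,000 range in the answer
```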
5,204 | I've heard at least a few times now from apologists that Christianity is responsible for science. For some examples: From a commercial for the site, Catholics Come Home (video LINK that opens right to the quote): We [The Catholic Church] developed the scientific method and laws of evidence. The downloadable PDF they have at their site for this video (LINK) provides sources for the video's claims. Extracted are these for the above quote: We developed the scientific method.... Source: From How the Catholic Church Built Western Civilization by Dr. Thomas
Woods, page 94 and following: "Roger Bacon, a Franciscan who taught at Oxford, was admired for his work in mathematics and optics, and is considered to be a forerunner of modern scientific method." "Like Roger Bacon, Saint Albert [the Great] was careful to note the importance of direct observation in the acquisition of knowledge about the physical world. In De Mineralibus, he explained that the aim of natural science was 'not simply to accept the statements of others, that is, what is narrated by people, but to investigate the causes that are at work in nature for themselves.'" ...and laws of evidence. Source: From How the Catholic Church Built Western Civilization by Dr. Thomas Woods, page 187 and following: "...Cases like this have led legal scholar Harold Berman to observe that modern Western legal systems 'are a secular residue of religious attitudes and assumptions which historically found expression first in the liturgy and rituals and doctrines of the church and thereafter in the institutions and concepts and values of the law...'" (Berman's work: Law and Revolution: The Formation of the Western Legal Tradition) I've also read Dinesh D'Souza's What's So Great About Christianity, which makes the following statements: "Why did science arise here and nowhere else? In his September 12, 2006, speech in Rosenberg, Germany, Pope Benedict XVI argued that it was due to Christianity's emphasis on the importance of reason...modern science is an invention of medieval Christianity, and that the greatest breakthroughs in scientific reason have largely been the work of Christians." (pgs. 83-84) Questions: Is this the case? Is it Christianity (and no other source) that brought about the scientific method? In other words, if we looked at all the variables of those credited for early forms of modern science, would their Christian beliefs be the single most prominent causal factor that led such contributions? | The answer is an emphatic NO. If anything, an early Persian could be credited with the most modern version of the scientific method. Ibn al-Haytham (Wikipedia, which is reliable enough for this sort of discussion) specifically championed the following method: explicit statement of a problem, tied to observation and to proof by experiment; testing and/or criticism of a hypothesis using experimentation; interpretation of data and formulation of a conclusion using mathematics; and the publication of the findings. If that isn't the scientific method, I don't know what is!
Even before that, Aristotle championed an empirical method of scientific thinking, which absolutely predates even the concept of Christianity. Not to mention that science and technology were also flourishing in China and India with no influence from the West or any Christian influences. And it has been argued by some that it was actually the influences of the Far East, transported over by the ancient Muslim scholars, that kicked off the Renaissance (as opposed to the Muslim scholars preserving the ancient Greek ideas only). That should have been a very easy bit of propaganda to debunk with minimal effort. If you don't think Wikipedia is impartial enough, this page also offers a history, essentially agreeing with the Wikipedia entry. This page actually credits the Muslim age of science much more than the Christian religion (as well as aligning with the Wikipedia entry and the previous link). The early Islamic ages were a golden age for knowledge, and the history of the scientific method must pay a great deal of respect to some of the brilliant Muslim philosophers of Baghdad and Al-Andalus. Keep in mind, a lot of Enlightenment thought was actually more a preservation of knowledge from other cultures (such as, commonly credited, the Greeks). Although there may be a bit of Western bias in that history, since a great deal of science also came from India and China in those times. Generally people just aren't made aware of that (for instance, the origin of "Damascus Steel" is actually from India). | {
"source": [
"https://skeptics.stackexchange.com/questions/5204",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2459/"
]
} |
5,210 | I used to hear this all the time when I was growing up. Just ignore bees, don't swat at or try to kill them, because if they die, they'll send a "signal" to the hive that will attract dozens more. After forgetting about it for many years, I recently heard the same claim from adults. I've killed and seen killed several bees and haven't been subsequently visited or attacked by swarms the size of a grand piano, but maybe they were just very far away from their hive. There's all sorts of anecdotal/non-credible mentions of this claim, but I've never seen anything convincing. For example, from an exterminator's site : Do not swat at bees. Swatting bees causes the release of an alarm signal and only increases the intensity of an attack by stimulating other bees to attack Or from a September 1988 issue of Field and Stream : Don't kill bees. Injured or dying bees are thought to emit a cry of distress, which other bees respond to quickly. The alarm signal sends squadrons of armed investigators to the scene. These rescuers often attack anything that moves or that has body temperature. Final example from this Associated Content article, which references a semi-trustworthy government source but it is not clear where this specific claim was derived from: To avoid getting attacked by bees, one should never swat or threaten a bee. The reason for some swarm of bee attacks are due to the release of a pheromone a threatened bee will emit. This pheromone sends a distress signal to all other bees in the area, and will attract the whole swarm to come in and defend them. It mentions a pheromone being responsible, but the government source, quoting another primary source, only seems to talk about this in the context of colony defense, and if I'm reading it correctly, releasing the pheromone is only really effective in proximity to the hive and is also one of the first things a defender does (long before one would have a chance to swat). Is there any truth to the claim of dead/dying bees sending off an "alarm signal"? Is it backed by research, or is it just an old wives' tail? | Here's a primary source, a research done on Honey Bees: The bee's response to the first (alerting) stimulus strengthens her guarding stance; for instance the abdomen is raised, possibly with the sting protruded, and the antennae are waved. In addition, the bee may recruit other bees to guard activity, by entering the colony with her sting chamber open and the sting prodtruded, thus releasing alarm pheromone. Also, From the Science Daily : The stinger's injection of apitoxin into the victim is accompanied by the release of alarm pheromones, a process which is accelerated if the bee is fatally injured. Release of alarm pheromones near a hive or swarm may attract other bees to the location, where they will likewise exhibit defensive behaviors until there is no longer a threat (typically because the victim has either fled or been killed). These pheromones do not dissipate nor wash off quickly, and if their target enters water, bees will resume their attack as soon as the target leaves A biology site here tells us there are different types of pheromone released, and the specific one released, is the attack pheromone : Two main alarm pheromones have been identified in honeybee workers. 
One is released by the Koschevnikov gland, near the sting shaft, and consists of more than 40 chemical compounds, including isopentyl acetate (IPA), butyl acetate, 1-hexanol, n-butanol, 1-octanol, hexyl acetate, octyl acetate, n-pentyl acetate and 2-nonanol. These chemical compounds have low molecular weights, are highly volatile, and appear to be the least specific of all pheromones. Alarm pheromones are released when a bee stings another animal; they attract other bees to the location and cause the other bees to behave defensively, i.e. sting or charge. So, it is released, but how quickly does it take effect? I looked at a beekeeper's site, because they have the most experience as to the speed with which bees attack after the pheromone is released.
Let's look at a beekeeping site: Immediately and steadily back away from the hives, without swatting at the bee, screaming, convulsing, or otherwise freaking out. When you are away from the hives, kill the bee by slapping it soundly. Kill the bee before it escapes from your hair or clothing, as it will likely sting you when it is free. Discard the dead bee outside the apiary and apply smoke liberally to the area on your body where the bee was killed. Smoking the area will mask the alarm pheromone secreted when the bee was crushed. The pheromone is produced, but it doesn't reach the hive immediately. It will eventually be detected if you are near enough, but it does take some time. If you are far away from the hive, the bees might not smell it either, as the pheromone will dissipate. That may be the reason why you could kill a bee with impunity. Note also how the beekeeper instructs to back away from the hives, then kill the bee, and then smoke the area. It's safe to kill the bee in two situations: You're far away. You smoke the area of your body after you kill it. The pheromone takes a few minutes to be detected if you are near enough, so you are safe for a few minutes. This can be seen here: If a bee stings you, don't panic and immediately run, and especially don't make a lot of jerky, sudden movements. This just increases your chances of being stung again. Instead, calmly back away a few feet from the hive and use the edge of your fingernail, the hive tool, or a knife, to scrape the sting sac out from the side. Never grab the stinger and pull it out -- this only injects more venom. After the sting sac is removed and discarded away from your body and the hives, smoke the area of the sting to mask the odor of the alarm pheromone. You actually have time to remove the sting sac and smoke the area, and this is being done by beekeepers all the time. So, the pheromone is produced, it's not an old wives' tale, and it does incite the hive to attack, but you can mask it, or if you are far away, it won't be detected. | {
"source": [
"https://skeptics.stackexchange.com/questions/5210",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/4021/"
]
} |
5,222 | My father and I couldn't be more different when it comes to sleep. I rarely go to bed before midnight, while he will often retire around 9:30. While growing up, we'd often get into discussions on the topic of our sleep habits. Over and over I heard him make an assertion that he heard once years ago: "You sleep better in the hours before midnight...". In his mind, even if I sleep from 12-8am, I'm still worse off than if I went to bed at 9:30pm and slept for the same amount of time. Is there any evidence one way or the other on this subject? Measures of what might qualify as "better" sleep could include: perceived refreshment upon waking (does one feel ready to "jump out of bed"?); rate of diminishing alertness/mood as the day proceeds; objective measurements such as REM/deep sleep or tossing and turning per sleep session; or some other suggested method of comparing sleep quality. In other words, any way to compare some measured/perceived "quality" of sleep based on the time of day of the sleep session (much prior to midnight vs. at/after midnight) while keeping duration relatively constant could answer this question. | It might depend on how you define quality, but if alertness suffices, then no: it does not seem to be any better to get more sleep before midnight if duration is relatively constant. See THIS Discover Magazine blog post, which summarizes THIS Science Magazine study. Here are the pertinent bits: In a sleep lab, the researchers studied people with extreme bedtimes, or chronotypes, both early and late. The larks in the study typically woke up between 4 a.m. and 5:30 a.m. and went to bed by 9 p.m. The night owls, or evening chronotypes, left to their own devices would go to bed at 3 a.m. or 4 a.m. and rise at noon. In other words, both groups sleep between 7-9 hours per night, but the time of day is significantly different. All the test subjects...took tests measuring their alertness 1.5 hours after waking, and again 10.5 hours after waking. In the earlier test researchers saw no difference between the two groups' performances, but in the later test the night owls performed better than the early birds, and also topped their own prior test results. fMRI brain scans told the rest of the tale. In the night owls, increased activity was seen in two parts of the brain at 10.5 hours — the suprachiasmatic nucleus area and the locus coeruleus — that are involved in regulating the circadian signal. Essentially, the circadian signal was winning out over the pressure to sleep. In the early birds, on the other hand, "the sleep pressure prevents the expression of the circadian signal," so those individuals were less able to keep their attention focused. So, it seems that despite not getting to bed before midnight at all, the night owls in the study fared just as well as or better than the early birds on measures of alertness. The post concludes with the caveat that outside the lab, some who are prone to night-owl activities might not have the luxury of a noon wake-up time, and thus do worse if they heed the desire to stay up late while also being required to get up early. I looked around for more, but this was the best I found that seemed like it could answer the question based on some metric (alertness throughout the day) while looking simply at the time of day of sleep and keeping duration constant.
If there were more research, I'd be quite interested in: perceived refreshment after a sleep session; and whether this claim stems from the fact that many who go to bed late might need to get up early, and thus be in a bit of sleep deprivation, vs. how the study above proceeded with those who maintained a consistent schedule but simply had their sleep session shifted. | {
"source": [
"https://skeptics.stackexchange.com/questions/5222",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/1632/"
]
} |
5,252 | People going to Lourdes sometimes return and tell stories about a miraculous healing they have witnessed. Do visitors of Lourdes experience spontaneous recovery more often than would be expected by chance? (Question motivated by meta discussion; I think it is one of the real questions people with a Catholic background ask, and I would like to see it answered.) | "The spontaneous remission rate of all cancers, lumped together, is estimated to be something between one in ten thousand and one in a hundred thousand. If no more than 5 percent of those who come to Lourdes were there to treat their cancers, there should have been something between 50 and 500 'miraculous' cures of cancer alone. Since only three of the attested 65 cures [accepted by the RC Church as miraculous cures] are of cancer, the rate of spontaneous remission at Lourdes seems to be lower than if the victims had just stayed at home." —From Carl Sagan, The Demon-Haunted World, p. 221. I will reconstruct the references independently, because the book has a very poor references section: spontaneous remission of cancer can be estimated at between 1 and 10 cases per million (so it's much rarer than Sagan assumed) (source); it is estimated that 200 million people have visited Lourdes since 1860 (source); there are 67 recognised "miracle healings" at Lourdes, of which only 5 are cancer-related (source); and cancer accounts for far more than 5% of deaths, so we can assume the 5% figure is an underestimate. From here we can verify that at least 16.8% of male deaths and 11.7% of female deaths are due to cancer, at least in the UK. There is no particular preference for cancer victims to go to Lourdes over victims of other illnesses, so we can estimate that at most 12-17% of the critically/terminally ill are there for cancer-related reasons (one of the preconditions for a "miraculous cure" is a diagnosed disease). Not all people go to Lourdes for a terminal illness, though, so the more conservative 5% figure that Sagan provided potentially compensates for this. In other words, if 5% of people coming to Lourdes are there to cure cancer, their number would amount to 10 million people. Out of a set of 10 million cancer victims, we should normally expect between 10 and 100 cases of spontaneous remission of the disease, statistically speaking. However, only 5 cases of "cancer miracles" are reported from Lourdes, making the healing powers of Lourdes statistically indistinguishable from pure chance. Most of the other alleged "cures" are pre-1970 and related to tuberculosis (TB) and MS, which are now either curable or known to have high rates of temporary remission. Funnily enough, the number of alleged "cures" has diminished as medical science has learned to heal patients of the most common ailments and to properly diagnose common remissions.
"source": [
"https://skeptics.stackexchange.com/questions/5252",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/79/"
]
} |
5,271 | Many of us might recall learning in school that when Christopher Columbus reached the New World, he believed that he had reached India. For that reason, Native Americans are often called Indians. However, Wikipedia claims he actually knew where he had arrived: Never admitting that he had reached a continent previously unknown to Europeans, rather than the East Indies he had set out for, Columbus called the inhabitants of the lands he visited indios (Spanish for "Indians"). Unfortunately, the only sources are books so I can't check up on them. Did Columbus know he had reached a new continent? If so, why would he lie to the Spanish government? | Michael Shermer covers this in The Believing Brain, stating that Columbus held this belief until his death. Vartec's link also supports this from an EDU site: Columbus, who, to his death, clung to the idea that he had found the shores of Asia. In Shermer's book, he talks about encountering data that is totally unexpected, so that you can't accept the new information and integrate it with your already-held notions. That is in essence what Columbus did. Columbus had no reason to "lie", since he was convinced by his own brain that he had found Asia, and he was going to stick to that story. He even had incentive to say he found new lands, according to the wikipage you cited: According to the contract that Columbus made with King Ferdinand and Queen Isabella, if Columbus discovered any new islands or mainland, he would receive many high rewards. He didn't, although he did take on governorship of the islands he believed to be the Indies, and acted "poorly" as said governor. | {
"source": [
"https://skeptics.stackexchange.com/questions/5271",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/334/"
]
} |
5,283 | I've heard that McDonalds and other fast food chains coat their fries in sugar water before frying to aid in color development, but the official McDonalds nutrition information (see page 2) indicates that their fries contain no sugar. Is there evidence that McDonalds uses sugar to prepare their fries, and if so, is it just too negligible an amount to show up in the nutritional info? | A small amount of sugar is used to even out the appearance - to ensure an even quantity of reducing sugar is available for Maillard's reaction to proceed equally everywhere (see Wikipedia link for more info). However, the amount does not significantly affect the taste or nutritional value of the fries. [Washing the potatoes with slightly sugary water] It's not done for flavouring and it doesn't mean you're eating additional sugar. In fact, total sugar represents approximately .007% per pound of potato, so it's an extremely small part of the finished product. — McDonald's Australia Regarding the common claim that the fries are sweet because of this, there is, in fact, evidence to the contrary. Sugar, when exposed to high temperatures such as that of the oil blend that McDonald's uses when frying, caramelises. The dark brown colour at the top of a creme-caramel is due to caramelisation of sugar. Fast food fries are most definitely not caramelised. The browning of the chips is through Maillard's reaction and not caramelisation. The light brown colour of French fries is due to Maillard's reaction. Fries are sweet because the starch is first gelatinised during cooking and then broken down into glucose thanks to the amylase enzyme present in our mouth. Facts and references: oil used by McDonald's to fry: canola oil; canola oil smoking point: 230˚C; common fry temperature for chips: 200˚C; caramelisation temperature for sugar: 160˚C; Maillard's reaction temperature: 154˚C; temperature at which starch gelatinises: 55–85˚C; saliva contains: amylase.
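To see why this would never show up on a nutrition label, here is a rough sketch of the arithmetic; the serving weight and the label rounding threshold are my assumptions for illustration, not figures from the McDonald's statement:

```python
# Rough estimate of residual sugar in fries, from the 0.007%-per-pound
# figure quoted by McDonald's Australia. The serving weight and the
# labelling threshold are illustrative assumptions, not official figures.

pound_g = 453.6                  # grams in one pound of potato
sugar_fraction = 0.007 / 100     # 0.007% total sugar by weight
serving_g = 117                  # assumed weight of a medium serving of fries

sugar_per_pound = pound_g * sugar_fraction
sugar_per_serving = serving_g * sugar_fraction

print(f"Sugar per pound of potato: {sugar_per_pound:.3f} g")    # ~0.032 g
print(f"Sugar per serving:         {sugar_per_serving:.4f} g")  # ~0.0082 g
print("Assumed label rounding: amounts under 0.5 g are declared as 0 g")
```

Either way the result is hundredths of a gram per serving, far below the level at which a label would report anything other than zero. | {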
"source": [
"https://skeptics.stackexchange.com/questions/5283",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/4055/"
]
} |
5,344 | The claim is that the Daddy Long Legs (Pholcus phalangioides) spider is the most venomous spider in the world, but that its fangs are too small to be able to penetrate human skin. The Wikipedia article calls it an urban myth and links through to some supposed "research", but it's just some guy claiming: There is no scientific basis for the supposition that they are deadly poisonous and there is no reason to assume that it is true. I don't feel a lack of evidence should stand in for actual scientific research. There is also a link to a MythBusters episode where they apparently get a Daddy Long Legs spider to bite someone. Is it possible for the Pholcidae spider's fangs to penetrate the skin, and has there been any formal measurement of the common dose size and toxicity of the venom? | It's a myth. From the University of California, Riverside: ... There is no scientific basis for the supposition that they are deadly poisonous and there is no reason to assume that it is true. There is no reference to any pholcid spider biting a human and causing any detrimental reaction. If these spiders were indeed deadly poisonous but couldn't bite humans, then the only way we would know that they are poisonous is by milking them and injecting the venom into humans. For a variety of reasons including Amnesty International and a humanitarian code of ethics, this research has never been done. Furthermore, there are no toxicological studies testing the lethality of pholcid venom on any mammalian system (this is usually done with mice). Therefore, no information is available on the likely toxic effects of their venom in humans, so the part of the myth about their being especially poisonous is just that: a myth. What about their fangs being too short to penetrate human skin? Pholcids do indeed have short fangs, which in arachnological terms is called "uncate" because they have a secondary tooth which meets the fang like the way the two grabbing parts of a pair of tongs come together. Brown recluse spiders similarly have uncate fang structure and they obviously are able to bite humans. There may be a difference in the musculature that houses the fang such that recluses have stronger muscles for penetration because they are hunting spiders needing to subdue prey, whereas pholcid spiders are able to wrap their prey and don't need as strong a musculature. So, again, the myth states as fact something about which there is no scientific basis. A video of the MythBusters can be watched here: Supposedly, daddy longlegs possess extremely powerful poison, but their fangs are too short to penetrate human skin. To find out, Jamie and Adam hunted down a host of daddy longlegs and took them to a spider specialist who could milk out their venom. Next, the spider specialist compared the toxicity of daddy longlegs venom to black widow venom (on mice). The red-bellied widow won out, busting the myth. A microscopic measurement of the long-legged spider's fangs proved their minuscule quarter-millimeter length could puncture human skin, taking a double bite out of the daddy longlegs myth. [Source] As seen in the video, Adam did let himself get bitten and only felt "a tiny little burning". | {
"source": [
"https://skeptics.stackexchange.com/questions/5344",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/374/"
]
} |
5,352 | I recently came across the following quote on the twitters, attributed to Albert Einstein: Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid. I doubt that it's a genuine quote, but is the implied claim that all fish are incapable of climbing trees true? Related question: Did Einstein say the "if you judge a fish" quote that many are attributing to him? | There are several fish which are capable of achieving this: Mangrove killifish The mangrove killifish (Rivulus marmoratus), found among the mangroves in Florida, Latin America and the Caribbean, is a strange fish indeed. For starters, this is the only vertebrate animal that is known to fertilize its own eggs. There are males and females, but most of these little fish are hermaphrodites. Mangrove killifish are able to alter their gills to be able to live out of the water. When the water around the mangroves dries up, these fish climb up into the trees and hide in logs until the water returns. Once it's safe, they change their gills back and venture back to the water. Climbing Gourami Shirlie from Freshwater Aquariums told me about these fish when I was excitedly telling her about the mangrove killifish. I've always been a fan of the beautiful gouramis in my own aquarium, but I never knew that any of them could climb. They hail from Africa and Southern Asia. One is called a climbing perch (Anabas testudineus). If the water it lives in dries out, it will climb out and travel in search of a new home. Its gills are spiny, and the climbing perch can use them (as well as its anal fin) to even climb up trees. Climbing Catfish Finally, scientists have recently discovered Lithogenes wahari. This member of the catfish family can actually grasp with its pelvic fin. They have been finding specimens clinging to rocks, but it's not a stretch to think that they could climb trees too. This fish also has a sort of bony armor that protects its head and tail. From Science Centric, regarding the climbing catfish: the bony armour that protects its head and tail, and a grasping pelvic fin that allows it to climb vertical surfaces. It's definitely true that some fish can climb trees, though it's equally certain that they don't do it very often. | {
"source": [
"https://skeptics.stackexchange.com/questions/5352",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/104/"
]
} |
5,459 | My dad just bought a device that claims to reduce electricity consumption in the house. It is called a Spike Buster. It is just a small box with an LED that you connect to any socket in the house. It claims to stabilize the voltage of the electricity, and as a result the consumption is reduced. This looks very suspicious to me. It doesn't make sense to me. Is there any reliable info on these devices that I can show to my dad? | Definite scam! Save up to 50% on your electric bill? Because of a device plugged into any outlet? Uh, no. Here's PG&E with a warning about "black box" energy savers: Before You Buy a Black Box In the last few years I've noticed a huge "surge" :) in scam "electricity saver" products. Some are PFC capacitors, some are plans for "free energy" devices, some are ripoffs involving solar panels. Yours is just one of many, although this is the first time I've seen a claim that a surge protector can affect your kWh meter. And if it's actually a voltage regulator, then it needs to be wired into your fusebox panel. A regulator needs a series connection and cannot work by being plugged into an outlet. Similar topic: The old standby scam is to sell capacitors to homeowners, claiming that the AC motors in their appliances need correct power factor. The scammers can get away with this because PFC (power-factor correction) does actually save energy. Unfortunately, the saved energy was all in the utility company's power lines, not inside the home. The electric company will love you if you spend hundreds of bucks on a PFC capacitor. Their power lines run slightly cooler, and their generators use slightly less fuel. The higher current all remains between the capacitor and your various motors, so the electric company is happy. But the scam part relies on customer ignorance: your kWh meter cannot detect the change. That's part of its design. Power-factor correction has zero effect on your kWh meter, so it won't actually save you a dime. If you buy a fake "Power Saver," you'll probably see your electric bill actually go down. This is an interesting psychological effect: spending that much money makes you become waste-conscious about the electric utility. You'll start turning off lights, taking shorter showers, running dishwashers only when full. Doing all sorts of things to help reduce the bills. When it actually works, should we give credit to the magic scam box? Send testimonials to the advertisers? Instead, try installing the device in someone else's home, then don't tell them about it. Will it still work? Better yet, have a third party do it without telling anyone which house has the device. To stop everyone from unconsciously helping it along, the only fair test is a double-blind experiment. Well, actually the fairest test is to use a high-precision real-wattage recorder to measure home energy consumption with the device attached and with it removed. We all have one of these: the electric meter outside the house. Run your electric clothes dryer, hook up the "power saver" device, then count the rotations of the meter disk for exactly one minute. Then unplug the device and count them for another minute. Repeat a few times. Can you see any difference? If so, is it large enough to justify the purchase price of the device? One type of "power saver" does actually work: the NASA "Nola" devices used to control induction motors (for example, refrigerator motors).
They remain controversial because they're mostly used industrially, and the energy savings are probably too small for any homeowner to justify buying the expensive controller. If someone tries to sell you one, they're probably lying about the expected amount of energy savings.
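To make the power-factor point above concrete, here's a minimal sketch of the arithmetic in Python, assuming an illustrative 1 kW motor load at 230 V with a power factor of 0.70 corrected to 0.95 (all numbers are made up for the example, not measurements of any real device):

```python
# Why a power-factor-correction capacitor can't change a residential bill:
# the kWh meter bills real power, while PFC only reduces apparent power.
# All numbers below are made up for illustration.

voltage = 230.0        # mains voltage in volts
real_power_kw = 1.0    # real power drawn by an induction motor, in kW
pf_before = 0.70       # assumed power factor without the capacitor
pf_after = 0.95        # assumed power factor with the capacitor

for label, pf in [("before PFC", pf_before), ("after PFC", pf_after)]:
    apparent_kva = real_power_kw / pf            # S = P / cos(phi)
    current_a = apparent_kva * 1000.0 / voltage  # line current I = S / V
    print(f"{label}: real {real_power_kw:.2f} kW, "
          f"apparent {apparent_kva:.2f} kVA, line current {current_a:.1f} A")

# Real power -- the quantity the meter integrates into kWh -- is identical
# in both cases; only the current and the utility's line losses drop.
```

The billed quantity never moves before and after correction, which is exactly why the meter-disk test described above shows no difference. | {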
"source": [
"https://skeptics.stackexchange.com/questions/5459",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/3213/"
]
} |
5,480 | There are stories like this floating around the internet. Apple is now more liquid than the United States government, the Financial Post reports. As the government struggles to resolve the debt ceiling debate, the operating balance in Washington is at US$73.768 billion and falling. Meanwhile, Apple has US$75.876 billion – and that number isn't going anywhere but up as the company continues to break records and make its competitors look bad. Does Apple really have more money than the U.S. government? Is the story misleading? | This is just a case of a total misunderstanding of terms. The original claim was that Apple has more cash than the operating balance of the US federal government. The operating balance is, roughly, the cash the Treasury currently has on hand to spend, not a measure of the government's total resources. Of course, the numbers aren't directly comparable; the government's number represents how much financial headroom it has before bumping up against an arbitrary debt ceiling, while Apple's cash reserve represents the pile of money the Cupertino... The operating balance is something completely different from reserves. As for reserves, the current assets of the Federal Reserve are worth $2,907,837 million, which is almost 40 times more than Apple's reserves.
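As a quick sanity check on those figures (just the numbers quoted above, all in millions of US dollars):

```python
# Comparing the figures quoted above; everything in millions of US dollars.

fed_current_assets = 2_907_837   # current assets of the Federal Reserve
apple_cash = 75_876              # Apple's cash pile (US$75.876 billion)
treasury_balance = 73_768        # US operating balance (US$73.768 billion)

print(f"Fed assets vs Apple cash:       {fed_current_assets / apple_cash:.1f}x")
print(f"Apple cash vs Treasury balance: {apple_cash / treasury_balance:.2f}x")
# Prints roughly 38.3x and 1.03x respectively.
```

So Apple's cash only edges out one narrow Treasury cash figure by about 3%, while the Fed's assets dwarf it roughly 38-fold. | {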
"source": [
"https://skeptics.stackexchange.com/questions/5480",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/2079/"
]
} |
5,484 | Claim:
Drinking distilled water can actually remove needed minerals from the body, due to the lack of minerals in the water, and therefore do damage to the body. I have a hard time accepting this, given the amount of minerals that can be found in the body. However, I know that you can drown the body by drinking too much water, and that the body does use water to remove waste from its cells. | This is true. I did some digging: the Wikipedia article on water purification linked to a study by the WHO that discusses the risks of drinking demineralised water. It states: It has been adequately demonstrated that consuming water of low mineral content has a negative effect on homeostasis mechanisms, compromising the mineral and water metabolism in the body. This means that it interferes with the body's attempts to keep the pH and mineral composition of internal organs constant, which is a more exact way of saying that it removes minerals from your body. It reports some of the effects: Results of experiments in human volunteers evaluated by researchers for the WHO report are in agreement with those in animal experiments and suggest the basic mechanism of the effects of water low in TDS (e.g. < 100 mg/L) on water and mineral homeostasis. Low-mineral water markedly: 1) increased diuresis (almost by 20%, on average), body water volume, and serum sodium concentrations, 2) decreased serum potassium concentration, and 3) increased the elimination of sodium, potassium, chloride, calcium and magnesium ions from the body. The report warns of other indirect downsides of demineralised water, including corrosion of pipes and reduction of minerals in food cooked with it. Ref: Health Risks From Drinking Demineralised Water, Frantisek Kozisek, Water safety plan manual: Step-by-step risk management for drinking-water suppliers, 2009, Chapter 12. | {
"source": [
"https://skeptics.stackexchange.com/questions/5484",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/4245/"
]
} |
SOURCE PC World reports that a "psychometric consulting" firm called AptiQuant gave free online IQ tests to 100,000 people, and then plotted the scores against the browser on which the tests were taken. It found that Internet Explorer users scored lower than average, while Chrome, Firefox and Safari users were very slightly above average. Camino, Opera and Internet Explorer with Chrome Frame scored "exceptionally" high. I have a feeling that this is nothing more than a marketing gimmick, but is there any validity in the study? Is there any independent data to back this up or refute it? Can anyone who takes an online IQ test really be allowed to score "exceptional"? | The 'study' has apparently since been exposed as a hoax: BBC News: Internet Explorer story was bogus A story which suggested that users of Internet Explorer have a lower IQ than people who chose other browsers appears to have been an elaborate hoax. A number of media organisations, including the BBC, reported on the research, put out by Canadian firm ApTiquant. (sic) It later emerged that the company's website was only recently set up and staff images were copied from a legitimate business in Paris. IT News: Analysis: Is the Internet Explorer IQ test a fake? But whois records - the database that lists who was responsible for a website, such as its technical contacts and owner - and web content comparisons raised questions over the company behind the survey that was the subject of the story. So currently there is no evidence to support a link between IQ and browser choice. Other sources: http://www.ibtimes.com/articles/191615/20110803/internet-explorer-aptiquant-iq-study-hoax.htm | {
"source": [
"https://skeptics.stackexchange.com/questions/5553",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/3126/"
]
} |
5,581 | Considering the number of people that use it compared to other transports (cars, trains, boats), are you less likely to die in a plane? Are there any studies about this? | The question of airplane safety is a nice field to demonstrate the old quip that you shouldn't trust any statistic you haven't forged yourself. It is crucial to define exactly what you mean by safe, and what you are comparing: Are you interested in total deaths per year? Then the car is the most dangerous and, depending on the country you are looking at, trains are the least dangerous, with airplanes also quite safe: http://www.medicine.ox.ac.uk/bandolier/booth/Risk/trasnsportpop.html Your annual odds of dying in a passenger airline accident are 1 in 4,406,209 in the US, whereas the annual odds of dying in a car accident are 1 in 6,478. This seems to indicate that a plane is almost a thousand times safer than a car. But is that the question one should be asking? You could ask: What is the chance of death in a transport-related accident per hour of travel, or per kilometer traveled? I found this short statement in a review of the book "Freakonomics": Levitt also debunks some of today's more relevant conventional wisdom: the notion of driving being a far more dangerous form of transportation than flying. While it's true that many more people die annually in car accidents than in plane crashes, it's often overlooked that the dramatic difference in number of deaths is largely due to the amount of time the average person spends in an automobile in comparison to the relatively small number of hours spent in flight. Levitt goes on to show the per hour death rate of driving to be about equal to that of flying. Have a nice trip. So what is the conclusion we can draw? Well, you can convert the chance of death per hour into a chance of death per trip by multiplying by the duration of the trip. And even if both car and plane have the same hourly rate, a trip in a plane to the same destination is far shorter than a trip in the car. If the hourly death rates are the same, then a plane is safer on a per-trip basis by exactly how much faster it is than a car. While this still means that a plane is safer than a car per trip, the factor is far less than 1000. More on the order of ~10.
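A minimal sketch of that per-hour to per-trip conversion; the equal hourly rate is the review's claim, while the speeds, distance and rate value are assumptions made up for the example:

```python
# Converting an (assumed equal) per-hour death rate into per-trip risk
# for the same journey by car and by plane. All numbers are illustrative.

hourly_risk = 1e-7         # assumed deaths per person-hour, equal for both modes
trip_km = 800.0            # assumed length of the journey
speeds = {"car": 100.0, "plane": 800.0}   # assumed average speeds in km/h

for mode, speed_kmh in speeds.items():
    hours_en_route = trip_km / speed_kmh
    per_trip_risk = hourly_risk * hours_en_route
    print(f"{mode}: {hours_en_route:.1f} h en route, "
          f"per-trip risk ~{per_trip_risk:.1e}")

# With equal hourly risk, the plane's per-trip risk is lower exactly by
# the speed ratio (8x here), not by the ~1000x the annual odds suggest.
```

With equal hourly risk, the plane's per-trip advantage is just the speed ratio (8x in this example), consistent with the ~10 figure above rather than the factor of 1000 suggested by the annual odds. | {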
"source": [
"https://skeptics.stackexchange.com/questions/5581",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/417/"
]
} |