# Was Bill Clinton a Good President?
**Argument**
Science / Technology:
Clinton cut NASA’s budget by $715 million in 1995 (about 5%) and did not restore the bulk of the money until three months before he left office. The result was a space program struggling to operate with less money for most of Clinton’s time in office. Some blame the 2003 Space Shuttle Columbia explosion on Clinton’s decision to slash NASA’s budget by an aggregate of $56 million over his presidency.
**Background**
William Jefferson Clinton, known as Bill Clinton, served as the 42nd President of the United States from Jan. 20, 1993 to Jan. 20, 2001.
His proponents contend that under his presidency the US enjoyed the lowest unemployment and inflation rates in recent history, high home ownership, low crime rates, and a budget surplus. They give him credit for eliminating the federal deficit and reforming welfare, despite being forced to deal with a Republican-controlled Congress.
His opponents say that Clinton cannot take credit for the economic prosperity experienced during his scandal-plagued presidency because it was the result of other factors. In fact, they blame his policies for the financial crisis that began in 2007. They point to his impeachment by Congress and his failure to pass universal health care coverage as further evidence that he was not a good president.
|
# Animal Dissection - Pros & Cons - ProCon.org
**Argument**
Animal dissection is a productive and worthwhile use for dead animals.
A large portion of dissected animals were already dead before being allocated for dissection. Having students dissect these animals turns them into a learning opportunity rather than letting them go to waste.
Bio Corp, a biological supply company, reported that more than 98% of the animals they received were already dead. Bill Wadd, Co-Owner of Bio Corp, stated, “We just take what people would throw away. Instead of throwing it in the trash, why not have students learn from it?”
Most animals used in classroom dissections are purchased from biological supply companies. Some animals, such as cats, are sourced from shelters that have already euthanized the animals. However, cats and dogs account for fewer than 1% of lab animals. Fetal pigs are byproducts of the meat industry that would have otherwise been sent to a landfill.
**Background**
Dissecting a frog might be one of the most memorable school experiences for many students, whether they are enthusiastic participants, prefer lab time to lectures, or are conscientious objectors to dissection.
The use of animal dissection in education goes back as far as the 1500s when Belgian doctor Andreas Vesalius used the practice as an instructional method for his medical students. [1]
Animal dissections became part of American K-12 school curricula in the 1920s. About 75-80% of North American students will dissect an animal by the time they graduate high school. An estimated six to 12 million animals are dissected in American schools each year. In at least 21 states and DC, K-12 students have the legal option to request an alternate assignment to animal dissection. [2] [3] [27]
While frogs are the most common animal for K-12 students to dissect, students also encounter fetal pigs, cats, rabbits, guinea pigs, rats, minks, birds, turtles, snakes, crayfish, perch, starfish, and earthworms, as well as grasshoppers and other insects. Sometimes students dissect parts of animals such as sheep lungs, cows’ eyes, and bull testicles. [2]
Are animal dissections in K-12 schools crucial learning opportunities that encourage science careers and make good use of dead animals? Or are animal dissections unnecessary experiments that promote environmental damage when ethical alternatives exist?
|
# Should the Federal Minimum Wage Be Increased?
**Argument**
If the minimum wage is increased, companies may use more robots and automated processes to replace service employees.
If companies cannot afford to pay a higher minimum wage for low-skilled service employees, they will use automation to avoid hiring people in those positions altogether. Oxford University researchers Carl Benedikt Frey, PhD, and Michael A. Osborne, DPhil, stated in a 2013 study that “robots are already performing many simple service tasks such as vacuuming, mopping, lawn mowing, and gutter cleaning” and that “commercial service robots are now able to perform more complex tasks in food preparation, health care, commercial cleaning, and elderly care.” As attorney Andrew Woodman, JD, predicted in his blog for the Huffington Post, a minimum wage increase “could ultimately be the undoing of low-income service-industry jobs in the United States.” The Washington Post observed that as minimum wage campaigns gain traction around the country, “Many [restaurant] chains are already at work looking for ingenious ways to take humans out of the picture, threatening workers in an industry that employs 2.4 million wait staffers, nearly 3 million cooks and food preparers and many of the nation’s 3.3 million cashiers.”
**Background**
The federal minimum wage was introduced in 1938 during the Great Depression under President Franklin Delano Roosevelt. It was initially set at $0.25 per hour and has been increased by Congress 22 times, most recently in 2009 when it went from $6.55 to $7.25 an hour. Twenty-nine states plus the District of Columbia (DC) have a minimum wage higher than the federal minimum wage, and 1.8 million workers (2.3% of the hourly paid working population) earn the federal minimum wage or below.
Proponents of a higher minimum wage state that the current federal minimum wage of $7.25 per hour is too low for anyone to live on; that a higher minimum wage will help create jobs and grow the economy; that the declining value of the minimum wage is one of the primary causes of wage inequality between low- and middle-income workers; and that a majority of Americans, including a slim majority of self-described conservatives, support increasing the minimum wage.
Opponents say that many businesses cannot afford to pay their workers more, and will be forced to close, lay off workers, or reduce hiring; that increases have been shown to make it more difficult for low-skilled workers with little or no work experience to find jobs or become upwardly mobile; and that raising the minimum wage at the federal level does not take into account regional cost-of-living variations where raising the minimum wage could hurt low-income communities in particular.
|
# Should Animals Be Used for Scientific or Commercial Testing?
**Argument**
Animal testing contributes to life-saving cures and treatments.
The California Biomedical Research Association states that nearly every medical breakthrough in the last 100 years has resulted directly from research using animals. Animal research has contributed to major advances in treating conditions such as breast cancer, brain injury, childhood leukemia, cystic fibrosis, multiple sclerosis, tuberculosis, and more, and was instrumental in the development of pacemakers, cardiac valve substitutes, and anesthetics.
**Background**
An estimated 26 million animals are used every year in the United States for scientific and commercial testing. Animals are used to develop medical treatments, determine the toxicity of medications, check the safety of products destined for human use, and other biomedical, commercial, and health care uses. Research on living animals has been practiced since at least 500 BC.
Proponents of animal testing say that it has enabled the development of many life-saving treatments for both humans and animals, that there is no alternative method for researching a complete living organism, and that strict regulations prevent the mistreatment of animals in laboratories.
Opponents of animal testing say that it is cruel and inhumane to experiment on animals, that alternative methods available to researchers can replace animal testing, and that animals are so different from human beings that research on animals often yields irrelevant results.
|
# Should Pit Bulls Be Banned? Top 3 Pros and Cons
**Argument**
BSL is expensive to enact.
Nationwide, BSL would cost an estimated $476 million per year, including enforcement of the law, related vet and shelter care, euthanization and disposal, and legal fees. There are about 4.5 million dog bites per year, resulting in about 40 deaths, making each death cost taxpayers about $11.9 million.
That’s a steep cost for a relatively small, albeit important, issue. There are about 78 million dogs in the United States, meaning that fewer than 17% of dogs bite, that bites affect less than 1.4% of the US population, and that fatal attacks affect less than 0.00001%. And, of course, breeds not covered by BSL also bite: one study of 35 common breeds found Chihuahuas were the most aggressive.
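The per-incident arithmetic behind these figures can be checked directly. Here is a minimal sketch; the US population of roughly 328 million is an assumption added for illustration and is not stated in the text:

```python
# Rough arithmetic behind the BSL cost figures above.
# Assumption: US population of ~328 million (not given in the text).
bsl_annual_cost = 476_000_000   # estimated nationwide cost of BSL per year, USD
dog_bites_per_year = 4_500_000
fatal_attacks_per_year = 40
us_population = 328_000_000     # assumed

cost_per_fatality = bsl_annual_cost / fatal_attacks_per_year
share_of_people_bitten = dog_bites_per_year / us_population

print(f"Cost per fatality: ${cost_per_fatality:,.0f}")              # $11,900,000
print(f"Share of population bitten: {share_of_people_bitten:.2%}")  # ~1.37%
```

Dividing the $476 million annual cost by the roughly 40 fatalities per year reproduces the $11.9 million-per-death figure quoted above.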
**Background**
Breed-specific legislation (BSL) is a “blanket term for laws that regulate or ban certain dog breeds in an effort to decrease dog attacks on humans and other animals,” according to the American Society for the Prevention of Cruelty to Animals (ASPCA). The laws are also called pit bull bans and breed-discriminatory laws. [1]
The legislation frequently covers any dog deemed a “pit bull,” which can include American Pit Bull Terriers, American Staffordshire Terriers, Staffordshire Bull Terriers, English Bull Terriers, and pit bull mixes, though any dog that resembles a pit bull or pit bull mix can be included in the bans. Other dogs are also sometimes regulated, including American Bulldogs, Rottweilers, Mastiffs, Dalmatians, Chow Chows, German Shepherds, and Doberman Pinschers, as well as mixes of these breeds or, again, dogs that simply resemble the restricted breeds. [1]
The term “pit bull” refers to a dog with certain characteristics, rather than a specific breed. Generally, the dogs have broad heads and muscular bodies. Pit bulls are targeted because of their history in dog fighting. [2]
Dog fighting dates to at least 43 CE, when the Romans invaded Britain, and both sides brought fighting dogs to the war. The Romans believed the British to have better-trained fighting dogs and began importing (and later exporting) the dogs for war and entertainment wherein the dogs were made to fight against wild animals, including elephants. From the 12th century until the 19th century, dogs were used for baiting chained bears and bulls. In 1835, England outlawed baiting, which then increased the popularity of dog-on-dog fights. [3] [4]
Fighting dogs arrived in the United States in 1817, whereupon Americans crossbred several breeds to create the American Pit Bull. The United Kennel Club endorsed the fights and provided referees. Dog fighting was legal in most US states until the 1860s, and it was not completely outlawed in all states until 1976. Today, dog fighting is a felony offense in all 50 states, though the fights thrive in illegal underground venues. [3] [4]
More than 700 cities in 29 states have breed-specific legislation, while 20 states do not allow breed-specific legislation, and one allows no new legislation after 1990, as of Apr. 1, 2020. [1]
|
# Internet & "Stupidity" - Pros & Cons - ProCon.org
**Argument**
The internet is causing us to lose the ability to perform simple tasks.
“Hey, Alexa, turn on the bathroom light… play my favorite music playlist, cook rice in the Instant Pot… read me the news… what’s the weather today…”
“Hey, Siri, set a timer… call my sister… get directions to Los Angeles… what time is it in Tokyo… who stars in that TV show I like…”
While much of the technology is too new to have been thoroughly researched, we rely on the internet for everything from email to seeing who is at our front doors to looking up information, so much so that we forget how to or never learn to complete simple tasks. And the accessibility of information online makes us believe we are smarter than we are.
In the 2018 election, Virginia state officials learned that young adults in Generation Z wanted to vote by mail but did not know where to buy stamps because they are so used to communicating online rather than via US mail.
We require GPS maps narrated by the voice of a digital assistant to drive across the towns in which we have lived for years. Nora Newcombe, PhD, Professor of Psychology at Temple University, stated, “GPS devices cause our navigational skills to atrophy, and there’s increasing evidence for it. The problem is that you don’t see an overview of the area, and where you are in relation to other things. You’re not actively navigating — you’re just listening to the voice.”
Millennials were more likely than Generation X or Baby Boomers, who relied less on the internet for cooking tasks, to use pre-prepared foods, look up recipes online, and use a meal delivery service. They were also the least likely to know offhand how to prepare lasagna, carve a turkey, or fry chicken, and fewer of them reported being a “good cook.”
Using the internet to store information we previously would have committed to memory (how to roast a chicken, for example) is “offloading.” According to Benjamin Storm, PhD, Associate Professor of Psychology at the University of California at Santa Cruz, “Offloading robs you of the opportunity to develop the long-term knowledge structures that help you make creative connections, have novel insights and deepen your knowledge.”
**Background**
In a 2008 article for The Atlantic, Nicholas Carr asked, “Is Google Making Us Stupid?” Carr argued that the internet as a whole, not just Google, has been “chipping away [at his] capacity for concentration and contemplation.” He was concerned that the internet was “reprogramming us.” [1]
However, Carr also noted that we should “be skeptical of [his] skepticism,” because maybe he’s “just a worrywart.” He explained, “Just as there’s a tendency to glorify technological progress, there’s a countertendency to expect the worst of every new tool or machine.” [1]
The article, and Carr’s subsequent book, The Shallows: What the Internet Is Doing to Our Brains (2010, revised in 2020), ignited a continuing debate on and off the internet about how the medium is changing the ways we think, how we interact with text and each other, and the very fabric of society as a whole. [1]
ProCon asked readers their thoughts on how the internet affects their brains and whether online information is reliable and trustworthy. While 52.7% agreed or strongly agreed that being on the internet has caused a decline in their attention span and ability to concentrate, only 21.5% thought the internet caused them to lose the ability to perform simple tasks like reading a map. [41]
Only 18% believed online information was true. Nearly 60% admitted difficulty in determining if information online was truthful. And 77% desired a more effective way of managing and filtering information on the internet to differentiate between fact, opinion, and overt disinformation. [41]
Between Apr. 28, 2021, and Sep. 1, 2022, the survey garnered 15,740 responses. [41]
|
# Should the Federal Minimum Wage Be Increased?
**Argument**
Increasing the minimum wage would reduce poverty.
A person working full time at the federal minimum wage of $7.25 per hour earns $15,080 in a year, which is about 22% higher than the 2015 federal poverty level of $12,331 for a one-person household under 65 years of age but about 8% below the 2015 federal poverty level of $16,337 for a single-parent family with a child under 18 years of age. According to a 2014 Congressional Budget Office report, increasing the minimum wage to $9 would lift 300,000 people out of poverty, and an increase to $10.10 would lift 900,000 people out of poverty. A 2013 study by University of Massachusetts at Amherst economist Arindrajit Dube, PhD, estimated that increasing the minimum wage to $10.10 is “projected to reduce the number of non-elderly living in poverty by around 4.6 million, or by 6.8 million when longer term effects are accounted for.”
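The earnings arithmetic above is easy to verify: a full-time year is conventionally taken as 40 hours per week for 52 weeks. A quick check against the 2015 poverty thresholds quoted in the text:

```python
# Full-time annual earnings at the federal minimum wage, and how they
# compare with the 2015 poverty thresholds quoted above.
hourly_wage = 7.25
annual_earnings = hourly_wage * 40 * 52      # 40 hours/week, 52 weeks/year

poverty_single_under_65 = 12_331             # one-person household, under 65
poverty_single_parent_one_child = 16_337     # single parent, one child under 18

print(annual_earnings)                                                  # 15080.0
print(f"{annual_earnings / poverty_single_under_65 - 1:.1%} above")     # ~22% above
print(f"{1 - annual_earnings / poverty_single_parent_one_child:.1%} below")  # ~8% below
```

The same $15,080 figure falls above one poverty line and below the other, which is the crux of the argument.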
|
# Do Violent Video Games Contribute to Youth Violence?
**Argument**
Simulating violence such as shooting guns and hand-to-hand combat in video games can cause real-life violent behavior.
Video games often require players to simulate violent actions, such as stabbing, shooting, or dismembering someone with an ax, sword, chainsaw, or other weapons.
**Background**
Around 73% of American kids ages 2-17 played video games in 2019, a 6% increase over 2018. Video games accounted for 17% of kids’ entertainment time and 11% of their entertainment spending. The global video game industry was worth $159.3 billion in 2020, a 9.3% increase from 2019.
Violent video games have been blamed for school shootings, increases in bullying, and violence towards women. Critics argue that these games desensitize players to violence, reward players for simulating violence, and teach children that violence is an acceptable way to resolve conflicts.
Video game advocates contend that a majority of the research on the topic is deeply flawed and that no causal relationship has been found between video games and social violence. They argue that violent video games may provide a safe outlet for aggressive and angry feelings and may reduce crime.
|
# Is Human Activity Primarily Responsible for Global Climate Change?
**Argument**
Permafrost is melting at unprecedented rates due to global warming, causing further climate changes.
According to the IPCC, there is “high confidence” (about an 8 out of 10 chance) that anthropogenic global warming is causing permafrost, a subsurface layer of frozen soil, to melt in high-latitude regions and in high-elevation regions. As permafrost melts it releases methane, a greenhouse gas that absorbs 84 times more heat than CO2 for the first 20 years it is in the atmosphere, creating even more global warming in a positive feedback loop.
By the end of the 21st century, warming temperatures in the Arctic will cause a 30%-70% decline in permafrost. As human-caused global warming continues, Arctic air temperatures are expected to increase at twice the global rate, increasing the rate of permafrost melt, changing the local hydrology, and impacting critical habitat for native species and migratory birds. According to the 2014 National Climate Assessment, some climate models suggest that near-surface permafrost will be “lost entirely” from large parts of Alaska by the end of the 21st century.
**Background**
Average surface temperatures on earth have risen more than 2°F over the past 100 years. During this time period, atmospheric levels of greenhouse gases such as carbon dioxide (CO2) and methane (CH4) have notably increased. This site explores the debate on whether climate change is caused by humans (also known as anthropogenic climate change).
The pro side argues rising levels of atmospheric greenhouse gases are a direct result of human activities such as burning fossil fuels, and that these increases are causing significant and increasingly severe climate changes including global warming, loss of sea ice, sea level rise, stronger storms, and more droughts. They contend that immediate international action to reduce greenhouse gas emissions is necessary to prevent dire climate changes.
The con side argues human-generated greenhouse gas emissions are too small to substantially change the earth’s climate and that the planet is capable of absorbing those increases. They contend that warming over the 20th century resulted primarily from natural processes such as fluctuations in the sun’s heat and ocean currents. They say the theory of human-caused global climate change is based on questionable measurements, faulty climate models, and misleading science.
|
# Should the Federal Minimum Wage Be Increased?
**Argument**
Raising the minimum wage would help reduce the federal deficit.
According to Aaron Pacitti, PhD, Associate Professor of Economics at Siena College, raising the minimum wage would help reduce the federal budget deficit “by lowering spending on public assistance programs and increasing tax revenue. Since firms are allowed to pay poverty-level wages to 3.6 million people — 5 percent of the workforce — these workers must rely on Federal income support programs. This means that taxpayers have been subsidizing businesses, whose profits have risen to record levels over the past 30 years.” According to James K. Galbraith, PhD, Professor of Government at the University of Texas in Austin, “[b]ecause payroll- and income-tax revenues would rise [as a result of an increase in the minimum wage], the federal deficit would come down.”
|
# Should Animals Be Used for Scientific or Commercial Testing?
**Argument**
There is no adequate alternative to testing on a living, whole-body system.
Living systems, such as human beings and animals, are extremely complex. Studying cell cultures in a petri dish, while sometimes useful, does not provide the opportunity to study interrelated processes occurring in the central nervous system, endocrine system, and immune system. Evaluating a drug for side effects requires a circulatory system to carry the medicine to different organs.
|
# Should the Federal Minimum Wage Be Increased?
**Argument**
Raising the minimum wage would disadvantage low-skilled workers.
From an employer’s perspective, people with the lowest skill levels cannot justify higher wages. A study by Jeffrey Clemens, PhD, and Michael J. Wither, PhD, found that minimum wage increases result in reduced average monthly incomes for low-skilled workers ($100 less during the first year following a minimum wage increase and $50 over the next two years) due to a reduction in employment. James Dorn, PhD, Senior Fellow at the Cato Institute, stated that a 10% increase in the minimum wage “leads to a 1 to 3 percent decrease in employment of low-skilled workers” in the short term, and “to a larger decrease in the long run.” George Reisman, PhD, Professor Emeritus of Economics at Pepperdine University, stated that if the minimum wage is increased to $10.10, “and the jobs that presently pay $7.25 had to pay $10.10, then workers who previously would not have considered those jobs because of their ability to earn $8, $9, or $10 per hour will now consider them… The effect is to expose the workers whose skills do not exceed a level corresponding to $7.25 per hour to the competition of better educated, more-skilled workers presently able to earn wage rates ranging from just above $7.25 to just below $10.10.”
|
# Universal Basic Income Pros and Cons - Top 3 Arguments For and Against
**Argument**
UBI removes the incentive to work, adversely affecting the economy and leading to a labor and skills shortage.
Earned income motivates people to work, be successful, work cooperatively with colleagues, and gain skills. However, “if we pay people, unconditionally, to do nothing… they will do nothing” and this leads to a less effective economy, says Charles Wyplosz, PhD, Professor of International Economics at the Graduate Institute in Geneva (Switzerland).
Economist Allison Schrager, PhD, says that a strong economy relies on people being motivated to work hard, and in order to motivate people there needs to be an element of uncertainty for the future. UBI, providing guaranteed security, removes this uncertainty.
Elizabeth Anderson, PhD, Professor of Philosophy and Women’s Studies at the University of Michigan, says that a UBI would cause people “to abjure work for a life of idle fun… [and would] depress the willingness to produce and pay taxes of those who resent having to support them.”
Guaranteed income trials in the United States in the 1960s and 1970s found that the people who received payments worked fewer hours. And, in 2016, the Swiss government opposed implementation of UBI, stating that it would entice fewer people to work and thus exacerbate the current labor and skills shortages.
Nicholas Eberstadt, PhD, Henry Wendt Chair in Political Economy, and Evan Abramsky, a Research Associate, both at the American Enterprise Institute (AEI), stated, “the daily routines of existing work-free men should make proponents of the UBI think long and hard. Instead of producing new community activists, composers, and philosophers, more paid worklessness in America might only further deplete our nation’s social capital at a time when good citizenship is already in painfully short supply.”
**Background**
A universal basic income (UBI) is an unconditional cash payment given at regular intervals by the government to all residents, regardless of their earnings or employment status. [45]
Pilot UBI or more limited basic income programs that give a basic income to a smaller group of people instead of an entire population have taken place or are ongoing in Brazil, Canada, China, Finland, Germany, India, Iran, Japan, Kenya, Namibia, Spain, and the Netherlands as of Oct. 20, 2020. [46]
In the United States, the Alaska Permanent Fund (APF), created in 1976, is funded by oil revenues. The APF provides dividends to permanent residents of the state. The amount varies each year based on the stock market and other factors, and has ranged from $331.29 (1984) to $2,072 (2015). The payout for 2020 was $992.00, the smallest check received since 2013. [46] [47] [48] [49]
UBI has been in American news mostly thanks to the 2020 presidential campaign of Andrew Yang, whose continued promotion of a UBI resulted in the formation of a nonprofit, Humanity Forward. [53]
|
# Space Colonization - Pros & Cons - ProCon.org
**Argument**
Technological advancement into space can exist alongside conservation efforts on Earth.
While Earth is experiencing devastating climate change effects that should be addressed, Earth will be habitable for at least 150 million years, if not over a billion years, based on current predictive models. Humans have time to explore and colonize space at the same time as we mend the effects of climate change on Earth.
Brian Patrick Green stated, “Furthermore, we have to realize that solving Earth’s environmental problems is extremely difficult and so will take a very long time. And we can do this while also pursuing colonization.”
Jeff Bezos suggested that we move all heavy industry off Earth and then zone Earth for residences and light industry only. Doing so could reverse some of the effects of climate change while colonizing space.
Gonzalo Munevar, PhD, also suggested something similar in more detail: “In the shorter term, a strong human presence throughout the solar system will be able to prevent catastrophes on Earth by, for example, deflecting asteroids on a collision course with us. This would also help preserve the rest of terrestrial life — presumably something the critics would approve of. But eventually, we should be able to construct space colonies… [structures in free space rather than on a planet or moon], which could house millions. These colonies would be positioned to construct massive solar power satellites to provide clean power to the Earth, as well as set up industries that on Earth create much environmental damage. Far from messing up environments that exist now, we would be creating them, with extraordinary attention to environmental sustainability.”
Space Ecologist Joe Mascaro, PhD, summarized, “To save the Earth, we have to go to Mars.” Mascaro argues that expanding technology to go to Mars will help solve problems on Earth: “The challenge of colonising Mars shares remarkable DNA with the challenges we face here on Earth. Living on Mars will require mastery of recycling matter and water, producing food from barren and arid soil, generating carbon-free nuclear and solar energy, building advanced batteries and materials, and extracting and storing carbon from atmospheric carbon dioxide – and doing it all at once. The dreamers, thinkers and explorers who decide to go to Mars will, by necessity, fuel unprecedented lateral innovations [that will solve problems on Earth].”
**Background**
While humans have long thought of gods living in the sky, the idea of space travel or humans living in space dates to at least 1610 after the invention of the telescope when German astronomer Johannes Kepler wrote to Italian astronomer Galileo: “Let us create vessels and sails adjusted to the heavenly ether, and there will be plenty of people unafraid of the empty wastes. In the meantime, we shall prepare, for the brave sky-travellers, maps of the celestial bodies.” [1]
In popular culture, space travel dates back to at least the mid-1600s, when Cyrano de Bergerac first wrote of traveling to space in a rocket. Space fantasies flourished after Jules Verne’s From the Earth to the Moon was published in 1865, and again when Georges Méliès released his film adaptation, A Trip to the Moon, in 1902. Dreams of space settlement hit a zenith in the 1950s with Walt Disney productions such as “Man and the Moon,” and science fiction novels including Ray Bradbury’s The Martian Chronicles (1950). [2] [3] [4]
Fueling popular imagination at the time was the American space race with Russia, amid which NASA (National Aeronautics and Space Administration) was formed in the United States on July 29, 1958, when President Eisenhower signed the National Aeronautics and Space Act into law. After the Russians put the first person, Yuri Gagarin, in space on Apr. 12, 1961, NASA put the first people, Neil Armstrong and Buzz Aldrin, on the Moon in July 1969. What was science fiction began to look more like possibility. Over the next six decades, NASA would launch space stations, land rovers on Mars, and send probes to Pluto and Jupiter, among other accomplishments. Launched by President Trump in 2017, NASA’s ongoing Artemis program intends to return humans to the Moon by 2024, landing the first woman on the lunar surface. The lunar launch is more likely to happen in 2025, due to a lag in space suit technology and delays with the Space Launch System rocket, the Orion capsule, and the lunar lander. [5] [6] [7] [8] [36]
As of June 17, 2021, three countries had space programs with human space flight capabilities: China, Russia, and the United States. (India’s planned human space flights have been delayed by the COVID-19 pandemic but may launch in 2023.) NASA, however, ended its space shuttle program in 2011, when the shuttle Atlantis landed at Kennedy Space Center in Florida on July 21. NASA astronauts going into space afterward rode along with the Russians until 2020, when SpaceX took over crewed launches; SpaceX launched NASA astronauts to the International Space Station on Apr. 23, 2021. SpaceX, the commercial space travel business owned by Elon Musk, has ignited commercial space travel enthusiasm and the idea of “space tourism,” and Richard Branson’s Virgin Galactic and Jeff Bezos’s Blue Origin have generated similar excitement. [9] [10] [11] [12] [13]
Richard Branson launched himself, two pilots, and three mission specialists into space from New Mexico for a 90-minute flight on the Virgin Galactic Unity 22 mission on July 11, 2021. The flight marked the first time that passengers, rather than astronauts, went into space. [14] [15]
Jeff Bezos followed on July 20, 2021, accompanied by his brother, Mark, and both the oldest and youngest people to go to space: 82-year-old Wally Funk, a female pilot who tested with NASA in the 1960s but never flew, and Oliver Daemen, an 18-year-old student from the Netherlands. The fully automated, unpiloted Blue Origin New Shepard rocket launched on the 52nd anniversary of the Apollo 11 moon landing and was named after Alan Shepard, who was the first American to travel into space on May 5, 1961. [16] [17]
On Apr. 8, 2022, a SpaceX capsule launched, carrying three paying customers and a former NASA astronaut on a round trip to the International Space Station (ISS). Mission AX-1 docked at the ISS on Apr. 9 with mission commander Michael Lopez-Alegría, a former NASA astronaut and current Axiom Space employee; Israeli businessman Eytan Stibbe; Canadian investor Mark Pathy; and American real estate magnate Larry Connor. The group returned to Earth on Apr. 25, 2022. While this was not the first time paying customers or non-astronauts had traveled to the ISS (Russia has sold Soyuz seats), it was the first American mission and the first with no government astronaut corps members. [38] [39]
The International Space Station has been continuously occupied by groups of six astronauts since Nov. 2000, for a total of 243 astronauts from 19 countries as of May 13, 2021. Astronauts spend an average of 182 days (about six months) aboard the ISS. As of Feb. 2020, Russian Valery Polyakov had spent the longest continuous time in space (437.7 days in 1994-1995 on space station Mir), followed by Russian Sergei Avdeyev (379.6 days in 1998-1999 on Mir), Russians Vladimir Titov and Musa Manarov (365 days in 1987-1988 on Mir), American Mark Vande Hei (355 days on the ISS), Russian Mikhail Kornienko and American Scott Kelly (340.4 days in 2015-2016 on the ISS), and American Christina Koch (328 days in 2019-20 on the ISS). [18] [19] [40]
In Jan. 2022, Space Entertainment Enterprise (SEE) announced plans for a film production studio and a sports arena in space. The module will be named SEE-1 and will dock on Axiom Station, which is the commercial wing of the International Space Station. SEE plans to host film and sports events, as well as content creation by Dec. 2024. [37]
In a 2018 poll, 50% of Americans said they believed space tourism would be routine for ordinary people by 2068, and 32% believed long-term habitable space colonies would be built by then. But 58% said they were definitely or probably not interested in going to space themselves. And a majority (63%) stated that NASA’s top priority should be monitoring Earth’s climate, while only 18% said sending astronauts to Mars should be the highest priority and only 13% would prioritize sending astronauts to the Moon. [20]
The most common ideas for space colonization include: settling Earth’s Moon, building on Mars, and constructing free-floating space stations.
# Ride-Sharing Apps - Pros & Cons - ProCon.org
**Argument**
Ride-hailing apps are convenient, affordable, and safe for riders and other drivers.
The technology used by ride-hailing companies increases reliability and decreases wait times for consumers, and rides can cost 20% to 30% less than a comparable taxi trip.
These apps have built-in safety features, such as displaying the license plate and car model to ensure that riders get into the correct vehicle, the ability to share the route with friends and family, GPS tracking, cash-free transactions, and driver ratings.
A full third (33%) of ride-hailing passengers who own vehicles said the main reason they use the service is to avoid driving while drunk.
Fatal alcohol-related car accidents dropped between 10% and 11.4% after the introduction of ride-hailing services and DUI (Driving Under the Influence) citations went down as much as 9.2% in some cities. Researchers estimate that if ride-hailing were fully implemented across the country, the resulting drop in DUI-related accidents could save 500 lives and $1.3 billion in American taxpayer money annually.
**Background**
The first Uber ride was on July 5, 2010 in San Francisco, CA. The app launched internationally in 2011 and reached one billion rides on Dec. 30, 2015, quickly followed by five billion on May 20, 2017 and 10 billion on June 10, 2018. [40]
On May 22, 2012, Lyft launched in San Francisco as a part of Zimride and expanded to 60 cities in 2014 and to 100 more in 2017, at which point Lyft claimed more than one million rides a day. On Nov. 13, 2017, Lyft went international, allowing the company to reach one billion rides on Sep. 18, 2018. [41]
Other ride-hailing and ride-sharing apps include Gett (which partners with Lyft in the US), Curb, Wingz, Via, Scoop, and Bridj. [42]
36% of Americans said they used ride-hailing services such as Uber or Lyft, according to a Jan. 4, 2019 Pew Research Center Survey. Use is up significantly from 2015 when just 15% had used the apps. [38]
But use varies among populations. 45% of urban residents, 51% of people who were 18 to 29, 53% of people who earned $75,000 or more per year, and 55% of people with college degrees, used the apps, compared to 19% of rural residents, 24% of people aged 50 or older, 24% of people who earn $30,000 or less per year, and 20% of people with a high school diploma or less. [38]
In 2018, 70% of Uber and Lyft trips occurred in nine big metropolitan areas: Boston, Chicago, Los Angeles, Miami, New York, Philadelphia, San Francisco, Seattle, and Washington, DC. [3]
Uber officially overtook yellow cabs in New York City in July 2017, when it reported an average of 289,000 trips per day compared to 277,000 taxi rides. More than 2.61 billion ride-hailing trips were taken in 2017, a 37% increase over the 1.90 billion trips in 2016. Ride-hailing trips were down significantly in 2020 and 2021 due to the COVID-19 pandemic. [3] [4] [39]
# US Supreme Court Packing - Pros & Cons - ProCon.org
**Argument**
Historical precedent most strongly supports a nine-judge Supreme Court.
While the US Constitution does not specify the number of Supreme Court justices, neither does it specify that justices must have law degrees or have served as judges.
However, historical precedent has set basic job requirements for the position as well as solidified the number of justices. The Supreme Court has had nine justices consistently since 1869, when Ulysses S. Grant was president.
Changing the number of justices has long been linked to political conniving: the 1866 reduction to prevent Johnson appointments, the 1801 removal of one seat by President John Adams to deny incoming President Thomas Jefferson an appointment, and the 1937 attempt by Roosevelt to get the New Deal past the Court. We should not break with over 150 years of historical precedent to play political games with the Supreme Court.
Jeff Greenfield, journalist, warns that breaking with precedent would cause trouble, stating, “if Congress pushes through a restructuring of the court on a strictly partisan vote, giving Americans a Supreme Court that looks unlike anything they grew up with, and unlike the institution we’ve had for more than 240 years, it’s hard to imagine the country as a whole would see its decisions as legitimate.”
**Background**
Court packing is increasing the number of seats on a court to change the ideological makeup of the court. [1] The US Constitution does not dictate the number of justices on the Supreme Court, but states only: “The judicial Power of the United States, shall be vested in one supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish. The Judges, both of the supreme and inferior Courts, shall hold their Offices during good Behavior, and shall, at stated Times, receive for their Services a Compensation which shall not be diminished during their Continuance in Office.” [2]
The number of justices on the Court, set at nine since the mid-19th century, has changed over the years. The court was founded in 1789 with six justices, but was reduced to five in 1801 and increased to six in 1802, followed by small changes over the subsequent 67 years. [14] [15] As explained in Encyclopaedia Britannica, “In 1807 a seventh justice was added, followed by an eighth and a ninth in 1837 and a tenth in 1863. The size of the court has sometimes been subject to political manipulation; for example, in 1866 Congress provided for the gradual reduction (through attrition) of the court to seven justices to ensure that President Andrew Johnson, whom the House of Representatives later impeached and the Senate only narrowly acquitted, could not appoint a new justice. The number of justices reached eight before Congress, after Johnson had left office, adopted new legislation (1869) setting the number at nine, where it has remained ever since.” [3]
The idea of court packing dates to 1937 when President Franklin D. Roosevelt proposed adding a new justice to the Supreme Court for every justice who refused to retire at 70 years old, up to a maximum of 15 justices. The effort is frequently framed as a battle between “an entrenched, reactionary Supreme Court, which overturned a slew of Roosevelt’s New Deal economic reforms, against a hubristic president willing to take the unprecedented step of asking Congress to appoint six new, and sympathetic, justices to the bench,” according to Cicero Institute Senior Policy Advisor Judge Glock, PhD. Roosevelt’s proposal was seen by many as a naked power grab for control of a second branch of government. Plus, as Glock points out, a then new law reducing Supreme Court pensions was preventing retirements at the very time Roosevelt was calling for them. [4] [5] [6]
The contemporary debate has been heavily influenced by events following the Feb. 13, 2016, death of conservative Associate Justice Antonin Scalia. Citing the upcoming 2016 election, Senate Majority Leader Mitch McConnell (R-KY) refused to consider President Barack Obama’s liberal Supreme Court nominee, Merrick Garland. At the time, there were 342 days remaining in Obama’s presidency, 237 days until the 2016 election, and neither the 2016 Democratic nor Republican nominee had been chosen. Because the Senate approval process was delayed until 2017, the next president, Donald Trump, was allowed to appoint a new justice (conservative Neil Gorsuch) to what many Democrats called a “stolen seat” that should have been filled by Obama. [5] [7]
The court packing debate was reinvigorated in 2019 with the appointment of conservative Associate Justice Brett Kavanaugh by President Trump after liberal-leaning swing vote Associate Justice Anthony Kennedy retired in July 2018. [1] In the wake of this appointment, South Bend, Indiana, Mayor Pete Buttigieg, then also a 2020 presidential candidate, suggested expanding the court to 15 justices in the Oct. 15, 2019, Democratic presidential debate. [8]
Then largely brushed aside as “radical,” the topic resurfaced once again upon the death of liberal stalwart Associate Justice Ruth Bader Ginsburg on Sep. 18, 2020. Liberals, and some conservatives, argued that the 2016 precedent should be followed and that Justice Ginsburg’s seat should remain empty until after the 2020 presidential election or the Jan. 2021 presidential inauguration. However, McConnell and the Republicans in control of the Senate, and thus the approval process, indicated they would move forward with a Trump nomination without delay. McConnell defended these actions by stating the President and the Senate are of the same party (which was not the case in 2016, negating—from his perspective—that incident as a precedent that needed following), and thus the country had confirmed Republican rule. [5] [7] [9]
Others argued as well that, since there was a chance that the results of the 2020 election could be challenged in the courts, and perhaps even at the Supreme Court level (due to concerns over the handling of mailed-in ballots), it was critical for an odd number of justices to sit on the Court (for an even number, such as eight, could mean a split 4-4 decision on the critical question of who would be deemed the next U.S. president, sending the country into a constitutional crisis). At the time of McConnell’s Sept. 18 announcement via Twitter, there were 124 days left in Trump’s term and 45 days until the 2020 election. Some called the impending nomination to replace Ginsburg and the 2016/2017 events a version of court packing by Republicans. [5] [7] [9]
Supreme Court nominees can be confirmed by the US Senate with a simple majority vote, with the Vice President called in to break a 50-50 tie. Amy Coney Barrett was confirmed by the Senate on Oct. 26, 2020 with a 52-48 vote to replace Justice Ginsburg, eight days before the 2020 election. [7] [24]
# Should Vaccines Be Required for Children?
**Argument**
Diseases that vaccines target have essentially disappeared.
There is no reason to vaccinate against diseases that no longer occur in the United States. The CDC reported 57 cases of diphtheria and nine related deaths in the United States between 1980 and 2016. Fewer than 64 cases and 11 deaths per year from tetanus have been reported since 1989. Polio has been declared eradicated in the United States since 1979, and there have been only 32 deaths from mumps and 42 deaths from rubella since that year.
**Background**
Vaccines have been in the news over the past year due to the COVID-19 pandemic. To date, no state has added the COVID-19 vaccine to its roster of required vaccinations. On Sep. 9, 2021, Los Angeles Unified School District, the second largest in the country, became the first district in the country to mandate the COVID-19 vaccine for students ages 12 and up, with a deadline of Jan. 10, 2022 (pushed back to fall 2022 in Dec. 2021). On Oct. 1, 2021, California Governor Gavin Newsom stated that the COVID-19 vaccine would be mandated for all schoolchildren once approved by the FDA.
Meanwhile, the Centers for Disease Control and Prevention (CDC) recommends 29 doses of nine other vaccines (plus a yearly flu shot after six months of age) for children aged zero to six. No US federal laws mandate vaccination, but all 50 states require certain vaccinations for children entering public schools. Most states offer medical and religious exemptions, and some states allow philosophical exemptions.
Proponents say that vaccination is safe and one of the greatest health developments of the 20th century. They point out that illnesses, including rubella, diphtheria, smallpox, polio, and whooping cough, are now prevented by vaccination and millions of children’s lives are saved. They contend adverse reactions to vaccines are extremely rare.
Opponents say that children’s immune systems can deal with most infections naturally, and that injecting questionable vaccine ingredients into a child may cause side effects, including seizures, paralysis, and death. They contend that numerous studies prove that vaccines may trigger problems like ADHD and diabetes. Read more background…
# Pokémon Go - Pros & Cons - ProCon.org
**Argument**
People are playing the game in inappropriate places.
In their quest to capture creatures, players are failing to respect their surroundings, spawning countless articles, such as Evan Dashevsky’s compilation, “18 Completely Inappropriate Places to Play Pokemon Go,” for PCMag. The list includes evidence that players have captured Pokémon in emergency rooms, in birthing rooms, at Auschwitz, at funerals, and on an active battlefield near Mosul, among other places.
Arlington National Cemetery released a statement saying, “Out of respect for all those interred at Arlington National Cemetery, we require the highest level of decorum from our guests and visitors. Playing games such as Pokémon Go on these hallowed grounds would not be deemed appropriate.”
The US Holocaust Memorial Museum has also asked visitors to stop catching Pokémon on site.
The 9/11 Memorial in New York City is also inundated with players. “A lot of people died here. It’s a place to reflect, not to play a game,” a visitor told TIME magazine.
**Background**
Pokémon Go had more than 21 million daily active users in the United States in its debut week in July 2016, becoming the most popular US mobile game ever. It has surpassed social media apps such as WhatsApp, Instagram, and Twitter for daily use on Android devices. [1] [2] The basic premise of the game is that players try to capture Pokémon in a kind of scavenger hunt that uses the GPS on their mobile phones while walking around in the real world. The game’s slogan is “Gotta catch ’em all.” [3]
As of July 8, 2020, Pokémon Go was still the most popular location-based game with 576.7 million unique downloads globally in the game’s first four years. The game is estimated to have earned $3.6 billion worldwide since 2016, with $445.3 million in the first half of 2020 during COVID-19 (coronavirus) lockdowns, via micro-transactions within the game. [18]
# Ride-Sharing Apps - Pros & Cons - ProCon.org
**Argument**
Ride-hailing services have a history of poor driver screening that puts passengers at risk.
While taxi drivers are subject to rigorous security screening, including fingerprint checks against the FBI database, ride-hailing drivers undergo only limited background checks. A 2016 lawsuit brought by the cities of Los Angeles and San Francisco revealed that 25 drivers with serious criminal records, including convictions for murder and kidnapping, had passed Uber’s background checks.
San Francisco District Attorney George Gascon, who sued Uber for allegedly failing to protect consumers from fraud and harm, said of the company’s security screening process, which does not include fingerprinting: “It is completely worthless.”
A Dec. 2019 report from Uber stated that, among riders and drivers, there had been 10 murders in 2017 and nine in 2018, and 2,936 sexual assaults ranging from nonconsensual touching to rape in 2017 and 3,045 in 2018. One woman wrote in an open letter from 14 victims of sexual harassment and rape by Uber drivers, “Although I immediately reported what happened to Uber, shockingly, this predator continues to drive for Uber to this day. I am 21 years old and will have to live with this the rest of my life.”
# Should Recreational Marijuana Be Legal?
**Argument**
Legalizing marijuana is opposed by major public health organizations.
Some of the public health associations that oppose legalizing marijuana for recreational use include the American Medical Association (AMA), the American Society of Addiction Medicine (ASAM), the American Academy of Child and Adolescent Psychiatry, and the American Academy of Pediatrics.
“Legalization campaigns that imply that marijuana is a benign substance present a significant challenge for educating the public about its known risks and adverse effects,” the American Academy of Pediatrics said. The ASAM “does not support the legalization of marijuana and recommends that jurisdictions that have not acted to legalize marijuana be most cautious and not adopt a policy of legalization until more can be learned.” The AMA “believes that (1) cannabis is a dangerous drug and as such is a public health concern; (2) sale of cannabis should not be legalized.”
**Background**
More than half of US adults, over 128 million people, have tried marijuana, despite it being an illegal drug under federal law. Nearly 600,000 Americans are arrested for marijuana possession annually – more than one person per minute. Public support for legalizing marijuana went from 12% in 1969 to 66% today. Recreational marijuana, also known as adult-use marijuana, was first legalized in Colorado and Washington in 2012.
Proponents of legalizing recreational marijuana say it will add billions to the economy, create hundreds of thousands of jobs, free up scarce police resources, and stop the huge racial disparities in marijuana enforcement. They contend that regulating marijuana will lower street crime, take business away from the drug cartels, and make marijuana use safer through required testing, labeling, and child-proof packaging. They say marijuana is less harmful than alcohol, and that adults should have a right to use it if they wish.
Opponents of legalizing recreational marijuana say it will increase teen use and lead to more medical emergencies including traffic deaths from driving while high. They contend that revenue from legalization falls far short of the costs in increased hospital visits, addiction treatment, environmental damage, crime, workplace accidents, and lost productivity. They say that marijuana use harms the user physically and mentally, and that its use should be strongly discouraged, not legalized. Read more background…
# Is Human Activity Primarily Responsible for Global Climate Change?
**Argument**
Rising levels of human-produced gases released into the atmosphere create a greenhouse effect that traps heat and causes global warming.
Gases released into the atmosphere trap heat and cause the planet to warm through a process called the greenhouse effect. When we burn fossil fuels to heat our homes, drive our cars, and run factories, we’re releasing emissions that cause the planet to warm.
Methane, which is increasing in the atmosphere due to agriculture and fossil fuel production, traps 84 times as much heat as CO2 over its first 20 years in the atmosphere, and is responsible for about one-fifth of global warming since 1750. Nitrous oxide, primarily released through agricultural practices, traps 300 times as much heat as CO2. Over the 20th century, as the concentrations of CO2, CH4, and N2O in the atmosphere increased due to human activity, the Earth warmed by approximately 1.4°F.
**Background**
Average surface temperatures on Earth have risen more than 2°F over the past 100 years. During this time period, atmospheric levels of greenhouse gases such as carbon dioxide (CO2) and methane (CH4) have notably increased. This site explores the debate on whether climate change is caused by humans (also known as anthropogenic climate change).
The pro side argues rising levels of atmospheric greenhouse gases are a direct result of human activities such as burning fossil fuels, and that these increases are causing significant and increasingly severe climate changes including global warming, loss of sea ice, sea level rise, stronger storms, and more droughts. They contend that immediate international action to reduce greenhouse gas emissions is necessary to prevent dire climate changes.
The con side argues human-generated greenhouse gas emissions are too small to substantially change the earth’s climate and that the planet is capable of absorbing those increases. They contend that warming over the 20th century resulted primarily from natural processes such as fluctuations in the sun’s heat and ocean currents. They say the theory of human-caused global climate change is based on questionable measurements, faulty climate models, and misleading science. Read more background…
# Dress Codes - Top 3 Pros and Cons | ProCon.org
**Argument**
Uniformly mandated dress codes promote safety.
From school chemistry labs to manufacturing jobs, some dress code requirements are obviously about safety. Many places require protective glasses, steel-toed boots, fire-resistant jackets, hard hats, or reflective vests, for example.
Other items of clothing may be restricted for less obvious safety reasons. Leggings, for example, are frequently made from synthetic, flammable materials that could react with spilled chemicals and catch fire. Similarly, skin-baring clothing may also be banned around chemicals to prevent burns.
Religious headscarves have been banned in some settings, such as prisons, because wearers could be strangled by the garments in an altercation.
Still other dress codes, such as no full-face masks (like Halloween masks) allowed in movie theaters, are intended to help prevent shootings and other violence.
Other clothing restrictions at schools and public places may seem arbitrary but are used to protect against gang activity. Colors, brands, and logos may be gang-affiliated in certain locations. As Lincoln Public Schools in Nebraska explained, “Clothing and accessories associated with gangs and hate groups have the potential to disrupt the learning environment by bringing symbols that represent fear and intimidation of others into classrooms. The identification and prohibition of this clothing help decrease the impact of gangs and hate groups in school. These rules also protect students who are unaware they are wearing clothes with a gang or hate group affiliation.”
**Background**
While the most frequent debate about dress codes may be centered around K-12 schools, dress codes impact just about everyone’s daily life. From the “no shirt, no shoes, no service” signs (which exploded in popularity in the 1960s and 70s in reaction to the rise of hippies) to COVID-19 pandemic mask mandates, employer restrictions on tattoos and hairstyles, and clothing regulations on airlines, dress codes are more prevalent than we might think. [1] [2] [3] [4] [5]
While it’s difficult to pinpoint the first dress code (humans started wearing clothes around 170,000 years ago), nearly every culture and country throughout history has had strictures, formal or informal, on what to wear and not to wear. These dress codes are common “cultural signifiers,” reflecting social beliefs and cultural values, most often those of the social class dominating the culture. Such codes have been prevalent in Islamic countries since the founding of the religion in the seventh century, and they continue to cause controversy today: are they appropriate regulations for maintaining piety, community, and public decency, or are they demeaning and oppressive, especially for Islamic women? [6] [7] [8] [9] [10]
In the West, people were arrested and imprisoned as early as 1565 in England for violating dress codes. The man in question, a servant named Richard Walweyn, was arrested for wearing “a very monsterous and outraygeous great payre of hose” (or trunk hose) and was imprisoned until he could show he owned other hose “of a decent & lawfull facyon.” Other dress codes of the time reserved expensive garments made of silk, fur, and velvet for nobility only, reinforcing how dress codes have been implemented for purposes of social distinction. Informal dress codes (such as high-fashion clothes with logos and the unofficial “Midtown Uniform” worn by men working in finance) underscore how often dress codes have been used to mark and maintain visual distinctions between classes and occupations. Other dress codes have been enacted overtly to police morality, as with the bans on bobbed hair and flapper dresses of the 1920s. Still other dress codes are intended to spur an atmosphere of inclusiveness and professionalism, or specifically to maintain safety in the workplace. [6] [7] [8] [11] [12]
|
# Should the Drinking Age Be Lowered from 21 to a Younger Age?
**Argument**
18 is the age of legal majority (adulthood) in the United States.
Americans enjoy a range of new rights, responsibilities, and freedoms when they turn 18 and become an adult in the eyes of the law.
18-year-olds may vote in local, state, and federal elections; may serve on juries; and may be charged as an adult if accused of a crime. 18-year-olds are responsible for any legally binding contracts they enter; are liable for negligence; and may be sued.
18-year-olds must register with the Selective Service if male and may be drafted into service at times of war. However, 17-year-olds may enter US military service.
18-year-olds may get married without parental consent; buy a house; and enjoy new privacy rights including the shielding of medical, academic, and financial information from parents.
However, drinking alcohol remains regulated under a legal age of license. An 18-year-old may be legally responsible for children and legally allowed to make life decisions with years of impact, yet may not legally drink a beer.
Todd Rutherford, South Carolina State Representative and Democrat House Minority Leader, who filed a bill on Nov. 10, 2021 to lower South Carolina’s MLDA to 18, stated: “This is a personal freedom issue. If you are old enough to fight for our country, if you’re old enough to vote, if you’re old enough to sign on thousands of dollars of students loans for a college education, then you are old enough to have a[n alcoholic] drink.”
**Background**
All 50 US states have set their minimum drinking age at 21, although exceptions do exist on a state-by-state basis for consumption at home, under adult supervision, for medical necessity, and other reasons.
Proponents of lowering the minimum legal drinking age (MLDA) from 21 argue that it has not stopped teen drinking, and has instead pushed underage binge drinking into private and less controlled environments, leading to more health and life-endangering behavior by teens.
Opponents of lowering the MLDA argue that teens have not yet reached an age where they can handle alcohol responsibly, and thus are more likely to harm or even kill themselves and others by drinking prior to 21. They contend that traffic fatalities decreased when the MLDA increased. Read more background…
|
# Should Adults Have the Right to Carry a Concealed Handgun?
**Argument**
The Second Amendment does not guarantee concealed carry.
The Second Amendment states: “A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.”
There is no mention of concealed guns in the Constitution or Bill of Rights.
US Supreme Court Justice Antonin Scalia wrote in the court’s majority opinion in DC v. Heller: “Like most rights, the right secured by the Second Amendment is not unlimited… the majority of the 19th-century courts to consider the question held that prohibitions on carrying concealed weapons were lawful under the Second Amendment.”
In May 2014, the US Supreme Court declined to hear Drake v. Jerejian, a case challenging New Jersey’s issuance of concealed weapons permits only to citizens who can prove a “justifiable need.”
In 2016, the US Supreme Court declined to hear an appeal to a 9th Circuit ruling that stated, “Based on the overwhelming consensus of historical sources, we conclude that the protection of the Second Amendment — whatever the scope of that protection may be — simply does not extend to the carrying of concealed firearms in public by members of the general public.”
**Background**
Carrying a concealed handgun in public is permitted in all 50 states as of 2013, when Illinois became the last state to enact concealed carry legislation. Some states require gun owners to obtain permits while others have “unrestricted carry” and do not require permits.
Proponents of concealed carry say it deters crime, keeps individuals and the public safer, is protected by the Second Amendment, and protects women and minorities who can’t always rely on the police for protection.
Opponents of concealed carry say concealed carry increases crime, increases the chances of a confrontation becoming lethal, is not protected by the Second Amendment, and that public safety should be left to professionally qualified police officers. Read more background…
|
# Should All Americans Have the Right (Be Entitled) to Health Care?
**Argument**
Providing a right to health care could worsen a doctor shortage.
The Association of American Medical Colleges (AAMC) predicts a shortfall of up to 104,900 doctors by 2030. If a right to health care were guaranteed to all, this shortage could be much worse. Doctor shortages in the United States have already led to a 30% increase in wait times for doctor’s appointments between 2014 and 2017.
**Background**
27.5 million people in the United States (8.5% of the US population) do not have health insurance. Among the 91.5% who do have health insurance, 67.3% have private insurance while 34.4% have government-provided coverage through programs such as Medicaid or Medicare. Employer-based health insurance is the most common type of coverage, applying to 55.1% of the US population. The United States is the only nation among the 37 OECD (Organization for Economic Co-operation and Development) nations that does not have universal health care either in practice or by constitutional right.
Proponents of the right to health care say that no one in one of the richest nations on earth should go without health care. They argue that a right to health care would stop medical bankruptcies, improve public health, reduce overall health care spending, help small businesses, and that health care should be an essential government service.
Opponents argue that a right to health care amounts to socialism and that it should be an individual’s responsibility, not the government’s role, to secure health care. They say that government provision of health care would decrease the quality and availability of health care, and would lead to larger government debt and deficits. Read more background…
|
# Should Social Security Be Privatized?
**Argument**
Private accounts give individuals control over their retirement decisions.
Americans are capable of making their own decisions regarding how their retirement contributions are invested.
Peter Ferrara, JD, former Director of the International Center for Law and Economics, stated that private accounts “would allow workers personal ownership and control over their retirement funds and broader freedom of choice,” and if the accounts were optional (as they were in President George W. Bush’s plan) they “would also be free to choose whether to exercise the personal account option or stay entirely in the old Social Security framework.”
**Background**
Social Security accounted for 23% ($1 trillion) of total US federal spending in 2019. Since 2010, the Social Security trust fund has been paying out more in benefits than it collects in employee taxes, and is projected to run out of money by 2035. One proposal to replace the current government-administered system is the partial privatization of Social Security, which would allow workers to manage their own retirement funds through personal investment accounts.
Proponents of privatization say that workers should have the freedom to control their own retirement investments, that private accounts will give retirees higher returns than the current system can offer, and that privatization may help to restore the system’s solvency.
Opponents of privatization say that retirees could lose their benefits in a stock market downturn, that many individuals lack the knowledge to make wise investment decisions, and that privatization does nothing to address the program’s approaching insolvency. Read more background…
|
# Kneeling during the National Anthem: Top 3 Pros and Cons | ProCon.org
**Argument**
Kneeling during the national anthem is a legal form of peaceful protest, which is a First Amendment right.
President Obama said Kaepernick was “exercising his constitutional right to make a statement. I think there’s a long history of sports figures doing so.”
The San Francisco 49ers said in a statement, “In respecting such American principles as freedom of religion and freedom of expression, we recognize the right of an individual to choose and participate, or not, in our celebration of the national anthem.”
A letter signed by 35 US veterans stated that “Far from disrespecting our troops, there is no finer form of appreciation for our sacrifice than for Americans to enthusiastically exercise their freedom of speech.”
**Background**
The debate about kneeling or sitting in protest during the national anthem was ignited by Colin Kaepernick in 2016 and escalated to become a nationally divisive issue.
San Francisco 49ers quarterback Colin Kaepernick first refused to stand during “The Star-Spangled Banner” on Aug. 26, 2016 to protest racial injustice and police brutality in the United States. Since that time, many other professional football players, high school athletes, and professional athletes in other sports have refused to stand for the national anthem. These protests have generated controversy and sparked a public conversation about the protesters’ messages and how they’ve chosen to deliver them. [7] [8] [9]
The 2017 NFL pre-season began with black players from the Seattle Seahawks, Oakland Raiders, and Philadelphia Eagles kneeling or sitting during the anthem with support of white teammates. On Aug. 21, 2017, twelve Cleveland Browns players knelt in a prayer circle during the national anthem with at least four other players standing with hands on the kneeling players’ shoulders in solidarity, the largest group of players to take a knee during the anthem to date. [20] [21]
Jabrill Peppers, a rookie safety for the Browns, said of the protest, “There’s a lot of racial and social injustices in the world that are going on right now. We just decided to take a knee and pray for the people who have been affected and just pray for the world in general… We were not trying to disrespect the flag or be a distraction to the team, but as men we thought we had the right to stand up for what we believed in, and we demonstrated that.” [21]
Seth DeValve, a tight end for the Browns and the first white NFL player to kneel for the anthem, stated, “The United States is the greatest country in the world. And it is because it provides opportunities to its citizens that no other country does. The issue is that it doesn’t provide equal opportunity to everybody, and I wanted to support my African-American teammates today who wanted to take a knee. We wanted to draw attention to the fact that there’s things in this country that still need to change.” [20]
However, some Cleveland Browns fans expressed their dissatisfaction on the team’s Facebook page. One commenter posted, “Pray before or pray after. Taking a knee during the National Anthem these days screams disrespect for our Flag, Our Country and our troops. My son and the entire armed forces deserve better than that.” [22]
On Friday, Sep. 22, 2017, President Donald Trump stated his opposition to NFL players kneeling during the anthem: “Wouldn’t you love to see one of these NFL owners, when somebody disrespects our flag, to say ‘Get that son of a bitch off the field right now. Out! He’s fired. He’s fired!’” The statement set off a firestorm on both sides of the debate. Roger Goodell, NFL Commissioner, said of Trump’s comments, “Divisive comments like these demonstrate an unfortunate lack of respect for the NFL, our great game and all of our players, and a failure to understand the overwhelming force for good our clubs and players represent in our communities.” [23]
The controversy continued over the weekend as the President continued to tweet about the issue and others contributed opinions for and against kneeling during the anthem. On Sunday, Sep. 24, in London before the first NFL game played after Trump’s comments, at least two dozen Baltimore Ravens and Jacksonville Jaguars players knelt during the American national anthem, while other players, coaches, and staff locked arms, including Shad Khan, who is the only Pakistani-American Muslim NFL team owner. Throughout the day, some players, coaches, owners, and other staff kneeled or linked arms from every team except the Carolina Panthers. The Pittsburgh Steelers chose to remain in the locker room during the anthem, though offensive tackle and Army Ranger veteran Alejandro Villanueva stood at the entrance to the field alone, for which he has since apologized. Both the Seattle Seahawks and Tennessee Titans teams stayed in their locker rooms before their game, leaving the field mostly empty during the anthem. The Seahawks stated, “As a team, we have decided we will not participate in the national anthem. We will not stand for the injustice that has plagued people of color in this country. Out of love for our country and in honor of the sacrifices made on our behalf, we unite to oppose those that would deny our most basic freedoms.” [24] [25] [27]
The controversy jumped to other sports as every player on WNBA’s Indiana Fever knelt on Friday, Sep. 22 (though WNBA players had been kneeling for months); Oakland A’s catcher Bruce Maxwell kneeled on Saturday becoming the first MLB player to do so; and Joel Ward, of the NHL’s San Jose Sharks, said he would not rule out kneeling. USA soccer’s Megan Rapinoe knelt during the anthem in 2016, prompting the US Soccer Federation to issue Policy 604-1, ordering all players to stand during the anthem. [28] [29] [30] [31] [35]
The country was still debating the issue well into the week, with Trump tweeting throughout, including on Sep. 26: “The NFL has all sort of rules and regulations. The only way out for them is to set a rule that you can’t kneel during our National Anthem!” [26]
On May 23, 2018, the NFL announced that all 32 team owners agreed that all players and staff on the field shall “stand and show respect for the flag and the Anthem” or face “appropriate discipline.” However, all players will no longer be required to be on the field during the anthem and may wait off field or in the locker room. The new rules were adopted without input from the players’ union. On July 20, 2018, the NFL and the NFL Players Association (NFLPA) issued a joint statement putting the anthem policy on hold until the two organizations come to an agreement. [32] [33] [34]
During the nationwide Black Lives Matter protests following the death of George Floyd, official league positions on kneeling began to change. On June 5, 2020, NFL Commissioner Roger Goodell stated, “We, the National Football League, condemn racism and the systematic oppression of black people. We, the National Football League, admit we were wrong for not listening to NFL players earlier and encourage all players to speak out and peacefully protest.” [39]
Before the June 7, 2020 race, NASCAR lifted the guidelines that all team members must stand during the anthem, allowing NASCAR official and Army veteran Kirk Price to kneel during the anthem. [40]
On June 10, 2020, the US Soccer Federation rescinded the league’s requirement that players stand during the anthem amid the Black Lives Matter protests following the death of George Floyd. The US Soccer Federation stated, “It has become clear that this policy was wrong and detracted from the important message of Black Lives Matter.” [35]
In the wake of the 2020 killing of George Floyd and the protests that followed, 52% of Americans stated it was “OK for NFL players to kneel during the National Anthem to protest the police killing of African Americans.” [41]
The debate largely quieted after the summer of 2020, with a brief resurgence over athletes displaying political gestures on Olympic podiums in Tokyo in 2021 and Beijing in 2022.
For more on the National Anthem, see: “History of the National Anthem: Is ‘The Star-Spangled Banner’ Racist?”
|
# Bottled Water Bans - Pros & Cons - ProCon.org
**Argument**
Banning bottled water restricts consumers' access to a product they want, and negatively affects small businesses.
A survey by Harris Poll for the International Bottled Water Association found that 93% of Americans think “bottled water should be available wherever drinks are sold,” with 31% saying that they only, or mostly only, drink bottled water.
Research by Kantar Worldpanel found that “40% of all water servings come in the form of bottled water.”
As one blogger said, “everyone tells me that I’m wasting away money and harming the environment, but if it weren’t for bottled water I honestly wouldn’t drink any water at all… My personal choice is just not tap. I don’t like it.”
Daniel Kenn, owner of Sudbury Coffee Works in Sudbury, MA, where a plastic water bottle ban was enacted, said, “people want water, it’s probably the biggest money maker in that cooler… almost every other town still allows plastic water bottle sales, which will put Sudbury Coffee Works at a competitive disadvantage when the ban takes effect.”
**Background**
Americans consumed 14.4 billion gallons of bottled water in 2019, up 3.6% from 2018, in what has been a steadily increasing trend since 2010. In 2016, bottled water outsold soda for the first time and has continued to do so every year since, making it the number one packaged beverage in the United States. 2020 revenue for bottled water was $61.326 million by June 15, and the overall market is expected to grow to $505.19 billion by 2028. [50] [51] [52]
Globally, about 20,000 plastic bottles were bought every second in 2017, the majority of which contained drinking water. More than half of those bottles were not turned in for recycling, and of those recycled, only 7% were turned into new bottles. [49]
In 2013, Concord, MA, became the first US city to ban single-serve plastic water bottles, citing environmental and waste concerns. Since then, many cities, colleges, entertainment venues, and national parks have followed suit, including San Francisco, the University of Vermont, the Detroit Zoo, and the Grand Canyon National Park. [17] [26] [44]
|
# Ride-Sharing Apps - Pros & Cons - ProCon.org
**Argument**
Ride-hailing services increase traffic congestion, emissions, and total vehicle miles traveled.
Ride-hailing adds a total of 5.7 billion miles of driving each year in the nine metropolitan areas (Boston, Chicago, Los Angeles, Miami, New York, Philadelphia, San Francisco, Seattle and DC) that account for 70% of such trips in the US. At least 40% of the time, drivers are traveling without passengers in the car, adding more miles and vehicle emissions that wouldn’t exist without ride-hailing. As many as 60% of riders would have used public transit, walked, biked, or not taken a trip at all if ride-hailing weren’t an option. That means that nearly two-thirds of ride-hailing trips added additional cars to the roads.
Studies show that ride-hailing makes traffic worse during already congested rush hours, both because of the extra cars on the road and because drivers look at their phones more often for passenger pickups and directions. Researchers found that ride-hailing contributes to a net increase in greenhouse gas emissions.
**Background**
The first Uber ride was on July 5, 2010 in San Francisco, CA. The app launched internationally in 2011 and reached one billion rides on Dec. 30, 2015, quickly followed by five billion on May 20, 2017 and 10 billion on June 10, 2018. [40]
On May 22, 2012, Lyft launched in San Francisco as a part of Zimride and expanded to 60 cities in 2014 and to 100 more in 2017, at which point Lyft claimed more than one million rides a day. On Nov. 13, 2017, Lyft went international, allowing the company to reach one billion rides on Sep. 18, 2018. [41]
Other ride-hailing and ride-sharing apps include Gett (which partners with Lyft in the US), Curb, Wingz, Via, Scoop, and Bridj. [42]
36% of Americans said they used ride-hailing services such as Uber or Lyft, according to a Jan. 4, 2019 Pew Research Center Survey. Use is up significantly from 2015 when just 15% had used the apps. [38]
But use varies among populations. 45% of urban residents, 51% of people who were 18 to 29, 53% of people who earned $75,000 or more per year, and 55% of people with college degrees, used the apps, compared to 19% of rural residents, 24% of people aged 50 or older, 24% of people who earn $30,000 or less per year, and 20% of people with a high school diploma or less. [38]
In 2018, 70% of Uber and Lyft trips occurred in nine big metropolitan areas: Boston, Chicago, Los Angeles, Miami, New York, Philadelphia, San Francisco, Seattle, and Washington, DC. [3]
Uber officially overtook yellow cabs in New York City in July 2017, when it reported an average of 289,000 trips per day compared to 277,000 taxi rides. More than 2.61 billion ride-hailing trips were taken in 2017, a 37% increase over the 1.90 billion trips in 2016. Ride-hailing trips were down significantly in 2020 and 2021 due to the COVID-19 pandemic. [3] [4] [39]
|
# Should People Become Vegetarian?
**Argument**
It is cruel and unethical to kill animals for food when vegetarian options are available, especially because raising animals in confinement for slaughter is cruel, and many animals in the United States are not slaughtered humanely.
Animals are sentient beings that have emotions and social connections. Scientific studies show that cattle, pigs, chickens, and all warm-blooded animals can experience stress, pain, and fear. In 2017, animals slaughtered in the United States included 33.7 million cows, 9.2 million chickens, 124.5 million pigs, and 2.4 million sheep. These animals should not have to die painfully and fearfully to satisfy an unnecessary dietary preference.
About 50% of meat produced in the United States came from confined animal feeding operations (CAFOs) in 2008, where mistreated animals live in filthy, overcrowded spaces with little or no access to pasture, natural light, or clean air. In CAFOs, pigs have their tails cut short, chickens have their toenails, spurs, and beaks clipped, and cows have their horns removed and tails docked with no painkillers. Pregnant pigs are kept in metal gestation crates barely bigger than the pigs themselves. Baby cows raised for veal are tied up and confined in tiny stalls their entire short lives (3-18 weeks).
The Humane Methods of Slaughter Act (HMSA) mandates that livestock be stunned unconscious before slaughter to minimize suffering. However, birds such as chickens and turkeys are exempted from the HMSA, and a 2010 report by the US Government Accountability Office (GAO) found that the USDA was not “taking consistent actions to enforce the HMSA.”
**Background**
Americans eat an average of 58 pounds of beef, 96 pounds of chicken, and 52 pounds of pork, per person, per year, according to the United States Department of Agriculture (USDA). Vegetarians, about 5% of the US adult population, do not eat meat (including poultry and seafood). The percentage of Americans who identify as vegetarian has remained steady for two decades. 11% of those who identify as liberal follow a vegetarian diet, compared to 2% of conservatives.
Many proponents of vegetarianism say that eating meat harms health, wastes resources, and creates pollution. They often argue that killing animals for food is cruel and unethical since non-animal food sources are plentiful.
Many opponents of a vegetarian diet say that meat consumption is healthful and humane, and that producing vegetables causes many of the same environmental problems as producing meat. They also argue that humans have been eating and enjoying meat for 2.3 million years. Read more background…
|
# Is Binge Watching Bad for You? Top 3 Pros and Cons
**Argument**
Binge-watching makes a show more fulfilling.
While binge-watching, the viewer can feel the pleasure of full immersion (aka “the zone”), which is a great feeling similar to staying up all night to finish a book or project. Shows made for binge-watching, such as Orange Is the New Black and Stranger Things, are often more sophisticated and have multiple intricate storylines, complex relationships, and multi-dimensional characters.
Watching several episodes at once tends to make the story easier to follow and more enjoyable than a single episode. That’s a big reason why the show You went unnoticed while airing on Lifetime but became a sensation once available to binge on Netflix.
**Background**
The first usage of the term “binge-watch” dates back to 2003, but the concept of watching multiple episodes of a show in one sitting gained popularity around 2012. Netflix’s 2013 decision to release all 13 episodes of the first season of House of Cards at one time, instead of posting an episode per week, marked a new era of binge-watching streaming content. In 2015, “binge-watch” was declared the word of the year by Collins English Dictionary, which said use of the term had increased 200% in the prior year. [1] [2] [3]
73% of Americans admit to binge-watching, with the average binge lasting three hours and eight minutes. 90% of millennials and 87% of Gen Z stated they binge-watch, and 40% of those age groups binge-watch an average of six episodes of television in one sitting. [4] [5]
The coronavirus pandemic led to a sharp increase in binge-viewing: HBO, for example, saw a 65% jump in subscribers watching three or more episodes at once starting on Mar. 14, 2020, around the time when many states implemented stay-at-home measures to slow the spread of COVID-19. [28]
A 2021 Sykes survey found 38% of respondents streamed three or more hours of content on weekdays, and 48% did so on weekends. However, a Nielsen study found adults watched four or more hours of live and streaming TV a day, indicating individuals may be underestimating their TV consumption. [31]
|
# Is a College Education Worth It?
**Argument**
Many recent college graduates are un- or underemployed.
The unemployment rate for recent college graduates (4.0%) exceeded the average for all workers, including those without a degree (3.6%) in 2019. The underemployment rate was 34% for all college graduates and 41.1% for recent grads. The underemployment (insufficient work) rate for college graduates in 2015 was 6.2% overall: 5.2% for white graduates, 8.4% for Hispanic graduates, and 9.7% for black graduates. According to the Federal Reserve Bank of New York, 44% of recent college graduates were underemployed in 2012.
**Background**
People who argue that college is worth it contend that college graduates have higher employment rates, bigger salaries, and more work benefits than high school graduates. They say college graduates also have better interpersonal skills, live longer, have healthier children, and have proven their ability to achieve a major milestone.
People who argue that college is not worth it contend that the debt from college loans is too high and delays graduates from saving for retirement, buying a house, or getting married. They say many successful people never graduated from college and that many jobs, especially trades jobs, do not require college degrees. Read more background…
|
# Should Animals Be Used for Scientific or Commercial Testing?
**Argument**
Animals themselves benefit from the results of animal testing.
Vaccines tested on animals have saved millions of animals that would otherwise have died from rabies, distemper, feline leukemia, infectious hepatitis virus, tetanus, anthrax, and canine parvovirus. Treatments for animals developed using animal testing also include pacemakers for heart disease and remedies for glaucoma and hip dysplasia.
Animal testing has been instrumental in saving endangered species from extinction, including the black-footed ferret, the California condor, and the tamarins of Brazil. The American Veterinary Medical Association (AVMA) endorses animal testing to develop safe drugs, vaccines, and medical devices.
**Background**
An estimated 26 million animals are used every year in the United States for scientific and commercial testing. Animals are used to develop medical treatments, determine the toxicity of medications, check the safety of products destined for human use, and other biomedical, commercial, and health care uses. Research on living animals has been practiced since at least 500 BC.
Proponents of animal testing say that it has enabled the development of many life-saving treatments for both humans and animals, that there is no alternative method for researching a complete living organism, and that strict regulations prevent the mistreatment of animals in laboratories.
Opponents of animal testing say that it is cruel and inhumane to experiment on animals, that alternative methods available to researchers can replace animal testing, and that animals are so different from human beings that research on animals often yields irrelevant results. Read more background…
|
# Private Prisons - Pros & Cons - ProCon.org
**Argument**
All prisons—not just privately operated ones—should be abolished.
Author Rachel Kushner explained, “Ninety-two percent of people locked inside American prisons are held in publicly run, publicly funded facilities, and 99 percent of those in jail are in public jails. Every private prison could close tomorrow, and not a single person would go home. But the ideas that private prisons are the culprit, and that profit is the motive behind all prisons, have a firm grip on the popular imagination.”
Following that logic, Holly Genovese, PhD student in American Studies at the University of Texas at Austin, argued, “Anyone who examines privately owned US prisons has to come to the conclusion that they are abhorrent and must be eliminated. But they can also be low-hanging fruit used by opportunistic Democrats to ignore the much larger problem of — and solutions to — mass incarceration… Private prisons should be abolished. But if the problem is the profit — institutions unjustly benefiting from the labor of incarcerated people — the fight against private prisons is only a beginning. Political figures and others serious about fighting injustice must engage with the profit motives of federally and state-funded prisons as well, and seriously consider the abolition of all prisons — as they are all for profit.”
As Woods Ervin, a prison abolitionist with Critical Resistance, explained, “we have to think about the rate at which the prison-industrial complex is able to actually address rape and murder. We’ve spent astronomical amounts of our budgets at the municipal level, at the federal level, on policing and caging people. And yet I don’t think that people feel any safer from the threat of sexual assault or the threat of murder. What is the prison-industrial complex doing to actually solve those problems in our society?” Abolitionists instead focus on community-level issues to prevent the concerns that lead to incarceration in the first place. Eliminating private prisons still leaves the problems of mass incarceration and public prisons.
**Background**
Prison privatization generally operates in one of three ways: [1]
1. Private companies provide services to a government-owned and managed prison, such as building maintenance, food supplies, or vocational training;
2. Private companies manage government-owned facilities; or
3. Private companies own and operate the prisons and charge the government to house inmates.
In the United States, private prisons have their roots in slavery. Some privately owned prisons held enslaved people while the slave trade continued after the importation of slaves was banned in 1807. Recaptured runaways were also imprisoned in private facilities as were black people who were born free and then illegally captured to be sold into slavery. Many plantations were turned into private prisons from the Civil War forward; for example, the Angola Plantation became the Louisiana State Penitentiary (nicknamed “Angola” for the African homeland of many of the slaves who originally worked on the plantation), the largest maximum-security prison in the country. In 2000, the Vann Plantation in North Carolina was opened as the private, minimal security Rivers Correctional Facility (operated by GEO Group), though the facility’s federal contract expired in Mar. 2021. [2] [3] [4] [5] [6]
Inmates in private prisons in the 19th century were commonly used for labor via “convict leasing,” in which the prison owners were paid for the labor of the inmates. According to the Innocence Project, Jim Crow laws after the Civil War ensured the newly freed black population was imprisoned at high rates for petty or nonexistent crimes in order to maintain the labor force needed for picking cotton and other labor previously performed by enslaved people. However, the practice of convict leasing extended beyond the American South. California, for example, awarded private management contracts for San Quentin State Prison that granted the winning bidder leasing rights to the convicts until 1860. Convict leasing faded in the early 20th century as states banned the practice and shifted to forced farming and other labor on the land of the prisons themselves. [2] [3] [7] [8] [9] [10]
What Americans think of now as a private prison is an institution owned by a conglomerate such as CoreCivic, GEO Group, LaSalle Corrections, or Management and Training Corporation. This sort of private prison began operations in 1984 in Tennessee and 1985 in Texas in response to the rapidly rising prison population during the war on drugs. State-run facilities were overpopulated with increasing numbers of people being convicted for drug offenses. Corrections Corporation of America (now CoreCivic) first promised to run larger prisons more cheaply to solve the problems. In 1987, Wackenhut Corrections Corporation (now GEO Group) won a federal contract to run an immigration detention center, expanding the focus of private prisons. [11] [12] [13]
In 2016, the federal government announced it would phase out the use of private prisons, a policy rescinded by Attorney General Jeff Sessions under the Trump administration but reinstated under President Biden. However, Biden’s order did not limit the use of private facilities for federal immigrant detention. Twenty US states did not use private prisons as of 2019. [11] [12] [14]
In 2019, 115,428 people (8% of the prison population) were incarcerated in state or federal private prisons; 81% of the detained immigrant population (40,634 people) was held in private facilities. The federal government held the most (27,409) people in private prisons in 2019, followed by Texas (12,516), and Florida (11,915). However, Montana held the largest percentage of the state’s inmates in private prisons (47%). [11]
According to the Sentencing Project, “[p]rivate prisons incarcerated 99,754 American residents in 2020, representing 8% of the total state and federal prison population. Since 2000, the number of people housed in private prisons has increased 14%.” [37]
On Jan. 20, 2022, the federal Bureau of Prisons reported 153,855 total federal inmates, 6,336 of whom were held in private facilities, or about 4% of people in federal custody. [36]
|
# Should People Become Vegetarian?
**Argument**
A diet that includes meat does not raise risk of disease.
Saturated fats from meat are not to blame for modern diseases like heart disease, cancer, and obesity. Chemically processed and hydrogenated vegetable oils like corn and canola cause these conditions because they contain harmful free radicals and trans fats formed during chemical processing.
Lean red meat, eaten in moderation, can be a healthful part of a balanced diet. According to researchers at the British Nutrition Foundation, “there is no evidence” that moderate consumption of unprocessed lean red meat has any negative health effects.
However, charring meat during cooking can create over 20 chemicals linked to cancer, and the World Cancer Research Fund finds that processed meats like bacon, sausage, and salami, which contain preservatives such as nitrates, are strongly associated with bowel cancer and should be avoided. They emphasize that lean, unprocessed red meat can be a valuable source of nutrients and do not recommend that people remove red meat from their diets entirely, but rather, that they limit consumption to 11 ounces per week or less.
**Background**
Americans eat an average of 58 pounds of beef, 96 pounds of chicken, and 52 pounds of pork, per person, per year, according to the United States Department of Agriculture (USDA). Vegetarians, about 5% of the US adult population, do not eat meat (including poultry and seafood). The percentage of Americans who identify as vegetarian has remained steady for two decades. 11% of those who identify as liberal follow a vegetarian diet, compared to 2% of conservatives.
Many proponents of vegetarianism say that eating meat harms health, wastes resources, and creates pollution. They often argue that killing animals for food is cruel and unethical since non-animal food sources are plentiful.
Many opponents of a vegetarian diet say that meat consumption is healthful and humane, and that producing vegetables causes many of the same environmental problems as producing meat. They also argue that humans have been eating and enjoying meat for 2.3 million years. Read more background…
|
# American Socialism - Pros & Cons - ProCon.org
**Argument**
The job of the US government is to enable free enterprise and then get out of the way of individual ingenuity and hard work. The government should promote equal opportunity, not promise equal results.
As the Dallas Federal Reserve explained, “Free enterprise means men and women have the opportunity to own economic resources, such as land, minerals, manufacturing plants and computers, and to use those tools to create goods and services for sale… Others have no intention of starting a business. If they choose, they can offer their labor, another economic resource, for wages and salaries… By allowing people to pursue their own interests, a free enterprise system can produce phenomenal results. Running shoes, walking shoes, mint toothpaste, gel toothpaste, skim milk, chocolate milk, cellular phones and BlackBerrys are just a few of the millions of products created as a result of economic freedom.”
When the government allows individuals to pursue what is best for them without interference, individuals prosper, the country enjoys “an average per capita GDP roughly six times greater than those with lower levels of economic freedom, as well as higher life expectancy, political and civil liberties, gender equality, and happiness.” In summary, “the diversity of economic freedom… helps us thrive both as individuals and as a society.”
**Background**
Socialism in the United States is an increasingly popular topic. Some argue that the country should actively move toward socialism to spur social progress and greater equity, while others demand that the country prevent this by any and all means necessary. This subject is often brought up in connection with universal healthcare and free college education, ideas that are socialist by definition, or as a general warning against leftist politics.
While some politicians openly promote socialism or socialist policies (Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez, for example), others reject the socialist label (now Vice President Kamala Harris said she was “not a Democratic Socialist” during the 2020 presidential campaign) or invoke it as a dirty word that is contrary to American ideals (in the 2019 State of the Union, President Trump stated the US would “never be a socialist country” because “We are born free, and we will stay free”). [1] [2]
To consider whether the United States should adopt socialism or at least more socialist policies, the relevant terms must first be defined.
Socialism is an economic and social policy in which the public owns industry and products, rather than private individuals or corporations. Under socialism, the government controls most means of production and natural resources, among other industries, and everyone in the country is entitled to an equitable share according to their contribution to society. Individual private ownership is encouraged. [3]
Politically, socialist countries tend to be multi-party with democratic elections. Currently no country operates under a 100% socialist policy. Denmark, Iceland, Finland, Norway, and Sweden, while heavily socialist, all combine socialism with capitalism. [4] [5]
Capitalism, the United States’ current economic model, is a policy in which private individuals and corporations control production that is guided through markets, not by the government. Capitalism is also called a free market economy or free enterprise economy. Capitalism functions on private property, profit motive, and market competition. [6]
Politically, capitalist countries range from democracies to monarchies to oligarchies to despotisms. Most western countries are capitalist, including the United States, Canada, the United Kingdom, Ireland, Switzerland, Australia, and New Zealand. Also capitalist are Hong Kong, Singapore, Taiwan, and the United Arab Emirates. However, many of these countries, including the United States, have implemented socialist policies within their capitalist systems, such as social security, minimum wages, and energy subsidies. [7] [8]
Communism is frequently used as a synonym for socialism and the exact differences between the two are heavily debated. One difference is that communism provides everyone in the country with an equal share, rather than the equitable share promised by socialism. Communism is commonly summarized by the Karl Marx slogan, “From each according to his ability, to each according to his needs,” and was believed by Marx to be the step beyond socialism. Individual private ownership is illegal in most communist countries. [4] [9]
Politically, communist countries tend to be led by one communist party, and elections are only within that party. Frequently, the military has significant political power. Historically, a secret police has also shared that power, as in the former Soviet Union, the largest communist country in history. Civil liberties (such as freedom of the press, speech, and assembly) are publicly embraced, but frequently limited in practice, often by force. Countries that are currently communist include China, Cuba, Laos, North Korea, and Vietnam. Worth noting is that some of these countries, including the Democratic People’s Republic of Korea (North Korea) and the Socialist Republic of Vietnam, label themselves as democratic or socialist though they meet the definition of communism and are run by communist parties. Additionally, some communist countries, such as China and Vietnam, operate with partial free market economies, which is a cornerstone of capitalism, and some socialist policies. [4] [5] [10] [11] [12] [13]
Given those definitions, should the United States adopt more socialist policies such as free college, medicare-for-all, and the Green New Deal?
|
# Should Animals Be Used for Scientific or Commercial Testing?
**Argument**
The vast majority of biologists and several of the largest biomedical and health organizations in the United States endorse animal testing.
A poll of 3,748 scientists by the Pew Research Center found that 89% favored the use of animals in scientific research. The American Cancer Society, American Physiological Society, National Association for Biomedical Research, American Heart Association, and the Society of Toxicology all advocate the use of animals in scientific research.
**Background**
An estimated 26 million animals are used every year in the United States for scientific and commercial testing. Animals are used to develop medical treatments, determine the toxicity of medications, check the safety of products destined for human use, and other biomedical, commercial, and health care uses. Research on living animals has been practiced since at least 500 BC.
Proponents of animal testing say that it has enabled the development of many life-saving treatments for both humans and animals, that there is no alternative method for researching a complete living organism, and that strict regulations prevent the mistreatment of animals in laboratories.
Opponents of animal testing say that it is cruel and inhumane to experiment on animals, that alternative methods available to researchers can replace animal testing, and that animals are so different from human beings that research on animals often yields irrelevant results. Read more background…
|
# Should More Gun Control Laws Be Enacted?
**Argument**
Armed civilians are unlikely to stop crimes and are more likely to make dangerous situations, including mass shootings, more deadly.
None of the 62 mass shootings between 1982 and 2012 was stopped by an armed civilian. Gun rights activists regularly state that a 2002 mass shooting at the Appalachian School of Law in Virginia was stopped by armed students, but those students were current and former law enforcement officers and the killer was out of bullets when subdued. Other mass shootings often held up as examples of armed citizens being able to stop mass shootings involved law enforcement or military personnel and/or the shooter had stopped shooting before being subdued, such as a 1997 high school shooting in Pearl, MS; a 1998 middle school dance shooting in Edinboro, PA; a 2007 church shooting in Colorado Springs, CO; and a 2008 bar shooting in Winnemucca, NV. Jeffrey Voccola, Assistant Professor of Writing at Kutztown University, notes, “The average gun owner, no matter how responsible, is not trained in law enforcement or on how to handle life-threatening situations, so in most cases, if a threat occurs, increasing the number of guns only creates a more volatile and dangerous situation.”
**Background**
The United States has 120.5 guns per 100 people, or about 393,347,000 guns, which is the highest total and per capita number in the world. 22% of Americans own one or more guns (35% of men and 12% of women). America’s pervasive gun culture stems in part from its colonial history, revolutionary roots, frontier expansion, and the Second Amendment, which states: “A well regulated militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.”
Proponents of more gun control laws state that the Second Amendment was intended for militias; that gun violence would be reduced; that gun restrictions have always existed; and that a majority of Americans, including gun owners, support new gun restrictions.
Opponents say that the Second Amendment protects an individual’s right to own guns; that guns are needed for self-defense from threats ranging from local criminals to foreign invaders; and that gun ownership deters crime rather than causes more crime. Read more background…
|
# Should the United States Continue Its Use of Drone Strikes Abroad?
**Argument**
Drones limit the scope, scale, and casualties of military action, keeping the US military and civilians in other countries safer.
Invading Pakistan, Yemen, or Somalia with boots on the ground to capture relatively small terrorist groups would lead to expensive conflict, responsibility for destabilizing those governments, large numbers of civilian casualties, empowerment of enemies who view the United States as an occupying imperialist power, and US military deaths, among other consequences. America’s attempt to destroy al Qaeda and the Taliban in Afghanistan by invading and occupying the country resulted in a war that has dragged on for over 12 years. Using drone strikes against terrorists abroad allows the United States to achieve its goals at a fraction of the cost of an invasion in money, manpower, and lives.
Drones are launched from bases in allied countries and are operated remotely by pilots in the United States, minimizing the risk of injury and death that would occur if ground soldiers and airplane pilots were used instead. Al Qaeda, the Taliban, and their affiliates often operate in distant and environmentally unforgiving locations where it would be extremely dangerous for the United States to deploy teams of special forces to track and capture terrorists. Such pursuits may pose serious risks to US troops including firefights with surrounding tribal communities, anti-aircraft shelling, land mines, improvised explosive devices (IEDs), suicide bombers, snipers, dangerous weather conditions, harsh environments, etc. [10] Further, drone pilots suffer less than traditional pilots because they do not have to be directly present on the battlefield, can live a normal civilian life in the United States, and do not risk death or serious injury. Only 4% of active-duty drone pilots are at “high risk for PTSD” compared to the 12-17% of soldiers returning from Iraq and Afghanistan.
Drones also cause fewer civilian casualties than “boots on the ground” missions. Between 1,193 and 2,654 civilians have died in drone strikes in Afghanistan, Pakistan, Somalia, and Yemen, or between 7% and 15% of those killed by drones. By contrast, about 335,000 total civilians have been killed violently in the War on Terror in Afghanistan, Iraq, Pakistan, Syria, and Yemen. [139] The traditional weapons of war – bombs, shells, mines, mortars – cause more collateral (unintended) damage to people and property than drones, whose accuracy and technical precision mostly limit casualties to combatants and intended targets. Civilian deaths in World War II are estimated at 40 to 67% of total war deaths. In the Korean, Vietnam, and Balkan Wars, civilian deaths accounted for approximately 70%, 31%, and 45% of deaths respectively.
Former Secretary of Defense Robert Gates, PhD, stated, “You can far more easily limit collateral damage with a drone than you can with a bomb, even a precision-guided munition, off an airplane.” Former CIA Director Leon Panetta, JD, concurred, “I think this is one of the most precise weapons that we have in our arsenal.” And former State Department Legal Advisor Harold Hongju Koh, JD, agreed that drones “have helped to make our targeting even more precise.”
**Background**
Unmanned aerial vehicles (UAVs), otherwise known as drones, are remotely-controlled aircraft which may be armed with missiles and bombs for attack missions. Since the World Trade Center attacks on Sep. 11, 2001 and the subsequent “War on Terror,” the United States has used thousands of drones to kill suspected terrorists in Pakistan, Afghanistan, Yemen, Somalia, and other countries.
Proponents state that drone strikes help prevent “boots on the ground” combat and make America safer, that the strikes are legal under American and international law, and that they are carried out with the support of Americans and foreign governments.
Opponents state that drone strikes kill civilians, creating more terrorists than they kill and sowing animosity in foreign countries, that the strikes are extrajudicial and illegal, and create a dangerous disconnect between the horrors of war and soldiers carrying out the strikes.
|
# Is Social Media Good for Society?
**Argument**
Social media allows people to improve their relationships and make new friends.
93% of adults on Facebook use it to connect with family members, 91% use it to connect with current friends, and 87% use it to connect with friends from the past. 72% of all teens connect with friends via social media. 81% of teens age 13 to 17 reported that social media makes them feel more connected to the people in their lives, and 68% said using it makes them feel supported in tough times. 57% of teens have made new friends online.
**Background**
Around seven out of ten Americans (72%) use social media sites such as Facebook, Instagram, Twitter, LinkedIn, and Pinterest, up from 26% in 2008. [26] [189] On social media sites, users may develop biographical profiles, communicate with friends and strangers, do research, and share thoughts, photos, music, links, and more.
Proponents of social networking sites say that the online communities promote increased interaction with friends and family; offer teachers, librarians, and students valuable access to educational support and materials; facilitate social and political change; and disseminate useful information rapidly.
Opponents of social networking say that the sites prevent face-to-face communication; waste time on frivolous activity; alter children’s brains and behavior making them more prone to ADHD; expose users to predators like pedophiles and burglars; and spread false and potentially dangerous information. Read more background…
|
# Should Teachers Get Tenure?
**Argument**
Tenure makes it costly for schools to remove a teacher with poor performance or who is guilty of wrongdoing.
It costs an average of $313,000 to fire a teacher in New York state. The New York Department of Education spent an estimated $15-20 million a year paying tenured teachers accused of incompetence and wrongdoing to report to reassignment centers (sometimes called “rubber rooms”) where they were paid to sit idly.
**Background**
Teacher tenure is the increasingly controversial form of job protection that public school teachers in 46 states receive after 1-5 years on the job. An estimated 2.3 million teachers have tenure.
Proponents of tenure argue that it protects teachers from being fired for personal or political reasons, and prevents the firing of experienced teachers to hire less expensive new teachers. They contend that since school administrators grant tenure, neither teachers nor teacher unions should be unfairly blamed for problems with the tenure system.
Opponents of tenure argue that this job protection makes the removal of poorly performing teachers so difficult and costly that most schools end up retaining their bad teachers. They contend that tenure encourages complacency among teachers who do not fear losing their jobs, and that tenure is no longer needed given current laws against job discrimination. Read more background…
|
# Should Birth Control Pills Be Available Over the Counter (OTC)?
**Argument**
OTC birth control pills would increase access for low-income and medically underserved populations.
Twenty million women live in “contraception deserts,” places with one clinic or fewer per 1,000 women who need government-funded birth control from programs such as Medicaid. In underserved communities, women could more easily find a local drug store for medication than a doctor’s office. Studies estimate that use of the Pill among sexually active low-income women would rise by 11-21% if it were available OTC.
Denicia Cadena, Policy Director for Young Women United in New Mexico, stated: “Our rural communities are most profoundly impacted by our state’s health care and provider shortages. Patients face three- to six-month wait times for any primary care and even longer for specialty care… 11 of the state’s 33 counties have no obstetrics and gynecology physicians.”
Birth control can be difficult for many women to obtain, particularly teens, immigrants, women of color, and the uninsured. The National Latina Institute of Reproductive Health stated: “over-the-counter access will greatly reduce the systemic barriers, like poverty, immigration status and language, that currently prevent Latinas from regularly accessing birth control and results in higher rates of unintended pregnancy.”
Other medically underserved communities, such as LGBTQ+ people, are likely to be uninsured (16% of all LGBTQ people making less than $45,000 per year are uninsured), more likely to face economic barriers to healthcare (29% postponed necessary medical care and 24% postponed preventative screenings due to cost), and are more likely to face discrimination in the healthcare industry, resulting in less or no reproductive healthcare.
**Background**
Of the 72.2 million American women of reproductive age, 64.9% use a contraceptive. Of those, 9.1 million (12.6% of contraceptive users) use birth control pills, which are the second most commonly used method of contraception in the United States after female sterilization (aka tubal ligation or “getting your tubes tied”). The Pill is currently available by prescription only, and a debate has emerged about whether the birth control pill should be available over-the-counter (OTC), which means the Pill would be available along with other drugs such as Tylenol and Benadryl in drug store aisles. Since 1976, more than 90 drugs have switched from prescription to OTC status, including Sudafed (1976), Advil (1984), Rogaine (1996), Prilosec (2003), and Allegra (2011). Read more background…
|
# Is Cell Phone Radiation Safe?
**Argument**
Numerous peer-reviewed studies have shown an association between cell phone use and the development of brain tumors.
In 2018, the National Toxicology Program (NTP) concluded, “high exposure to RFR (900 MHz) used by cell phones was associated with: [1] Clear evidence of an association with tumors in the hearts of male rats. The tumors were malignant schwannomas. [2] Some evidence of an association with tumors in the brains of male rats. The tumors were malignant gliomas. [and 3] Some evidence of an association with tumors in the adrenal glands of male rats. The tumors were benign, malignant, or complex combined pheochromocytoma.” The NTP indicated “clear evidence” of a link between cell phone radiation and cancer, the highest category of evidence used by the NTP.
A Feb. 2017 study concluded, “We found evidence linking mobile phone use and risk of brain tumours especially in long-term users (≥10 years). Studies with higher quality showed a trend towards high risk of brain tumour, while lower quality showed a trend towards lower risk/protection.”
Studies have also linked cell phone use to thyroid and breast cancers. And other studies have similarly concluded that there is an association between cell phone use and increased risk of developing brain and head tumors.
**Background**
|
# Should Birth Control Pills Be Available Over the Counter (OTC)?
**Argument**
OTC status for birth control pills could result in more unwanted pregnancies.
The birth control pill is not the most effective form of birth control. Among birth control methods, the Pill ranks seventh in effectiveness. Typical use of the Pill results in nine unintended pregnancies out of 100 women after one year of use and increases steadily to 61 unintended pregnancies out of 100 after ten years of typical use.
Meanwhile, typical use of copper IUDs results in eight unintended pregnancies per 100 women after ten years of typical use, female sterilization results in five, the Levonorgestrel IUD and male sterilization result in two, and hormonal implants result in just one.
Robin Marty, health writer, noted that because the more effective options “would require a doctor’s visit and the Pill would just require a trip to the store, women may be inclined to use less effective contraception for the sake of convenience.”
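The ten-year figure above is consistent with compounding the annual failure rate. A minimal sketch, assuming a constant 9% typical-use failure rate per year and independence between years (a simplifying assumption, not a claim from the original studies):

```python
def cumulative_failure(annual_rate: float, years: int) -> float:
    """Probability of at least one unintended pregnancy over `years`,
    given a constant annual failure rate and independent years."""
    return 1 - (1 - annual_rate) ** years

# Typical-use Pill failure rate of 9% per year:
print(round(cumulative_failure(0.09, 1), 2))   # 0.09
print(round(cumulative_failure(0.09, 10), 2))  # 0.61
```

Under these assumptions, a 9% annual rate compounds to roughly 61 unintended pregnancies per 100 women over ten years, matching the statistic cited above.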
**Background**
Of the 72.2 million American women of reproductive age, 64.9% use a contraceptive. Of those, 9.1 million (12.6% of contraceptive users) use birth control pills, which are the second most commonly used method of contraception in the United States after female sterilization (aka tubal ligation or “getting your tubes tied”). The Pill is currently available by prescription only, and a debate has emerged about whether the birth control pill should be available over-the-counter (OTC), which means the Pill would be available along with other drugs such as Tylenol and Benadryl in drug store aisles. Since 1976, more than 90 drugs have switched from prescription to OTC status, including Sudafed (1976), Advil (1984), Rogaine (1996), Prilosec (2003), and Allegra (2011). Read more background…
|
# Fracking Pros and Cons - Top 3 Arguments For and Against
**Argument**
Natural gas is a necessary bridge fuel to get to 100% clean energy and eliminate coal and petroleum, and fracking is the best way to extract natural gas.
Mark Little, President and Chief Executive for Suncor, explained, “The choice is not between fossil fuels and renewable energy, but rather, how do we accelerate the growth of renewables while reducing greenhouse gas emissions from the use of fossil fuels.”
Replacing coal and petroleum with natural gas obtained by fracking now allows the US to achieve short-term and immediate reductions in greenhouse gases that cause climate change while alternative energies such as solar and wind are built into viable industries.
In the 2014 State of the Union address, President Barack Obama stated, “natural gas – if extracted safely, it’s the bridge fuel that can power our economy with less of the carbon pollution that causes climate change.”
Oil and gas company BP explained, “[A]s the world works towards net zero emissions, we think natural gas will play an important role in getting us all there… Natural gas has far lower emissions than coal when burnt for power and is a much cleaner way of generating electricity. Switching from coal to gas has cut more than 500 million tonnes of CO2 from the power sector this decade alone.”
**Background**
Fracking (hydraulic fracturing) is a method of extracting natural gas from deep underground via a drilling technique. First, a vertical well is drilled and encased in steel or cement. Then, a horizontal well is drilled in the layer of rock that contains natural gas. After that, fracking fluid is pumped into the well at an extremely high pressure so that it fractures the rock in a way that allows oil and gas to flow through the cracks to the surface. [1]
Colonel Edward A. L. Roberts first developed a version of fracking in 1862. During the Civil War at the Battle of Fredericksburg, Roberts noticed how artillery blasts affected channels of water. The idea of “shooting the well” was further developed by lowering a sort of torpedo into an oil well. The torpedo was then detonated, which increased oil flow. [2]
In the 1940s, explosives were replaced by high-pressure liquids, beginning the era of hydraulic fracturing. The 21st century brought two further innovations: horizontal drilling and slick water (a mix of water, sand, and chemicals) to increase fluid flow. Spurred by increased financial investment and global oil prices, fracking picked up speed and favor. About one million oil wells were fracked between 1940 and 2014, with about one third of the wells fracked after 2000. [2] [3]
Most US states allow fracking, though four states have banned the practice as of Feb. 2021: Vermont (2012), New York (temporarily in 2014; permanently in 2020), Maryland (2017), and Washington (2019). In Apr. 2021, California banned new fracking projects as of 2024. [4] [5] [6] [7] [8] [32]
Fracking was a hot-button issue during the US presidential race of 2020, with President Trump firmly in favor of fracking and Vice President Biden expressing more concern about the practice, especially on federal lands.
|
# Should the United States Maintain Its Embargo against Cuba?
**Argument**
The embargo harms the US economy.
The US Chamber of Commerce opposes the embargo, saying that it costs the United States $1.2 billion annually in lost sales of exports.
A study by the Cuba Policy Foundation, a nonprofit founded by former US diplomats, estimated that the annual cost to the US economy could be as high as $4.84 billion in agricultural exports and related economic output. “If the embargo were lifted, the average American farmer would feel a difference in his or her life within two to three years,” the study’s author said.
A Mar. 2010 study by Texas A&M University calculated that removing the restrictions on agricultural exports and travel to Cuba could create as many as 6,000 jobs in the US.
Nine US governors released a letter on Oct. 14, 2015 urging Congress to lift the embargo, which stated: “Foreign competitors such as Canada, Brazil and the European Union are increasingly taking market share from U.S. industry [in Cuba], as these countries do not face the same restrictions on financing… Ending the embargo will create jobs here at home, especially in rural America, and will create new opportunities for U.S. agriculture.”
**Background**
Since the 1960s, the United States has imposed an embargo against Cuba, the Communist island nation 90 miles off the coast of Florida. The embargo, known among Cubans as “el bloqueo” or “the blockade,” consists of economic sanctions against Cuba and restrictions on Cuban travel and commerce for all people and companies under US jurisdiction.
Proponents of the embargo argue that Cuba has not met the US conditions for lifting the embargo, including transitioning to democracy and improving human rights. They say that backing down without getting concessions from the Castro regime will make the United States appear weak, and that only the Cuban elite would benefit from open trade.
Opponents of the Cuba embargo argue that it should be lifted because the failed policy is a Cold War relic and has clearly not achieved its goals. They say the sanctions harm the US economy and Cuban citizens, and prevent opportunities to promote change and democracy in Cuba. They say the embargo hurts international opinion of the United States. Read more background…
|
# Should Recreational Marijuana Be Legal?
**Argument**
Marijuana is less harmful than alcohol and tobacco, which are already legal.
Alcohol and tobacco are legal, yet they are known to cause cancer, heart failure, liver damage, and more. According to the CDC, six people die from alcohol poisoning every day and 88,000 people die annually due to excessive alcohol use in the United States. There are no recorded cases of death from marijuana overdose.
Three to four times as many Americans are dependent on alcohol as on marijuana. A study in the Lancet ranking the harmfulness of drugs put alcohol first as the most harmful, tobacco as sixth, and cannabis eighth. A national poll found that people view tobacco as a greater threat to health than marijuana by a margin of four to one (76% vs. 18%), and 72% of people surveyed believed that regular use of alcohol was more dangerous than marijuana use.
“In several respects, even sugar poses more of a threat to our nation’s health than pot,” said Dr. David L. Nathan, a clinical psychiatrist and president of Doctors for Cannabis Regulation.
**Background**
More than half of US adults, over 128 million people, have tried marijuana, despite it being an illegal drug under federal law. Nearly 600,000 Americans are arrested for marijuana possession annually – more than one person per minute. Public support for legalizing marijuana went from 12% in 1969 to 66% today. Recreational marijuana, also known as adult-use marijuana, was first legalized in Colorado and Washington in 2012.
Proponents of legalizing recreational marijuana say it will add billions to the economy, create hundreds of thousands of jobs, free up scarce police resources, and stop the huge racial disparities in marijuana enforcement. They contend that regulating marijuana will lower street crime, take business away from the drug cartels, and make marijuana use safer through required testing, labeling, and child-proof packaging. They say marijuana is less harmful than alcohol, and that adults should have a right to use it if they wish.
Opponents of legalizing recreational marijuana say it will increase teen use and lead to more medical emergencies including traffic deaths from driving while high. They contend that revenue from legalization falls far short of the costs in increased hospital visits, addiction treatment, environmental damage, crime, workplace accidents, and lost productivity. They say that marijuana use harms the user physically and mentally, and that its use should be strongly discouraged, not legalized. Read more background…
|
# Should Tablets Replace Textbooks in K-12 Schools?
**Argument**
Tablets lower the amount of paper teachers have to print for handouts and assignments, helping to save the environment and money.
A school with 100 teachers uses on average 250,000 pieces of paper annually. A school of 1,000 students on average spends between $3,000-4,000 a month on paper, ink, and toner, not counting printer wear and tear or technical support costs.
**Background**
Textbook publishing in the United States is an $11 billion industry, with five companies – Cengage Learning, Houghton Mifflin Harcourt, McGraw-Hill, Pearson Education, and Scholastic – capturing about 80% of this market. Tablets are an $18 billion industry, with 53% of US adults, 81% of US children aged eight to 17, and 42% of US children under eight owning a tablet. As tablets have become more prevalent, a new debate has formed over whether K-12 school districts should switch from print textbooks to digital textbooks on tablets and e-readers.
Proponents of tablets say that they are supported by most teachers and students, are much lighter than print textbooks, and improve standardized test scores. They say tablets can hold hundreds of textbooks, save the environment by lowering the amount of printing, increase student interactivity and creativity, and that digital textbooks are cheaper than print textbooks.
Opponents of tablets say that they are expensive, too distracting for students, easy to break, and costly/time-consuming to fix. They say that tablets contribute to eyestrain, headaches, and blurred vision, increase the excuses available for students not doing their homework, require costly Wi-Fi networks, and become quickly outdated as new technologies emerge. Read more background…
|
# Was Ronald Reagan a Good President?
**Argument**
Environment:
Between 1982 and 1988, Reagan signed 43 bills designating more than 10 million acres of federal wilderness areas in 27 states. This acreage accounted for nearly 10% of the National Wilderness Preservation System at the time. Reagan had signed more wilderness bills than any other president since the Wilderness Act was enacted in 1964.
**Background**
Ronald Wilson Reagan served as the 40th President of the United States from Jan. 20, 1981 to Jan. 19, 1989. He won the Nov. 4, 1980 presidential election, beating Democratic incumbent Jimmy Carter with 50.7% of the votes, and won his second term by a landslide of 58.8% of the votes.
Reagan’s proponents point to his accomplishments, including stimulating economic growth in the US, strengthening its national defense, revitalizing the Republican Party, and ending the global Cold War as evidence of his good presidency.
His opponents contend that Reagan’s poor policies, such as bloating the national defense, drastically cutting social services, and making missiles-for-hostages deals, led the country into record deficits and global embarrassment. Read more background…
|
# Do Violent Video Games Contribute to Youth Violence?
**Argument**
Violent video games are a convenient scapegoat for those who would rather not deal with the actual causes of violence in the US.
Patrick Markey, PhD, Psychology Professor at Villanova University, stated: “The general story is people who play video games right after might be a little hopped up and jerky but it doesn’t fundamentally alter who they are. It is like going to see a sad movie. It might make you cry but it doesn’t make you clinically depressed… Politicians on both sides go after video games it is this weird unifying force. It makes them look like they are doing something… They [violent video games] look scary. But research just doesn’t support that there’s a link [to violent behavior].”
Markey also explained, “Because video games are disproportionately blamed as a culprit for mass shootings committed by White perpetrators, video game ‘blaming’ can be viewed as flagging a racial issue. This is because there is a stereotypical association between racial minorities and violent crime.”
Andrew Przybylski, PhD, Associate Professor, and Senior Research Fellow and Director of Research at the Oxford Internet Institute at Oxford University, stated: “Games have only become more realistic. The players of games and violent games have only become more diverse. And they’re played all around the world now. But the only place where you see this kind of narrative still hold any water, that games and violence are related to each other, is in the United States. [And, by blaming video games for violence,] we reduce the value of the political discourse on the topic, because we’re looking for easy answers instead of facing hard truths.”
Hillary Clinton, JD, Former Secretary of State and First Lady, tweeted, “People suffer from mental illness in every other country on earth; people play video games in virtually every other country on earth. The difference is the guns.”
**Background**
Around 73% of American kids age 2-17 played video games in 2019, a 6% increase over 2018. Video games accounted for 17% of kids’ entertainment time and 11% of their entertainment spending. The global video game industry was worth $159.3 billion in 2020, a 9.3% increase from 2019.
Violent video games have been blamed for school shootings, increases in bullying, and violence towards women. Critics argue that these games desensitize players to violence, reward players for simulating violence, and teach children that violence is an acceptable way to resolve conflicts.
Video game advocates contend that a majority of the research on the topic is deeply flawed and that no causal relationship has been found between video games and social violence. They argue that violent video games may provide a safe outlet for aggressive and angry feelings and may reduce crime. Read more background…
|
# Should the Drinking Age Be Lowered from 21 to a Younger Age?
**Argument**
MLDA creates a mindset of non-compliance with the law among young adults.
Lowering MLDA from 21 to 18 would diminish the thrill of breaking the law to get a drink. Normalizing alcohol consumption as something to be done responsibly and in moderation will make drinking alcohol less of a taboo for young adults entering college and the workforce.
High non-compliance with MLDA 21 promotes general disrespect and non-compliance with other areas of US law. MLDA 21 encourages young adults to acquire and use false identification documents to procure alcohol. It would be better to have fewer fake IDs in circulation and more respect for the law.
Further, MLDA 21 enforcement is not a priority for many law enforcement agencies. Police are inclined to ignore or under-enforce MLDA 21 because of resource limitations, statutory obstacles, perceptions that punishments are inadequate, and the time and effort required for processing and paperwork. An estimated two of every 1,000 occasions of illegal drinking by youth under 21 result in an arrest.
Combine a lack of consequences with the thrill of breaking the law, and MLDA 21 actually encourages underage drinking and potentially other illegal activities, such as driving while intoxicated and illicit drug use. Lowering the MLDA would make 18- to 20-year-olds subject to the same laws enforced for those 21 and over.
**Background**
All 50 US states have set their minimum drinking age to 21, although exceptions do exist on a state-by-state basis for consumption at home, under adult supervision, for medical necessity, and other reasons.
Proponents of lowering the minimum legal drinking age (MLDA) from 21 argue that it has not stopped teen drinking, and has instead pushed underage binge drinking into private and less controlled environments, leading to more health and life-endangering behavior by teens.
Opponents of lowering the MLDA argue that teens have not yet reached an age where they can handle alcohol responsibly, and thus are more likely to harm or even kill themselves and others by drinking prior to 21. They contend that traffic fatalities decreased when the MLDA increased. Read more background…
|
# Should Teachers Get Tenure?
**Argument**
Tenure protects teachers from being prematurely fired after a student makes a false accusation or a parent threatens expensive legal action against the district.
After an accusation, districts might find it expedient to quickly remove a teacher instead of investigating the matter and incurring potentially expensive legal costs. The thorough removal process mandated by tenure rules ensures that teachers are not removed without a fair hearing.
**Background**
Teacher tenure is the increasingly controversial form of job protection that public school teachers in 46 states receive after 1-5 years on the job. An estimated 2.3 million teachers have tenure.
Proponents of tenure argue that it protects teachers from being fired for personal or political reasons, and prevents the firing of experienced teachers to hire less expensive new teachers. They contend that since school administrators grant tenure, neither teachers nor teacher unions should be unfairly blamed for problems with the tenure system.
Opponents of tenure argue that this job protection makes the removal of poorly performing teachers so difficult and costly that most schools end up retaining their bad teachers. They contend that tenure encourages complacency among teachers who do not fear losing their jobs, and that tenure is no longer needed given current laws against job discrimination. Read more background…
|
# Should More Gun Control Laws Be Enacted?
**Argument**
Legally owned guns are frequently stolen and used by criminals.
A June 2013 Institute of Medicine (IOM) report states that “[a]lmost all guns used in criminal acts enter circulation via initial legal transaction.” Between 2005 and 2010, 1.4 million guns were stolen from US homes during property crimes (including burglary and car theft), a yearly average of 232,400. Ian Ayres, JD, PhD, and John J. Donohue, JD, PhD, Professors of Law at Yale Law School and Stanford Law School respectively, state, “with guns being a product that can be easily carried away and quickly sold at a relatively high fraction of the initial cost, the presence of more guns can actually serve as a stimulus to burglary and theft. Even if the gun owner had a permit to carry a concealed weapon and would never use it in furtherance of a crime, is it likely that the same can be said for the burglar who steals the gun?”
**Background**
The United States has 120.5 guns per 100 people, or about 393,347,000 guns, which is the highest total and per capita number in the world. 22% of Americans own one or more guns (35% of men and 12% of women). America’s pervasive gun culture stems in part from its colonial history, revolutionary roots, frontier expansion, and the Second Amendment, which states: “A well regulated militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.”
Proponents of more gun control laws state that the Second Amendment was intended for militias; that gun violence would be reduced; that gun restrictions have always existed; and that a majority of Americans, including gun owners, support new gun restrictions.
Opponents say that the Second Amendment protects an individual’s right to own guns; that guns are needed for self-defense from threats ranging from local criminals to foreign invaders; and that gun ownership deters crime rather than causes more crime. Read more background…
|
# Should the United States Continue Its Use of Drone Strikes Abroad?
**Argument**
Drone strikes make the United States safer by remotely decimating terrorist networks across the world.
Drone strikes in Pakistan, Afghanistan, Yemen, and Somalia have killed between 7,665 and 14,247 militants and alleged militants, including high-level commanders implicated in organizing plots against the United States.
According to President Obama, “[d]ozens of highly skilled al Qaeda commanders, trainers, bomb makers and operatives have been taken off the battlefield. Plots have been disrupted that would have targeted international aviation, U.S. transit systems, European cities and our troops in Afghanistan. Simply put, these strikes have saved lives.”
Beyond killing terrorists, the fact that drones are remotely piloted saves US military lives. Drones are launched from bases in allied countries and are operated remotely by pilots in the United States, minimizing the risk of injury and death that would occur if ground soldiers and airplane pilots were used instead. Al Qaeda, the Taliban, and their affiliates often operate in distant and environmentally unforgiving locations where it would be extremely dangerous for the United States to deploy teams of special forces to track and capture terrorists. Such pursuits may pose serious risks to US troops, including firefights with surrounding tribal communities, anti-aircraft shelling, land mines, improvised explosive devices (IEDs), suicide bombers, snipers, dangerous weather conditions, and harsh environments. Drone strikes eliminate all of those risks common to “boots on the ground” missions.
**Background**
Unmanned aerial vehicles (UAVs), otherwise known as drones, are remotely-controlled aircraft which may be armed with missiles and bombs for attack missions. Since the World Trade Center attacks on Sep. 11, 2001 and the subsequent “War on Terror,” the United States has used thousands of drones to kill suspected terrorists in Pakistan, Afghanistan, Yemen, Somalia, and other countries.
Proponents state that drone strikes help prevent “boots on the ground” combat and make America safer, that the strikes are legal under American and international law, and that they are carried out with the support of Americans and foreign governments.
Opponents state that drone strikes kill civilians, creating more terrorists than they kill and sowing animosity in foreign countries; that the strikes are extrajudicial and illegal; and that they create a dangerous disconnect between the horrors of war and the soldiers carrying out the strikes.
|
# Should More Gun Control Laws Be Enacted?
**Argument**
Guns are rarely used in self-defense.
Of the 29,618,300 violent crimes committed between 2007 and 2011, 0.79% of victims (235,700) protected themselves with a threat of use or use of a firearm, the least-employed protective behavior. In 2010 there were 230 “justifiable homicides” in which a private citizen used a firearm to kill a felon, compared to 8,275 criminal gun homicides (or, 36 criminal homicides for every “justifiable homicide”). Of the 84,495,500 property crimes committed between 2007 and 2011, 0.12% of victims (103,000) protected themselves with a threat of use or use of a firearm.
**Background**
The United States has 120.5 guns per 100 people, or about 393,347,000 guns, which is the highest total and per capita number in the world. 22% of Americans own one or more guns (35% of men and 12% of women). America’s pervasive gun culture stems in part from its colonial history, revolutionary roots, frontier expansion, and the Second Amendment, which states: “A well regulated militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.”
Proponents of more gun control laws state that the Second Amendment was intended for militias; that gun violence would be reduced; that gun restrictions have always existed; and that a majority of Americans, including gun owners, support new gun restrictions.
Opponents say that the Second Amendment protects an individual’s right to own guns; that guns are needed for self-defense from threats ranging from local criminals to foreign invaders; and that gun ownership deters crime rather than causes more crime. Read more background…
|
# Was Bill Clinton a Good President?
**Argument**
Other:
Clinton was aware of the threat of Al Qaeda and authorized the CIA to kill Osama bin Laden. He sought to hunt down bin Laden after the Oct. 12, 2000 attack on the USS Cole, but the CIA and FBI refused to certify bin Laden’s involvement in the terrorist act. “I got closer to killing him than anybody’s gotten since,” Clinton said in a Sep. 24, 2006 interview with Chris Wallace.
**Background**
William Jefferson Clinton, known as Bill Clinton, served as the 42nd President of the United States from Jan. 20, 1993 to Jan. 19, 2001.
His proponents contend that under his presidency the US enjoyed the lowest unemployment and inflation rates in recent history, high home ownership, low crime rates, and a budget surplus. They give him credit for eliminating the federal deficit and reforming welfare, despite being forced to deal with a Republican-controlled Congress.
His opponents say that Clinton cannot take credit for the economic prosperity experienced during his scandal-plagued presidency because it was the result of other factors. In fact, they blame his policies for the financial crisis that began in 2007. They point to his impeachment by Congress and his failure to pass universal health care coverage as further evidence that he was not a good president. Read more background…
|
# Should Animals Be Used for Scientific or Commercial Testing?
**Argument**
Most experiments involving animals are flawed, wasting the lives of the animal subjects.
A peer-reviewed study found serious flaws in the majority of publicly funded US and UK animal studies using rodents and primates: “only 59% of the studies stated the hypothesis or objective of the study and the number and characteristics of the animals used.” A 2017 study found further flaws in animal studies, including “incorrect data interpretation, unforeseen technical issues, incorrectly constituted (or absent) control groups, selective data reporting, inadequate or varying software systems, and blatant fraud.”
**Background**
An estimated 26 million animals are used every year in the United States for scientific and commercial testing. Animals are used to develop medical treatments, determine the toxicity of medications, check the safety of products destined for human use, and other biomedical, commercial, and health care uses. Research on living animals has been practiced since at least 500 BC.
Proponents of animal testing say that it has enabled the development of many life-saving treatments for both humans and animals, that there is no alternative method for researching a complete living organism, and that strict regulations prevent the mistreatment of animals in laboratories.
Opponents of animal testing say that it is cruel and inhumane to experiment on animals, that alternative methods available to researchers can replace animal testing, and that animals are so different from human beings that research on animals often yields irrelevant results. Read more background…
|
# Do Violent Video Games Contribute to Youth Violence?
**Argument**
The US military uses violent video games to train soldiers to kill.
The US Marine Corps licensed Doom II in 1996 to create Marine Doom in order to train soldiers. In 2002, the US Army released first-person shooter game America’s Army to recruit soldiers and prepare recruits for the battlefield.
While the military may benefit from training soldiers to kill using video games, kids who are exposed to these games lack the discipline and structure of the armed forces and may become more susceptible to being violent.
Dave Grossman, retired lieutenant colonel in the United States Army and former West Point psychology professor, stated: “[T]hrough interactive point-and-shoot video games, modern nations are indiscriminately introducing to their children the same weapons technology that major armies and law enforcement agencies around the world use to ‘turn off’ the midbrain ‘safety catch’” that prevents most people from killing.
**Background**
Around 73% of American kids age 2-17 played video games in 2019, a 6% increase over 2018. Video games accounted for 17% of kids’ entertainment time and 11% of their entertainment spending. The global video game industry was worth $159.3 billion in 2020, a 9.3% increase from 2019.
Violent video games have been blamed for school shootings, increases in bullying, and violence towards women. Critics argue that these games desensitize players to violence, reward players for simulating violence, and teach children that violence is an acceptable way to resolve conflicts.
Video game advocates contend that a majority of the research on the topic is deeply flawed and that no causal relationship has been found between video games and social violence. They argue that violent video games may provide a safe outlet for aggressive and angry feelings and may reduce crime. Read more background…
|
# Is Human Activity Primarily Responsible for Global Climate Change?
**Argument**
Many scientists disagree that human activity is primarily responsible for global climate change.
A report found more than 1,000 scientists who disagreed that humans are primarily responsible for global climate change. The claim that 97% of scientists agree on the cause of global warming is inaccurate. The research on 11,944 studies actually found that only 3,974 even expressed a view on the issue. Of those, just 64 (1.6%) said humans are the main cause.
A Purdue University survey found that 47% of climatologists challenge the idea that humans are primarily responsible for climate change and instead believe that climate change is caused by an equal combination of humans and the environment (37%), mostly by the environment (5%), or that there’s not enough information to say (5%).
**Background**
Average surface temperatures on earth have risen more than 2°F over the past 100 years. During this time period, atmospheric levels of greenhouse gases such as carbon dioxide (CO2) and methane (CH4) have notably increased. This site explores the debate on whether climate change is caused by humans (also known as anthropogenic climate change).
The pro side argues rising levels of atmospheric greenhouse gases are a direct result of human activities such as burning fossil fuels, and that these increases are causing significant and increasingly severe climate changes including global warming, loss of sea ice, sea level rise, stronger storms, and more droughts. They contend that immediate international action to reduce greenhouse gas emissions is necessary to prevent dire climate changes.
The con side argues human-generated greenhouse gas emissions are too small to substantially change the earth’s climate and that the planet is capable of absorbing those increases. They contend that warming over the 20th century resulted primarily from natural processes such as fluctuations in the sun’s heat and ocean currents. They say the theory of human-caused global climate change is based on questionable measurements, faulty climate models, and misleading science. Read more background…
|
# Should Abortion Be Legal?
**Argument**
Abortion bans endanger healthcare for those not seeking abortions.
Medical treatment for nonviable pregnancies is often exactly the same as an abortion.
Ectopic pregnancies occur when a fertilized egg implants somewhere other than the uterine cavity. About one in 50 pregnancies are ectopic, and they are nonviable. Bleeding from ectopic pregnancies caused 10% of all pregnancy-related deaths, and ectopic pregnancies were the leading cause of maternal death in the first trimester.
Other pregnancies can be nonviable, including when there is little or no chance of the baby’s survival once it is born or if the baby has died in utero. The treatment for ectopic and other nonviable pregnancies is often the same as that for an abortion.
One out of every ten pregnancies ends in miscarriage. The drugs used for medication abortions are the only treatment recommended for early miscarriages. For later or complicated miscarriages, the same surgical procedure used for abortions is recommended.
While some abortion bans include specific exceptions for nonviable pregnancies and miscarriages, other bans are too vague to be practicable. Healthcare providers may refuse to perform a procedure that could be interpreted as an “on-demand” abortion for fear of liability or prosecution.
Arguing that doctors and others use them as loopholes for “on demand” abortions, lobbyists are working to eliminate exceptions altogether, which would further endanger and traumatize people seeking care for dangerous medical conditions.
Some pharmacists have refused to fill prescriptions for miscarriages and ectopic pregnancies, because the drugs can also be used for abortion. In Texas, pharmacists can be sued for “aiding and abetting” an abortion.
Further, bans are a slippery slope to contraceptive and other healthcare restrictions. For example, some already wrongly view emergency contraception (the morning after pill) as an abortifacient and are thinking of including it in abortion bans.
**Background**
The debate over whether abortion should be a legal option has long divided people around the world. Split into two groups, pro-choice and pro-life, the two sides frequently clash in protests.
Proponents of legal abortion believe abortion is a safe medical procedure that protects lives, while abortion bans endanger pregnant people not seeking abortions, and deny bodily autonomy, creating wide-ranging repercussions.
Opponents of legal abortion believe abortion is murder because life begins at conception, that abortion creates a culture in which life is disposable, and that increased access to birth control, health insurance, and sexual education would make abortion unnecessary. Read more background…
|
# Dress Codes - Top 3 Pros and Cons | ProCon.org
**Argument**
Dress codes promote inclusiveness and a comfortable, cooperative environment while eliminating individualistic attire that can distract from common goals.
As Bonneville Academy, a STEM school in Stansbury Park, Utah, explained, “The primary objective of a school dress code is to build constant equality among all the students. When all the students wear the same style of dress, then there will be the same kind of atmosphere across the school campus. This pattern encourages the student to concentrate more on their academic and co-curricular activities… then all the learning becomes more interesting and relevant… Students who are used to dress[ing] properly will be well equipped to evolve into the actual world, especially when they enter into the ever-competitive job market.”
Susan M. Heathfield, a management and organization development consultant, stated, “Employees appreciate guidance about appropriate business attire for your workplace—especially when you specify a rationale for the dress code that your team has selected.” Simply knowing whether suits are required or jeans are appropriate removes guesswork for employees, which leads to a more comfortable work environment. Similarly, dress codes can make a disparate group of people feel like a team—no one is left out or judged differently solely on the basis of the way they dress.
Dress codes can also make workplace hierarchies friendlier and more work-conducive. A manager who dresses in suits with ties may intimidate employees who wear branded polo shirts and khakis, preventing effective communication.
Further, dress codes mean employees and customers or clients won’t be distracted by individualistic clothing. For example, a customer of Nebraska State Bank & Trust Co. complained to the bank’s president about a branch employee’s outfit of mismatched tunic and leggings, fringed boots, and large earrings. A customer complaint can not only alienate the customer but also distract employees from their tasks and potentially embarrass or shame the employee whose outfit sparked the complaint.
**Background**
While the most frequent debate about dress codes may be centered around K-12 schools, dress codes impact just about everyone’s daily life. From the “no shirt, no shoes, no service” signs (which exploded in popularity in the 1960s and 70s in reaction to the rise of hippies) to COVID-19 pandemic mask mandates, employer restrictions on tattoos and hairstyles, and clothing regulations on airlines, dress codes are more prevalent than we might think. [1] [2] [3] [4] [5]
While it’s difficult to pinpoint the first dress code–humans started wearing clothes around 170,000 years ago–nearly every culture and country throughout history, formally or informally, has had strictures on what to wear and not to wear. These dress codes are common “cultural signifiers,” reflecting social beliefs and cultural values, most often of the social class dominating the culture. Such codes have been prevalent in Islamic countries since the founding of the religion in the seventh century, and they continue to cause controversy today—are they appropriate regulations for maintaining piety, community, and public decency, or are they demeaning and oppressive, especially for Islamic women? [6] [7] [8] [9] [10]
In the West, people were arrested and imprisoned as early as 1565 in England for violating dress codes. One man in question, a servant named Richard Walweyn, was arrested for wearing “a very monsterous and outraygeous great payre of hose” (or trunk hose) and was imprisoned until he could show he owned other hose “of a decent & lawfull facyon.” Other dress codes of the time reserved expensive garments made of silk, fur, and velvet for the nobility, reinforcing how dress codes have been implemented for purposes of social distinction. Informal dress codes (such as high-fashion clothes with logos and the unofficial “Midtown Uniform” worn by men working in finance) underscore how often dress codes have been used to mark and maintain visual distinctions between classes and occupations. Other dress codes have been enacted overtly to police morality, as with the bans on bobbed hair and flapper dresses in the 1920s. Still others are intended to foster an atmosphere of inclusiveness and professionalism, or specifically to maintain safety in the workplace. [6] [7] [8] [11] [12]
|
# Should All Americans Have the Right (Be Entitled) to Health Care?'
**Argument**
A right to health care could lead to government rationing of medical services.
Countries with universal health care, including Australia, Canada, New Zealand, and the United Kingdom, all ration health care using methods such as controlled distribution, budgeting, price setting, and service restrictions. In the United Kingdom, the National Health Service (NHS) rations health care using a cost-benefit analysis. For example, in 2018 any drug that provided an extra year of good-quality life for about $25,000 or less was generally deemed cost-effective, while one that cost more might not be. In order to expand health coverage to more Americans, Obamacare created an Independent Payment Advisory Board (IPAB) to make cost-benefit analyses to keep Medicare spending from growing too fast. According to Sally Pipes, President of the Pacific Research Institute, the IPAB “is essentially charged with rationing care.” According to a Wall Street Journal editorial, “once health care is nationalized, or mostly nationalized, medical rationing is inevitable.”
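The cost-benefit test described above reduces to simple arithmetic: divide a treatment’s cost by the quality-adjusted life years (QALYs) it buys and compare against a threshold. A minimal sketch, assuming the roughly $25,000-per-extra-year threshold cited above; the drug figures in the examples are hypothetical:

```python
# Toy illustration of a cost-per-QALY threshold test like the one the
# NHS is described as using above. The ~$25,000 threshold is the figure
# cited in the text; the example costs and QALY gains are hypothetical.

THRESHOLD_PER_QALY = 25_000  # dollars per extra year of good-quality life

def is_cost_effective(cost: float, qalys_gained: float) -> bool:
    """True when the cost per quality-adjusted life year is at or under the threshold."""
    return cost / qalys_gained <= THRESHOLD_PER_QALY

# Hypothetical drugs:
print(is_cost_effective(20_000, 1.0))  # $20,000 per QALY -> True
print(is_cost_effective(90_000, 2.0))  # $45,000 per QALY -> False
```

Real-world appraisals weigh much more than a single ratio, but this is the basic shape of the rationing calculation the argument refers to.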
**Background**
27.5 million people in the United States (8.5% of the US population) do not have health insurance. Among the 91.5% who do have health insurance, 67.3% have private insurance while 34.4% have government-provided coverage through programs such as Medicaid or Medicare (some people have more than one type of coverage). Employer-based health insurance is the most common type of coverage, applying to 55.1% of the US population. The United States is the only nation among the 37 OECD (Organization for Economic Co-operation and Development) nations that does not have universal health care either in practice or by constitutional right.
Proponents of the right to health care say that no one in one of the richest nations on earth should go without health care. They argue that a right to health care would stop medical bankruptcies, improve public health, reduce overall health care spending, help small businesses, and that health care should be an essential government service.
Opponents argue that a right to health care amounts to socialism and that it should be an individual’s responsibility, not the government’s role, to secure health care. They say that government provision of health care would decrease the quality and availability of health care, and would lead to larger government debt and deficits. Read more background…
|
# Was Ronald Reagan a Good President?'
**Argument**
Health:
Reagan almost completely ignored the growing AIDS epidemic. Although the first cases of AIDS were identified in the early 1980s, Reagan did not publicly address the epidemic until May 31, 1987, when he spoke at an AIDS conference in Washington, DC. By that time, 36,058 Americans had been diagnosed with the disease and 20,849 had died.
**Background**
Ronald Wilson Reagan served as the 40th President of the United States from Jan. 20, 1981 to Jan. 19, 1989. He won the Nov. 4, 1980 presidential election, beating Democratic incumbent Jimmy Carter with 50.7% of the votes, and won his second term by a landslide of 58.8% of the votes.
Reagan’s proponents point to his accomplishments, including stimulating economic growth in the US, strengthening its national defense, revitalizing the Republican Party, and ending the global Cold War as evidence of his good presidency.
His opponents contend that Reagan’s poor policies, such as bloating the national defense, drastically cutting social services, and making missiles-for-hostages deals, led the country into record deficits and global embarrassment. Read more background…
|
# Should People Become Vegetarian?'
**Argument**
A vegetarian diet lowers risk of diseases.
A vegetarian diet reduces the chances of developing kidney stones and gallstones. Diets high in animal protein cause the body to excrete calcium, oxalate, and uric acid—the main components of kidney stones and gallstones.
A vegetarian diet also lowers the risk of heart disease: in one study, vegetarians had 24% lower mortality from heart disease than meat eaters. A vegetarian diet additionally helps lower blood pressure, prevent hypertension, and thus reduce the risk of stroke.
Eating meat increases the risk of getting type 2 diabetes in women, and eating processed meat increases the risk in men. A vegetarian diet rich in whole grains, legumes, nuts, and soy proteins helps to improve glycemic control in people who already have diabetes.
Studies show that vegetarians are up to 40% less likely to develop cancer than meat eaters. In 2015 the World Health Organization classified red meat as “probably carcinogenic to humans” and processed meats as “carcinogenic to humans.” Consuming beef, pork, or lamb five or more times a week significantly increases the risk of colon cancer. Eating processed meats such as bacon or sausage increases this risk even further. Diets high in animal protein were associated with a 4-fold increase in cancer death risk compared to high protein diets based on plant-derived protein sources.
**Background**
Americans eat an average of 58 pounds of beef, 96 pounds of chicken, and 52 pounds of pork, per person, per year, according to the United States Department of Agriculture (USDA). Vegetarians, about 5% of the US adult population, do not eat meat (including poultry and seafood). The percentage of Americans who identify as vegetarian has remained steady for two decades. 11% of those who identify as liberal follow a vegetarian diet, compared to 2% of conservatives.
Many proponents of vegetarianism say that eating meat harms health, wastes resources, and creates pollution. They often argue that killing animals for food is cruel and unethical since non-animal food sources are plentiful.
Many opponents of a vegetarian diet say that meat consumption is healthful and humane, and that producing vegetables causes many of the same environmental problems as producing meat. They also argue that humans have been eating and enjoying meat for 2.3 million years. Read more background…
|
# Should the Federal Corporate Income Tax Rate Be Raised?'
**Argument**
Raising the corporate income tax rate would make taxes fairer.
As a 2021 Biden Administration White House statement explains, “The current tax system unfairly prioritizes large multinational corporations over Main Street American small businesses. Small businesses don’t have access to the army of lawyers and accountants that allowed 55 profitable large corporations to avoid paying any federal corporate taxes in 2020, and they cannot shift profits into tax havens to avoid paying U.S. taxes like multinational corporations can. U.S. multinationals report 60 percent of their profits abroad in just seven low tax jurisdictions that, combined, make up less than 4 percent of global GDP. These corporations do not make money in these countries; they just report it there to take a huge tax cut. In 2018, married couples making about $150,000 working at their own small business paid over 20 percent of their income in federal income and self-employment taxes. By contrast, U.S. multinational corporations paid less than 10 percent in corporate income taxes on U.S. profits.”
Large corporations have the ability to pay more taxes without much effect. Kimberly Clausing, Deputy Assistant Secretary for Tax Analysis at the US Department of the Treasury, stated, “Corporate taxes are paid only by profitable corporations, and for those without profits, any percent of zero is zero. Also, many companies can carry forward losses to offset taxes in future years. However, companies profiting in the current environment, such as Amazon or Peloton, can reasonably be expected to contribute a share of their pandemic profits in tax payments.”
Clausing explained, “The corporate tax, when it does fall on profitable companies, mostly falls on the excess profits they earn from market power or other factors (due to the dominance of large companies in markets with little competition, luck or risk-taking), not the normal return on capital investment. Treasury economists calculated that such excess profits made up more than 75% of the corporate tax base by 2013. A higher corporate tax rate can rein in market power and promote a fairer economy.”
**Background**
The federal corporate income tax was created in 1909, with a uniform rate of 1% on all business income above $5,000. Since then the rate has been as high as 52.8%, in 1969. Today’s rate is set at 21% for all companies.
Proponents of raising the corporate tax rate argue that corporations should pay their fair share of taxes and that those taxes will keep companies in the United States while allowing the US federal government to pay for much needed infrastructure and social programs.
Opponents of raising the corporate tax rate argue that an increase will weaken the economy and that the taxes will ultimately be paid by everyday people while driving corporations overseas. Read more background…
|
# Zoos - Pros & Cons - ProCon.org'
**Argument**
Zoos produce helpful scientific research.
228 accredited zoos published 5,175 peer-reviewed manuscripts between 1993 and 2013. In 2017, 173 accredited US zoos spent $25 million on research, studied 485 species and subspecies of animals, worked on 1,280 research projects, and published 170 research manuscripts.
Because so many diseases can be transmitted from animals to humans, such as Ebola, Hantavirus, and the bird flu, zoos frequently conduct disease surveillance research in wildlife populations and their own captive populations that can lead to a direct impact on human health. For example, the veterinary staff at the Bronx Zoo in New York alerted health officials of the presence of West Nile Virus.
Zoo research is used in other ways as well, such as informing legislation (like the Sustainable Shark Fisheries and Trade Act), helping engineers build a robot that moves like a sidewinder snake, and encouraging minority students to enter STEM careers.
**Background**
Zoos have existed in some form since at least 2500 BCE in Egypt and Mesopotamia, where records indicate giraffes, bears, dolphins, and other animals were kept by aristocrats. The oldest still operating zoo in the world, Tiergarten Schönbrunn in Vienna, opened in 1752. [1] [2]
The contemporary zoo evolved from 19th century European zoos. Largely modeled after the London Zoo in Regent’s Park, these zoos were intended for “genteel amusement and edification,” according to Emma Marris, environmental writer and Institute Fellow at the UCLA Institute of the Environment and Sustainability. As such, reptile houses, aviaries, and insectariums were added with animals grouped taxonomically, to move zoos beyond the spectacle of big, scary animals. [40]
Carl Hagenbeck, a German exotic animal importer, introduced the modern model of more natural habitats for animals instead of obvious cages at his Animal Park in Hamburg in 1907. That change prompted the shift in zoo narrative from entertainment to the protection of animals. In the late 20th century, the narrative changed again to the conservation of animals to stave off extinction. [40]
Controversy has historically surrounded zoos, from debates over displaying “exotic” humans in exhibits to zookeepers not knowing what to feed animals. For example, Madame Ningo, the first gorilla to arrive in the United States, in 1911, was fed hot dinners and cooked meat at the Bronx Zoo even though gorillas are herbivores. [3] [4]
On both sides, the contemporary debate about zoos tends to focus on animal welfare: do zoos protect animals, or imprison them?
|
# Should Tablets Replace Textbooks in K-12 Schools?'
**Argument**
Tablets allow teachers to better customize student learning.
There are thousands of education and tutoring applications on tablets, so teachers can tailor student learning to an individual style/personality instead of a one-size-fits-all approach. There are more than 20,000 education apps available for the iPad alone.
**Background**
Textbook publishing in the United States is an $11 billion industry, with five companies – Cengage Learning, Houghton Mifflin Harcourt, McGraw-Hill, Pearson Education, and Scholastic – capturing about 80% of this market. Tablets are an $18 billion industry with 53% of US adults, 81% of US children aged eight to 17, and 42% of US children aged under eight, owning a tablet. As tablets have become more prevalent, a new debate has formed over whether K-12 school districts should switch from print textbooks to digital textbooks on tablets and e-readers.
Proponents of tablets say that they are supported by most teachers and students, are much lighter than print textbooks, and improve standardized test scores. They say tablets can hold hundreds of textbooks, save the environment by lowering the amount of printing, increase student interactivity and creativity, and that digital textbooks are cheaper than print textbooks.
Opponents of tablets say that they are expensive, too distracting for students, easy to break, and costly/time-consuming to fix. They say that tablets contribute to eyestrain, headaches, and blurred vision, increase the excuses available for students not doing their homework, require costly Wi-Fi networks, and become quickly outdated as new technologies emerge. Read more background…
|
# Should Tablets Replace Textbooks in K-12 Schools?'
**Argument**
High-level education officials support tablets over textbooks.
Secretary of Education Arne Duncan and Federal Communications Commission chair Julius Genachowski said on Feb. 1, 2012 that schools and publishers should “switch to digital textbooks within five years to foster interactive education, save money on books, and ensure classrooms in the US use up-to-date content.” The federal government, in collaboration with several tech organizations, released a 70-page guide for schools called the “Digital Textbook Playbook,” a “roadmap for educators to accelerate the transition to digital textbooks.”
|
# GMO Pros and Cons - Should Genetically Modified Organisms Be Grown?'
**Argument**
Tinkering with the genetic makeup of plants may result in changes to the food supply that introduce toxins or trigger allergic reactions.
An article in Food Science and Human Wellness said, “Three major health risks potentially associated with GM foods are: toxicity, allergenicity and genetic hazards.” The authors raised concerns that the GMO process could disrupt a plant’s genetic integrity, with the potential to activate toxins or change metabolic toxin levels in a ripple effect beyond detection.
A joint commission of the World Health Organization (WHO) and the Food and Agriculture Organization of the UN (FAO) identified two potential unintended effects of genetic modification of food sources: higher levels of allergens in a host plant that contains known allergenic properties, and new proteins created by the gene insertion that could cause allergic reactions.
The insertion of a gene to modify a plant can cause problems in the resulting food. After StarLink corn, which had been genetically altered to be insect-resistant, entered the food supply, there were several reported cases of allergic reactions in consumers. The reactions ranged from abdominal pain and diarrhea to skin rashes to life-threatening issues.
**Background**
Selective breeding techniques have been used to alter the genetic makeup of plants for thousands of years. The earliest forms of selective breeding were simple and have persisted: farmers save and plant only the seeds of plants that produce the tastiest or largest (or otherwise preferable) results. In 1866, Gregor Mendel, an Austrian monk, discovered and developed the basic principles of heredity by crossbreeding peas. More recently, genetic engineering has allowed DNA from one species to be inserted into a different species to create genetically modified organisms (GMOs). [1] [2] [53] [55]
To create a GMO plant, scientists follow these basic steps over several years:
1. Identify the desired trait and find an animal or plant with that trait. For example, scientists looking to make corn more insect-resistant identified a gene in a soil bacterium (Bacillus thuringiensis, or Bt) that naturally produces an insecticide commonly used in organic agriculture.
2. Copy the specific gene for the desired trait.
3. Insert the specific gene into the DNA of the plant scientists want to change. In the above example, the insecticide gene from Bacillus thuringiensis was inserted into corn.
4. Grow the new plant and perform tests for safety and the desired trait. [55]
According to the Genetic Literacy Project, “The most recent data from the International Service for the Acquisition of Agri-biotech Applications (ISAAA) shows that more than 18 million farmers in 29 countries, including 19 developing nations, planted over 190 million hectares (469.5 million acres) of GMO crops in 2019.” The organization stated that a “majority” of European countries and Russia, among other countries, ban the crops. However, most countries that ban the growth of GMO crops allow their import. Europe, for example, imports 30 million tons of corn and soy animal feeds every year, much of which is GMO. [58]
In the United States, the health and environmental safety standards for GM crops are regulated by the Environmental Protection Agency (EPA), the Food and Drug Administration (FDA), and the US Department of Agriculture (USDA). Between 1985 and Sep. 2013, the USDA approved over 17,000 different GM crops for field trials, including varieties of corn, soybean, potato, tomato, wheat, canola, and rice, with various genetic modifications such as herbicide tolerance; insect, fungal, and drought resistance; and flavor or nutrition enhancement. [44] [45]
In 1994, the “FLAVR SAVR” tomato became the first genetically modified food to be approved for public consumption by the FDA. The tomato was genetically modified to increase its firmness and extend its shelf life. [51]
Recently, the term “bioengineered” food has come into popularity, under the argument that almost all food has been “genetically modified” via selective breeding or other basic growing methods. Bioengineered food refers specifically to food that has undergone modification using rDNA technology, but does not include food genetically modified by basic cross-breeding or selective breeding. As of Jan. 10, 2022, the USDA listed 12 bioengineered products available in the US: alfalfa, Arctic apples, canola, corn, cotton, BARI Bt Begun varieties of eggplant, ringspot virus-resistant varieties of papaya, pink flesh varieties of pineapple, potato, AquAdvantage salmon, soybean, summer squash, and sugarbeet. [56] [57]
The National Bioengineered Food Disclosure Standard established mandatory national standards for labeling foods with genetically engineered ingredients in the United States. The Standard was implemented on Jan. 1, 2020 and compliance became mandatory on Jan. 1, 2022. [46]
49% of US adults believe that eating GMO foods is “worse” for one’s health, 44% say such foods are “neither better nor worse,” and 5% believe they are “better,” according to a 2018 Pew Research Center report. [9]
|
# Should the Penny Stay in Circulation? - Top 3 Pros and Cons'
**Argument**
Preserving the penny keeps consumer prices down and avoids harming low-income households.
Mark Weller, Executive Director of the pro-penny group Americans for Common Cents, says, “The alternative to the penny is rounding to the nickel, and that’s something that will negatively impact working families every time they buy a gallon of gas or a gallon of milk.”
The US Federal Reserve found that minorities and low-income people are more likely to use cash than credit cards. Raymond Lombra, Professor of Economics at Pennsylvania State University, says the extra rounding charges would exceed $600 million annually and would “be regressive, affecting the poor and other disadvantaged people groups disproportionately.”
One study found that penny rounding in Canada costs grocery store customers an estimated 3.27 million Canadian dollars (2.5 million USD) annually.
**Background**
The US Mint shipped 8.4 billion pennies for circulation in 2017, more than all nickels (1.3 billion), dimes (2.4 billion), and quarters (1.9 billion) combined. [1] While countries such as Australia, Canada, and New Zealand have phased out their one-cent pieces, Harris Poll found that 55% of Americans are in favor of keeping the penny and 29% want to abolish it. [2][3]
The US Mint produces coins as instructed by Congress, so a law would have to be passed by Congress and signed by the President in order for pennies to be removed from circulation. [4] Several unsuccessful legislative efforts have sought to bring about the penny’s extinction. Most recently, in 2017, Senators John McCain (R-AZ) and Mike Enzi (R-WY) sponsored ultimately failed legislation that would have suspended minting of the penny. [5]
Should the Penny Stay in Circulation?
Pro 1
Preserving the penny keeps consumer prices down and avoids harming low-income households.
Mark Weller, Executive Director of the pro-penny group Americans for Common Cents, says, “The alternative to the penny is rounding to the nickel, and that’s something that will negatively impact working families every time they buy a gallon of gas or a gallon of milk.” [6]
The US Federal Reserve found that minorities and low-income people are more likely to use cash than credit cards. [7] Raymond Lombra, Professor of Economics at Pennsylvania State University, says the extra rounding charges would exceed $600 million annually and would “be regressive, affecting the poor and other disadvantaged people groups disproportionately.” [9]
One study found that penny rounding in Canada costs grocery store customers an estimated 3.27 million Canadian dollars (2.5 million USD) annually. [9]
Pro 2
A penny can be used for decades and is more cost-efficient to produce than a nickel.
Most US coins have an expected circulation life of 20 to 30 years, meaning a single penny could be used thousands or even millions of times. [10][11] So what if it costs 1.8 cents to make? [1] That’s a bargain for how many times it gets used.
Without pennies, the Mint would be forced to make more five-cent pieces. That would cost an estimated $10.9 million more annually than it would cost to keep making pennies. [12]
Pennies and nickels both cost more to make than their face values, but on average over the last five years, nickels have been made at a loss of 2.58 cents per coin, compared to 0.65 cents per penny. [1][13] The cost of making and shipping pennies includes some fixed costs that the US Mint would continue to incur even if we abolished the penny, because the Mint would still make other coins. [12]
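The per-coin losses above are just unit production cost minus face value (negative seigniorage, a term defined later in this article). A minimal sketch, assuming average unit costs of about 1.65 cents per penny and 7.58 cents per nickel, back-solved from the five-year average losses cited above:

```python
# Negative seigniorage sketch: what the Mint loses on each coin is its
# unit production cost minus its face value. The unit costs below are
# assumptions back-solved from the five-year average losses cited in
# the text (0.65 cents per penny, 2.58 cents per nickel).

def loss_per_coin(face_value_cents: float, unit_cost_cents: float) -> float:
    """Cents lost per coin minted; positive means the coin is minted at a loss."""
    return unit_cost_cents - face_value_cents

print(round(loss_per_coin(1, 1.65), 2))  # 0.65 cents lost per penny
print(round(loss_per_coin(5, 7.58), 2))  # 2.58 cents lost per nickel
```

On these figures each replacement nickel loses roughly four times as much as the penny it would displace, which is the arithmetic behind the $10.9 million estimate above.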
Pro 3
The existence of pennies helps raise a lot of money for charities.
Organizations such as the Leukemia and Lymphoma Society, the Salvation Army, and the Ronald McDonald House ask people to donate pennies to raise funds. [11] In 2009, the Leukemia and Lymphoma Society announced that school children had collected over 15 billion pennies in support of its charitable work — that’s $150 million for blood cancer research and treatment. [14]
Dagmar Serota, who created a nonprofit called Good Cents for Oakland, said, “Pennies are easy to ask for and they are easy to give. And it’s very easy for a child to say, ‘Will you help me support this nonprofit, will you give me your pennies?’” [6] Elementary school students in Los Angeles, CA, gain significant leadership and civic engagement experience from USC’s Penny Harvest program by choosing how to donate the money they raise. [15]
Common Cents, a nonprofit based in Dallas, TX, has run a “Pennies from the Heart” program for 20 years, and the student-led efforts have raised over $850,000 for local charities. [16] The Ms. Cheap Penny Drive for Second Harvest in Tennessee raised enough to pay for 316,039 meals for the hungry in 2017. [17]
Con 1
The penny has practically no value and should be taken out of circulation just as other coins have been in US history.
You can’t buy anything for a penny; vending machines and parking meters won’t accept them. [18] Harvard economist Greg Mankiw stated, “The purpose of the monetary system is to facilitate exchange. The penny no longer serves that purpose. When people start leaving a monetary unit at the cash register for the next customer, the unit is too small to be useful.” [19] Former US Mint Director Philip Diehl said, “[T]he value of a penny has shrunk to the point that, if you earn more than the minimum wage, you’re losing money stopping and picking up a penny on the sidewalk.” [20]
Comedian John Oliver noted, “Two percent of Americans admitted to regularly throwing pennies in the garbage, which means the US Mint is spending millions to make garbage.” [21] Two-thirds of the billions of pennies produced are never seen in circulation again once they reach a consumer via the bank. [22]
Con 2
The process of making pennies is costly both financially and environmentally.
At a total per unit cost of 1.82 cents, it costs nearly two pennies to make one penny. [1] Aaron Klein, former Deputy Assistant Secretary for Economic Policy at the Treasury Department, estimates that the United States could see $1.78 billion in losses over the next 30 years if the penny remains in production. [23]
Making pennies also has environmental consequences from mining and transportation. Mining zinc and copper produces carbon dioxide emissions and pollutants, and uses vast amounts of energy. [24]
Over the last 35 years, 107 million pounds of carbon dioxide have been emitted due to pennies being delivered from the Mint to banks. [25] A California company called Mike’s Bikes has banned the penny from its registers because “Making pennies wastes natural resources [and] is toxic to people and the environment.” [26]
Con 3
Eliminating pennies would save time at the point of purchase without hurting customers or businesses financially.
The use of pennies in paying for goods and making change adds time to sales transactions. A study by Walgreen’s and the National Association of Convenience Stores found that pennies add 2 to 2.5 seconds to each cash transaction. [27]
As a result of that extra time per transaction, the average citizen wastes 730 seconds a year (12 minutes) paying with pennies. [28] Harvard economist Greg Mankiw says that this wasted time costs the US economy around $1 billion annually. [29] An estimate from Citizens to Retire the penny says that the 107 billion cash transactions in the United States annually add up to 120 million hours of time between customers and employees – at a cost of $2 billion to the US economy. [27]
Rounding transactions to the nearest nickel instead of using pennies wouldn’t harm consumers or stores. Robert M. Whaples, Professor of Economics at Wake Forest University, crunched the numbers and found that “The convenience stores and the customers basically broke even.” [30]
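The Citizens to Retire the Penny estimate above can be roughly reproduced with back-of-the-envelope arithmetic. A sketch, assuming the low end of the cited 2 to 2.5 second delay; counting both the customer’s and the employee’s time is our assumption, made to match the cited 120-million-hour figure:

```python
# Back-of-the-envelope check of the time-cost estimate cited above.
# Inputs from the text: ~2 seconds of penny handling per cash
# transaction and 107 billion annual US cash transactions.
# Doubling to count both parties' time is an assumption on our part.

SECONDS_PER_TRANSACTION = 2.0   # low end of the cited 2-2.5 s range
TRANSACTIONS_PER_YEAR = 107e9   # cited annual US cash transactions
PARTIES = 2                     # customer + employee (assumption)

hours_lost = SECONDS_PER_TRANSACTION * TRANSACTIONS_PER_YEAR * PARTIES / 3600
print(f"~{hours_lost / 1e6:.0f} million hours per year")  # ~119 million
```

Dividing the cited $2 billion annual cost by roughly 120 million hours implies the estimate values that combined time at about $17 per hour.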
Did You Know?
1. In 1792, Congress created a national mint authorized to make gold, silver, and copper coins, including the one-cent piece known as the penny. Abraham Lincoln’s face replaced an image of Lady Liberty on the penny in 1909, the 100th anniversary of his birth, making him the first real person featured on a regular-issue American coin. [31][32][33]
2. The first penny, known as the “Fugio cent,” was reportedly designed by Benjamin Franklin in 1787. Franklin is also credited with the saying, “A penny saved is a penny earned.” [35]
3. The official name for the US penny is “one-cent piece,” according to the US Department of the Treasury, but early Americans were allegedly in the habit of using the British term “penny.” [4][35]
4. Although originally made of pure copper, pennies today are composed of 97.5% zinc and 2.5% copper. [34]
5. The Department of Defense banned the use of pennies at overseas military bases in 1980 because the coins were deemed too heavy and not cost effective to ship. [36]
6. The difference between the face value of a coin and the actual cost to make it is known as seigniorage. [37]
7. Men are nearly twice as likely as women to favor dropping the penny (39% vs. 20%). [3]
Discussion Questions
1. Should the penny stay in circulation? Why or why not?
2. Should the US government consider removing other coins from circulation as transactions become more digital? Why or why not?
3. How would removing pennies impact people who primarily rely on cash transactions? Explain your answer(s).
Take Action
1. Analyze the pro argument of Mark Weller from Americans for Common Cents.
2. Explore the penny at the US Mint.
3. Consider the con arguments from NPR’s Planet Money reporter Greg Rosalsky.
4. Consider how you felt about the issue before reading this article. After reading the pros and cons on this topic, has your thinking changed? If so, how? List two to three ways. If your thoughts have not changed, list two to three ways your better understanding of the “other side of the issue” now helps you better argue your position.
5. Push for the position and policies you support by writing US national senators and representatives.
Sources
1. United States Mint, "United States Mint 2020 Annual Report," usmint.gov, 2021
2. Brian Milligan, "The Penny Coin: Should We Follow Ireland and Phase It Out?," bbc.com, May 8, 2016
3. The Harris Poll, "Penny for Your Thoughts? Americans Oppose Abolishing the Penny," theharrispoll.com, Sep. 22, 2015
4. US Department of the Treasury, "Resource Center: Denominations," treasury.gov, June 15, 2018
5. John McCain, "Senators John McCain & Mike Enzi Reintroduce the Coins Act to Save Billions in Taxpayer Dollars," mccain.senate.gov, Mar. 29, 2017
6. Andrew Stelzer, "Phasing out Pennies in a Bid for Change," npr.org, Nov. 29, 2009
7. Shaun O'Brien, "Consumer Preferences and the Use of Cash: Evidence from the Diary of Consumer Payments Choice Working Paper," frbsf.org, June 2014
8. Raymond E. Lombra, "Eliminating the Penny from the U.S. Coinage System: An Economic Analysis," Eastern Economic Journal, Fall 2001
9. Vancouver School of Economics, "Penny Rounding Profitable for Canadian Grocers: UBC VSE Student Research," economics.ubc.ca, Nov. 16, 2017
10. Reuters, "Pennies: The Throwaway Coins We Can't Let Go Of," latimes.com, May 31, 1994
11. Amy Livingston, "Should We Get Rid of the Penny? – 8 Reasons to Keep It vs Eliminate It," moneycrashers.com (accessed July 2, 2018)
12. Rodney J. Bosco and Kevin M. Davis, "Impact of Eliminating the Penny on the United States Mint's Costs and Profit in Fiscal Year 2011," pennies.org, Apr. 12, 2012
13. United States Mint, "United States Mint 2015 Annual Report," usmint.gov, June 2016
14. Associated Press, "US Penny Campaign Benefits Blood Cancer Research," newsday.com, Feb. 10, 2009
15. University of Southern California, "The USC Penny Harvest Wrapped up Its Third Successful Year," communities.usc.edu (accessed July 2, 2018)
16. Common Cents, "Non-Profits," commoncentsdallas.org (accessed July 2, 2018)
17. Mary Hance, "Penny Drive Sets Record in Raising Money for Second Harvest," tennessean.com, Feb. 17, 2017
18. Vlogbrothers, "I HATE PENNIES!!!! (Also Nickels.)," YouTube.com, Sep. 6, 2010
19. Greg Mankiw, "Get Rid of the Penny!," gregmankiw.blogspot.com, Apr. 22, 2006
20. Philip N. Diehl, "The Real Diehl: It’s Time for the United States to Eliminate the One Cent Coin," coinweek.com, Jan. 28, 2015
21. Last Week Tonight, "Pennies: Last Week Tonight with John Oliver (HBO)," YouTube.com, Nov. 22, 2015
22. J. William Gadsby, "Future of the Penny: Options for Congressional Consideration," gao.gov, July 16, 1996
23. Aaron Klein, "Time for Change: Modernizing to the Dollar Coin Saves Taxpayers Billions," dollarcoinalliance.org, July 22, 2013
24. Michelle Z. Donahue, "How Much Does it Really Cost (the Planet) to Make a Penny?," smithsonianmag.com, May 18, 2016
25. Josh Bloom, "Want to Help the Environment? Get Rid of Stupid Pennies," acsh.org, June 17, 2016
26. Mike's Bikes, "Pennies Don't Make 'Cents,'" mikesbikes.com (accessed July 2, 2018)
27. Retire the Penny, "It Makes 'Cents,'" retirethepenny.org (accessed July 2, 2018)
28. Sebastian Mallaby, "The Penny Stops Here," washingtonpost.com, Sep. 25, 2006
29. Greg Mankiw, "How to Make $1 Billion," gregmankiw.blogspot.com, Sep. 25, 2006
30. Consumer Affairs, "The Penny's End Is Near," consumeraffairs.com, July 2006
31. Courtney Waite, "The Origination of the Lincoln Penny," livinglincoln.web.unc.edu, Apr. 16, 2015
32. United States Mint, "Fun Facts Related to the Penny," usmint.gov (accessed July 2, 2018)
33. APMEX, "The 1909-S VDB Lincoln Cent – the King of Lincoln Cents," apmex.com (accessed July 2, 2018)
34. United States Mint, "History," usmint.gov, June 19, 2018
35. Jennie Cohen, "10 Things You Didn't Know about the Penny," history.com, Mar. 30, 2012
36. Army & Air Force Exchange Service, "Retail & General FAQs," aafes.com (accessed July 2, 2018)
37. David Kestenbaum, "What Is Seigniorage?," npr.org, Jan. 9, 2009
38. Business Wire, "Strong Support for the Penny in Recent Poll," businesswire.com, Apr. 25, 2019
39. Jenny Gross, "Will the Penny Survive Coronavirus? Some Hope Not," nytimes.com, July 29, 2020
|
# Is a College Education Worth It?
**Argument**
College graduates are healthier and live longer.
83% of college graduates reported being in excellent health, while 73% of high school graduates reported the same. A 2018 University of Southern California study found that adults over 65 with college degrees spent more years with “good cognition” and fewer years suffering from dementia than adults who did not complete high school. In 2008, 20% of all adults were smokers, while 9% of college graduates were smokers. 63% of 25- to 34-year-old college graduates reported exercising vigorously at least once a week, compared to 37% of high school graduates. College degrees were linked to lower blood pressure in a 30-year peer-reviewed study and to lower levels of cortisol (the stress hormone) in a Carnegie Mellon Psychology Department study. In 2008, 23% of college graduates aged 35 to 44 were obese, compared to 37% of high school graduates. College graduates, on average, live six years longer than high school graduates.
**Background**
People who argue that college is worth it contend that college graduates have higher employment rates, bigger salaries, and more work benefits than high school graduates. They say college graduates also have better interpersonal skills, live longer, have healthier children, and have proven their ability to achieve a major milestone.
People who argue that college is not worth it contend that the debt from college loans is too high and delays graduates from saving for retirement, buying a house, or getting married. They say many successful people never graduated from college and that many jobs, especially trades jobs, do not require college degrees. Read more background…
|
# Is a College Education Worth It?
**Argument**
College allows students to explore career options.
Colleges offer career services, internships, job shadowing, job fairs, and volunteer opportunities in addition to a wide variety of courses that may provide a career direction. Over 80% of college students complete internships before graduation, giving them valuable employment experience before entering the job market.
**Background**
People who argue that college is worth it contend that college graduates have higher employment rates, bigger salaries, and more work benefits than high school graduates. They say college graduates also have better interpersonal skills, live longer, have healthier children, and have proven their ability to achieve a major milestone.
People who argue that college is not worth it contend that the debt from college loans is too high and delays graduates from saving for retirement, buying a house, or getting married. They say many successful people never graduated from college and that many jobs, especially trades jobs, do not require college degrees. Read more background…
|
# Police Body Cameras - Pros & Cons - ProCon.org
**Argument**
Police body cameras improve police accountability and lower reports of police misconduct.
Police body cameras provide visual and audio evidence that can independently verify events. In Texas, a police officer was fired, charged with murder, and sentenced to a $10,000 fine and 15 years in prison after body-worn camera footage contradicted his initial statement in the Apr. 2017 shooting of an unarmed youth.
In Baltimore, Maryland, an officer was convicted of fabricating evidence and misconduct in office after being caught by body-worn cameras planting fake drug evidence.
A RAND study found that use of force by police officers dropped when officers wearing body cameras kept them recording for their entire shift. In Miami-Dade County, Florida, researchers found a 19% reduction in police officers using physical force against citizen resistance, and civil cases against the police department for use of force dropped 74%.
In Phoenix, Arizona, complaints against officers wearing cameras decreased 23%, while complaints against officers not wearing cameras increased 10.6%.
The cameras also protect police officers against false accusations of misconduct. In San Diego, California, the use of body cameras provided the necessary evidence to exonerate police officers falsely accused of misconduct. The number of severe misconduct allegations deemed false increased 2.4% with body camera footage, and the number of officers exonerated for less severe allegations related to conduct, courtesy, procedure, and service increased 6.5%.
**Background**
Police body cameras (also called body-worn cameras) are small cameras worn on a law enforcement officer’s chest or head to record interactions between the officer and the public. The cameras have a microphone to capture sound and internal data storage to save video footage for later review. [37] [41]
According to the Bureau of Justice Assistance, “[t]he video and audio recordings from BWCs [body-worn cameras] can be used by law enforcement to demonstrate transparency to their communities; to document statements, observations, behaviors, and other evidence; and to deter unprofessional, illegal, and inappropriate behaviors by both law enforcement and the public.” [41] Police body cameras are in use around the world from Australia and Uruguay to the United Kingdom and South Africa. [19] [32] [35] [36]
After the police shooting death of Michael Brown on Aug. 9, 2014 in Ferguson, Missouri, President Barack Obama requested $263 million on Dec. 1, 2014 to fund body camera programs and police training. [38] [46] As a result, the Department of Justice (DOJ) implemented the Body-Worn Camera Policy and Implementation Program (BWC-PIP). Between fiscal year (FY) 2015 and FY 2019, the BWC-PIP gave over 493 awards, worth a collective $70 million, to law enforcement agencies in 47 states, DC, Puerto Rico, and the US Virgin Islands. Agencies in Maine, Montana, and North Dakota have not been awarded federal body camera funding. [40] [42] [43] [44]
As of Oct. 29, 2018, the most recently available information, 36 states and DC had specific legislation about the use of police body cameras. At that time, another four states had pending body camera legislation. [45]
On June 7, 2021, US Deputy Attorney General Lisa Monaco, JD, directed the ATF, DEA, FBI and US Marshals “to develop and submit for review” body-worn camera policies in which agents wear cameras during “(1) a pre-planned attempt to serve an arrest warrant or other pre-planned arrest, including the apprehension of fugitives sought on state and local warrants; or (2) the execution of a search or seizure warrant or order.” [63]
|
# Should Recreational Marijuana Be Legal?
**Argument**
Commercialized marijuana will create a “Big Marijuana” industry that exploits people for profit and targets children.
“Big Marijuana” is already using similar tactics to “Big Tobacco,” which marketed cigarettes using ads that appealed to kids, including the Joe Camel cartoon character. Marijuana food products that are colorful, sweet, or branded with cartoons are most likely to attract children. Marijuana is available in kid-friendly forms such as gummy bears and lollipops, and products sometimes resemble familiar brands, such as “Buddahfinger” or “KeefKat” in wrappers that look like a Butterfinger or KitKat candy bar.
Mark A. R. Kleiman, a drug policy expert, said, “[I]f you’re in the [for-profit] cannabis business, casual users aren’t much use to you while heavy users are your best customers, accounting for the bulk of your sales… the commercial interest demands maximizing problem use.” Rosalie Liccardo Pacula, senior economist at RAND Corporation, said heavy marijuana users account for the “vast majority of the total amount sold and/or consumed.”
**Background**
More than half of US adults, over 128 million people, have tried marijuana, despite it being an illegal drug under federal law. Nearly 600,000 Americans are arrested for marijuana possession annually – more than one person per minute. Public support for legalizing marijuana went from 12% in 1969 to 66% today. Recreational marijuana, also known as adult-use marijuana, was first legalized in Colorado and Washington in 2012.
Proponents of legalizing recreational marijuana say it will add billions to the economy, create hundreds of thousands of jobs, free up scarce police resources, and stop the huge racial disparities in marijuana enforcement. They contend that regulating marijuana will lower street crime, take business away from the drug cartels, and make marijuana use safer through required testing, labeling, and child-proof packaging. They say marijuana is less harmful than alcohol, and that adults should have a right to use it if they wish.
Opponents of legalizing recreational marijuana say it will increase teen use and lead to more medical emergencies including traffic deaths from driving while high. They contend that revenue from legalization falls far short of the costs in increased hospital visits, addiction treatment, environmental damage, crime, workplace accidents, and lost productivity. They say that marijuana use harms the user physically and mentally, and that its use should be strongly discouraged, not legalized. Read more background…
|
# Bottled Water Bans - Pros & Cons - ProCon.org
**Argument**
Banning bottled water would save money, and public water fountains are convenient and plentiful.
Bottled water is expensive. It can cost between 400 and 2,000 times more than tap water, four times more than a gallon of milk, and three times more than a gallon of gasoline.
Mathematicians at Penn State University estimate that spending $20 on a reusable water bottle can save the average American up to $1,236 a year. For a family of four that amounts to nearly $5,000.
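As a rough check on the Penn State estimate, the comparison can be sketched with assumed figures. The per-bottle price and bottles-per-day values below are guesses for illustration, not numbers taken from the study:

```python
# Back-of-the-envelope version of the reusable-vs-bottled savings estimate.
# bottles_per_day and price_per_bottle are assumed values, not study data.
def annual_savings(bottles_per_day, price_per_bottle, reusable_cost):
    """Yearly dollars saved by replacing bottled water with one reusable bottle."""
    return bottles_per_day * price_per_bottle * 365 - reusable_cost

# About 2.3 bottles a day at $1.50 each, minus a $20 reusable bottle:
saved = annual_savings(2.3, 1.50, 20)   # roughly $1,239, close to the $1,236 estimate
```

The point of the sketch is that even modest assumed prices put yearly savings in the four figures, which is why the family-of-four figure approaches $5,000.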
Eliminating plastic water bottle waste would also save local governments money. According to Food & Water Watch, US cities can spend over $100 million a year to dispose of such waste. California, Oregon, and Washington spend an estimated $500 million a year removing waste from the Pacific coastline, including waste from plastic water bottles.
In San Francisco, where single-use plastic water bottles are banned, 31 water fountains have been added to public areas, with 20 more planned. New York City, where Mayor Bill de Blasio banned single-use water bottles via executive order in Feb. 2020, has 51 water fountains, with another 500 planned by 2025. With public fountains easily available, people can refill reusable water bottles.
**Background**
Americans consumed 14.4 billion gallons of bottled water in 2019, up 3.6% from 2018, in what has been a steadily increasing trend since 2010. In 2016, bottled water outsold soda for the first time and has continued to do so every year since, making it the number one packaged beverage in the United States. 2020 revenue for bottled water was $61.326 million by June 15, and the overall market is expected to grow to $505.19 billion by 2028. [50] [51] [52]
Globally, about 20,000 plastic bottles were bought every second in 2017, the majority of which contained drinking water. More than half of those bottles were not turned in for recycling, and of those recycled, only 7% were turned into new bottles. [49]
In 2013, Concord, MA, became the first US city to ban single-serve plastic water bottles, citing environmental and waste concerns. Since then, many cities, colleges, entertainment venues, and national parks have followed suit, including San Francisco, the University of Vermont, the Detroit Zoo, and the Grand Canyon National Park. [17] [26] [44]
|
# Should Tablets Replace Textbooks in K-12 Schools?
**Argument**
Print textbooks are heavy and cause injuries, while a tablet only weighs 1-2 pounds.
Pediatricians and chiropractors recommend that students carry less than 15% of their body weight in a backpack, but the combined average weight of textbooks in History, Mathematics, Science, and Reading/Language Arts exceeds this percentage at nearly all grade levels from 1-12. According to the US Consumer Product Safety Commission, more than 13,700 kids aged 5 to 18 were treated for backpack-related injuries during the 2011-12 school year. That number dropped to 6,300 in 2016 – a 54% decrease – thanks in part to the increasing use of tablets.
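The 15%-of-body-weight guideline is straightforward to apply. In the sketch below, the student weight and per-textbook weight are illustrative assumptions, not figures from the cited sources:

```python
# Quick check of the carry-weight guideline pediatricians cite (weights in pounds).
def max_backpack_weight(body_weight_lb, limit=0.15):
    """Maximum recommended backpack weight under the 15% guideline."""
    return body_weight_lb * limit

# For an assumed 90-lb middle schooler:
cap = max_backpack_weight(90)   # 13.5 lb
# Four assumed 3.5-lb hardcover textbooks total 14 lb, already over the cap,
# while a 1-2 lb tablet carrying the same material stays far below it.
```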
**Background**
Textbook publishing in the United States is an $11 billion industry, with five companies – Cengage Learning, Houghton Mifflin Harcourt, McGraw-Hill, Pearson Education, and Scholastic – capturing about 80% of this market. Tablets are an $18 billion industry with 53% of US adults, 81% of US children aged eight to 17, and 42% of US children aged under eight, owning a tablet. As tablets have become more prevalent, a new debate has formed over whether K-12 school districts should switch from print textbooks to digital textbooks on tablets and e-readers.
Proponents of tablets say that they are supported by most teachers and students, are much lighter than print textbooks, and improve standardized test scores. They say tablets can hold hundreds of textbooks, save the environment by lowering the amount of printing, increase student interactivity and creativity, and that digital textbooks are cheaper than print textbooks.
Opponents of tablets say that they are expensive, too distracting for students, easy to break, and costly/time-consuming to fix. They say that tablets contribute to eyestrain, headaches, and blurred vision, increase the excuses available for students not doing their homework, require costly Wi-Fi networks, and become quickly outdated as new technologies emerge. Read more background…
|
# Is Binge Watching Bad for You? Top 3 Pros and Cons
**Argument**
Binge-watching leads to mental health issues.
A University of Texas study found that binge-watchers were more likely to be depressed and lonely, and to have less self-control. One of the study’s authors, Yoon Hi Sung, PhD, stated: “When binge-watching becomes rampant, viewers may start to neglect their work and their relationships with others.”
Binge-watching can lead to addiction. Dr. Renee Carr, a clinical psychologist, said, “The neuronal pathways that cause heroin and sex addictions are the same as an addiction to binge watching. Your body does not discriminate against pleasure. It can become addicted to any activity or substances that consistently produces dopamine.”
A study found that rather than relieving stress, excessive TV watching is associated with regret, guilt, and feelings of failure because of a sense of wasted time. When that binge-watching session is over, the viewer may be more likely to “mourn” the loss of the show by experiencing depression, anxiety, and feelings of emptiness.
Maricarmen Vizcaino, a Research Scholar in the College of Health Solutions at the University of Arizona, stated, “you would assume that people will feel happier because they’re watching their show, or they’re [watching] some entertainment, but that’s not the case — people are more stressed out, if they’re binge-watching.”
**Background**
The first usage of the term “binge-watch” dates back to 2003, but the concept of watching multiple episodes of a show in one sitting gained popularity around 2012. Netflix’s 2013 decision to release all 13 episodes of the first season of House of Cards at once, instead of posting an episode per week, marked a new era of binge-watching streaming content. In 2015, “binge-watch” was declared the word of the year by Collins English Dictionary, which said use of the term had increased 200% in the prior year. [1] [2] [3]
73% of Americans admit to binge-watching, with the average binge lasting three hours and eight minutes. 90% of millennials and 87% of Gen Z stated they binge-watch, and 40% of those age groups binge-watch an average of six episodes of television in one sitting. [4] [5]
The coronavirus pandemic led to a sharp increase in binge-viewing: HBO, for example, saw a 65% jump in subscribers watching three or more episodes at once starting on Mar. 14, 2020, around the time when many states implemented stay-at-home measures to slow the spread of COVID-19. [28]
A 2021 Sykes survey found 38% of respondents streamed three or more hours of content on weekdays, and 48% did so on weekends. However, a Nielsen study found adults watched four or more hours of live and streaming TV a day, indicating individuals may be underestimating their TV consumption. [31]
|
# Should Tablets Replace Textbooks in K-12 Schools?
**Argument**
Tablets are unnecessary because print textbooks that are not brand new still convey relevant information to K-12 students.
A K-12 student learning from an older print textbook still learns the basics of anatomy, physics, algebra, geometry, and the US government.
**Background**
Textbook publishing in the United States is an $11 billion industry, with five companies – Cengage Learning, Houghton Mifflin Harcourt, McGraw-Hill, Pearson Education, and Scholastic – capturing about 80% of this market. Tablets are an $18 billion industry with 53% of US adults, 81% of US children aged eight to 17, and 42% of US children aged under eight, owning a tablet. As tablets have become more prevalent, a new debate has formed over whether K-12 school districts should switch from print textbooks to digital textbooks on tablets and e-readers.
Proponents of tablets say that they are supported by most teachers and students, are much lighter than print textbooks, and improve standardized test scores. They say tablets can hold hundreds of textbooks, save the environment by lowering the amount of printing, increase student interactivity and creativity, and that digital textbooks are cheaper than print textbooks.
Opponents of tablets say that they are expensive, too distracting for students, easy to break, and costly/time-consuming to fix. They say that tablets contribute to eyestrain, headaches, and blurred vision, increase the excuses available for students not doing their homework, require costly Wi-Fi networks, and become quickly outdated as new technologies emerge. Read more background…
|
# Is Social Media Good for Society?
**Argument**
Social media facilitates face-to-face interaction.
People use social media to network at in-person events and to get to know people before personal, business, and other meetings. Pew Research Center’s Internet and American Life Project found that messaging on social media leads to face-to-face interactions when plans are made via the sites, and that social media users messaged close friends an average of 39 days each year while seeing close friends in person 210 days each year.
**Background**
Around seven out of ten Americans (72%) use social media sites such as Facebook, Instagram, Twitter, LinkedIn, and Pinterest, up from 26% in 2008. [26] [189] On social media sites, users may develop biographical profiles, communicate with friends and strangers, do research, and share thoughts, photos, music, links, and more.
Proponents of social networking sites say that the online communities promote increased interaction with friends and family; offer teachers, librarians, and students valuable access to educational support and materials; facilitate social and political change; and disseminate useful information rapidly.
Opponents of social networking say that the sites prevent face-to-face communication; waste time on frivolous activity; alter children’s brains and behavior making them more prone to ADHD; expose users to predators like pedophiles and burglars; and spread false and potentially dangerous information. Read more background…
|
# Should Students Have to Wear School Uniforms?
**Argument**
The key findings used to tout the benefits of uniforms are questionable.
The oft-quoted improvements to school safety and student behavior in the Long Beach (CA) Unified School District from 1993-1995 may not have resulted from the introduction of school uniforms. The study in which the findings were published cautioned that “it is not clear that these results are entirely attributable to the uniform policy” and suggests that the introduction of new school security measures made at the same time may have been partly responsible.
**Background**
Traditionally favored by private and parochial institutions, school uniforms are being adopted by US public schools in increasing numbers. According to a 2020 report, the percentage of public schools that required school uniforms jumped from 12% in the 1999-2000 school year to 20% in the 2017-18 school year. School uniforms were most frequently required by elementary schools (23%), followed by middle (18%), and high schools (10%).
Proponents say that school uniforms make schools safer for students, create a “level playing field” that reduces socioeconomic disparities, and encourage children to focus on their studies rather than their clothes.
Opponents say school uniforms infringe upon students’ right to express their individuality, have no positive effect on behavior and academic achievement, and emphasize the socioeconomic disparities they are intended to disguise. Read more background…
|
# GMO Pros and Cons - Should Genetically Modified Organisms Be Grown?
**Argument**
Genetically modified (GM) crops have been proven safe through testing and use, and can even increase the safety of common foods.
As astrophysicist Neil deGrasse Tyson explained, “Practically every food you buy in a store for consumption by humans is genetically modified food. There are no wild, seedless watermelons. There’s no wild cows… We have systematically genetically modified all the foods, the vegetables and animals that we have eaten ever since we cultivated them. It’s called artificial selection.”
Not a single health risk associated with GMO consumption has been discovered in over 30 years of lab testing and over 15 years of field research. Martina Newell-McGoughlin, Director of the University of California Systemwide Biotechnology Research and Education Program, said that “GMOs are more thoroughly tested than any product produced in the history of agriculture.”
Over 2,000 global studies have affirmed the safety of GM crops. Trillions of meals containing GMO ingredients have been eaten by humans, with zero verified cases of illness related to the food being genetically altered.
GM crops can even be engineered to reduce natural allergens and toxins, making them safer and healthier. Molecular biologist Hortense Dodo genetically engineered a hypoallergenic peanut by suppressing the protein that can lead to a deadly reaction in people with peanut allergies.
**Background**
Selective breeding techniques have been used to alter the genetic makeup of plants for thousands of years. The earliest forms of selective breeding were simple and have persisted: farmers save and plant only the seeds of plants that produced the tastiest or largest (or otherwise preferable) results. In 1866, Gregor Mendel, an Austrian monk, discovered the basic principles of heredity by crossbreeding peas. More recently, genetic engineering has allowed DNA from one species to be inserted into a different species to create genetically modified organisms (GMOs). [1] [2] [53] [55]
To create a GMO plant, scientists follow these basic steps over several years:
1. Identify the desired trait and find an animal or plant with that trait. For example, scientists looking to make corn more insect-resistant identified a gene in a soil bacterium (Bacillus thuringiensis, or Bt) that naturally produces an insecticide commonly used in organic agriculture.
2. Copy the specific gene for the desired trait.
3. Insert the specific gene into the DNA of the plant scientists want to change. In the above example, the insecticide gene from Bacillus thuringiensis was inserted into corn.
4. Grow the new plant and perform tests for safety and the desired trait. [55]
According to the Genetic Literacy Project, “The most recent data from the International Service for the Acquisition of Agri-biotech Applications (ISAAA) shows that more than 18 million farmers in 29 countries, including 19 developing nations, planted over 190 million hectares (469.5 million acres) of GMO crops in 2019.” The organization stated that a “majority” of European countries and Russia, among other countries, ban the crops. However, most countries that ban the growth of GMO crops, allow their import. Europe, for example, imports 30 million tons of corn and soy animal feeds every year, much of which is GMO. [58]
In the United States, the health and environmental safety standards for GM crops are regulated by the Environmental Protection Agency (EPA), the Food and Drug Administration (FDA), and the US Department of Agriculture (USDA). Between 1985 and Sep. 2013, the USDA approved over 17,000 different GM crops for field trials, including varieties of corn, soybean, potato, tomato, wheat, canola, and rice, with various genetic modifications such as herbicide tolerance; insect, fungal, and drought resistance; and flavor or nutrition enhancement. [44] [45]
In 1994, the “FLAVR SAVR” tomato became the first genetically modified food to be approved for public consumption by the FDA. The tomato was genetically modified to increase its firmness and extend its shelf life. [51]
Recently, the term “bioengineered” food has come into popularity, under the argument that almost all food has been “genetically modified” via selective breeding or other basic growing methods. Bioengineered food refers specifically to food that has undergone modification using rDNA technology, but does not include food genetically modified by basic cross-breeding or selective breeding. As of Jan. 10, 2022, the USDA listed 12 bioengineered products available in the US: alfalfa, Arctic apples, canola, corn, cotton, BARI Bt Begun varieties of eggplant, ringspot virus-resistant varieties of papaya, pink flesh varieties of pineapple, potato, AquAdvantage salmon, soybean, summer squash, and sugarbeet. [56] [57]
The National Bioengineered Food Disclosure Standard established mandatory national standards for labeling foods with genetically engineered ingredients in the United States. The Standard was implemented on Jan. 1, 2020 and compliance became mandatory on Jan. 1, 2022. [46]
49% of US adults believe that eating GMO foods is “worse” for one’s health, 44% say they are “neither better nor worse,” and 5% believe they are “better,” according to a 2018 Pew Research Center report. [9]
|
# Should More Gun Control Laws Be Enacted?
**Argument**
Civilians, including hunters, should not own military-grade firearms or firearm accessories.
President Ronald Reagan and others did not think the AR-15 military rifle (also called M16s by the Air Force) should be owned by civilians and, when the AR-15 was included in the assault weapons ban of 1994 (which expired on Sep. 13, 2004), the NRA supported the legislation. The Second Amendment was written at a time when the most common arms were long rifles that had to be reloaded after every shot. Civilians today have access to folding, detaching, or telescoping stocks that make the guns more easily concealed and carried; silencers to muffle gunshot sounds; flash suppressors to fire in low-light conditions without being blinded by the flash and to conceal the shooter’s location; or grenade launcher attachments. Jonathan Lowy, Director of Legal Action Project at the Brady Center to Prevent Gun Violence, stated, “These are weapons that will shred your venison before you eat it, or go through the walls of your apartment when you’re trying to defend yourself… [they are] made for mass killing, but not useful for law-abiding citizens.”
**Background**
The United States has 120.5 guns per 100 people, or about 393,347,000 guns, which is the highest total and per capita number in the world. 22% of Americans own one or more guns (35% of men and 12% of women). America’s pervasive gun culture stems in part from its colonial history, revolutionary roots, frontier expansion, and the Second Amendment, which states: “A well regulated militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.”
Proponents of more gun control laws state that the Second Amendment was intended for militias; that gun violence would be reduced; that gun restrictions have always existed; and that a majority of Americans, including gun owners, support new gun restrictions.
Opponents say that the Second Amendment protects an individual’s right to own guns; that guns are needed for self-defense from threats ranging from local criminals to foreign invaders; and that gun ownership deters crime rather than causes more crime. Read more background…
|
# Is Human Activity Primarily Responsible for Global Climate Change?
**Argument**
Human-caused global warming is changing weather systems and making heat waves and droughts more intense and more frequent.
A National Climate Assessment report said human-caused climate changes, such as increased heat waves and drought, “are visible in every state.” The American Meteorological Society found that anthropogenic climate change “greatly increased” (up to 10 times) the risk for extreme heat waves. Globally, 75% of extremely hot days are attributable to warming caused by human activity. A World Weather Attribution study found that anthropogenic climate change increased the likelihood of wildfires such as the ones that raged across Australia in 2019-2020 by at least 30% since 1900.
**Background**
Average surface temperatures on earth have risen more than 2°F over the past 100 years. During this time period, atmospheric levels of greenhouse gases such as carbon dioxide (CO2) and methane (CH4) have notably increased. This site explores the debate on whether climate change is caused by humans (also known as anthropogenic climate change).
The pro side argues rising levels of atmospheric greenhouse gases are a direct result of human activities such as burning fossil fuels, and that these increases are causing significant and increasingly severe climate changes including global warming, loss of sea ice, sea level rise, stronger storms, and more droughts. They contend that immediate international action to reduce greenhouse gas emissions is necessary to prevent dire climate changes.
The con side argues human-generated greenhouse gas emissions are too small to substantially change the earth’s climate and that the planet is capable of absorbing those increases. They contend that warming over the 20th century resulted primarily from natural processes such as fluctuations in the sun’s heat and ocean currents. They say the theory of human-caused global climate change is based on questionable measurements, faulty climate models, and misleading science. Read more background…
|
# Should the Federal Minimum Wage Be Increased?
**Argument**
Raising the minimum wage would decrease employee benefits and increase tax payments.
According to James Sherk, MA, Senior Policy Analyst at the Heritage Foundation, a single mother working full time and earning the federal minimum wage of $7.25 an hour would be over $260 a month worse off if the minimum wage were raised to $10.10: “While her market income rises by $494, she loses $71 in EITC [earned income tax credit] refunds, pays $37 more in payroll taxes and $45 more in state income taxes. She also loses $88 in food stamp benefits and $528 in child-care subsidies.” A 2014 study of 400 US Chief Financial Officers (CFOs) by Campbell Harvey, PhD, J. Paul Sticht Professor of International Business at Duke University, found that 40% of CFOs would reduce employee benefits if the minimum wage were raised to $10 an hour. Some staff at the Seattle-area nonprofit organization Full Life Care asked for a reduction in hours after the minimum wage was raised, concerned that the increase would cost them their housing subsidies while still leaving them unable to afford market-rate rents.
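The arithmetic behind Sherk’s estimate can be checked directly. The sketch below uses only the monthly dollar amounts quoted in his statement; it is a back-of-the-envelope tally, not a full tax-and-benefit model.

```python
# Back-of-the-envelope check of the Sherk estimate quoted above.
# All figures are the monthly amounts cited in the text.
market_income_gain = 494          # higher wages from $7.25 -> $10.10

losses = {
    "EITC refund reduction": 71,
    "additional payroll taxes": 37,
    "additional state income taxes": 45,
    "food stamp (SNAP) reduction": 88,
    "child-care subsidy reduction": 528,
}

net_change = market_income_gain - sum(losses.values())
print(net_change)  # -275: over $260/month worse off, as the text states
```

The lost benefits and added taxes ($769) outweigh the $494 wage gain, which is where the “over $260 a month worse off” figure comes from.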
**Background**
The federal minimum wage was introduced in 1938 during the Great Depression under President Franklin Delano Roosevelt. It was initially set at $0.25 per hour and has been increased by Congress 22 times, most recently in 2009 when it went from $6.55 to $7.25 an hour. 29 states plus the District of Columbia (DC) have a minimum wage higher than the federal minimum wage. 1.8 million workers (or 2.3% of the hourly paid working population) earn the federal minimum wage or below.
Proponents of a higher minimum wage state that the current federal minimum wage of $7.25 per hour is too low for anyone to live on; that a higher minimum wage will help create jobs and grow the economy; that the declining value of the minimum wage is one of the primary causes of wage inequality between low- and middle-income workers; and that a majority of Americans, including a slim majority of self-described conservatives, support increasing the minimum wage.
Opponents say that many businesses cannot afford to pay their workers more, and will be forced to close, lay off workers, or reduce hiring; that increases have been shown to make it more difficult for low-skilled workers with little or no work experience to find jobs or become upwardly mobile; and that raising the minimum wage at the federal level does not take into account regional cost-of-living variations where raising the minimum wage could hurt low-income communities in particular. Read more background…
|
# Should All Americans Have the Right (Be Entitled) to Health Care?
**Argument**
Providing a right to health care could raise taxes.
In European countries with a universal right to health care, the cost of coverage is paid through higher taxes. In the United Kingdom and other European countries, payroll taxes average 37% – much higher than the 15.3% payroll taxes paid by the average US worker. According to Paul R. Gregory, PhD, a Research Fellow at the Hoover Institution, financing a universal right to health care in the United States would cause payroll taxes to double.
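As a rough sanity check on Gregory’s “doubling” claim, one can compare a doubled US payroll tax rate with the cited European average. This is an illustrative calculation using only the two rates quoted above, not a fiscal model.

```python
# Compare the cited rates: even doubling the US payroll tax rate
# leaves it below the 37% average cited for the UK and other
# European countries with universal coverage.
us_payroll_rate = 15.3        # percent, combined rate cited for the average US worker
european_average = 37.0       # percent, as cited above

doubled_us_rate = 2 * us_payroll_rate
print(doubled_us_rate)                     # 30.6
print(doubled_us_rate < european_average)  # True
```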
**Background**
27.5 million people in the United States (8.5% of the US population) do not have health insurance. Among the 91.5% who do have health insurance, 67.3% have private insurance while 34.4% have government-provided coverage through programs such as Medicaid or Medicare. Employer-based health insurance is the most common type of coverage, applying to 55.1% of the US population. The United States is the only nation among the 37 OECD (Organization for Economic Co-operation and Development) nations that does not have universal health care either in practice or by constitutional right.
Proponents of the right to health care say that no one in one of the richest nations on earth should go without health care. They argue that a right to health care would stop medical bankruptcies, improve public health, reduce overall health care spending, help small businesses, and that health care should be an essential government service.
Opponents argue that a right to health care amounts to socialism and that it should be an individual’s responsibility, not the government’s role, to secure health care. They say that government provision of health care would decrease the quality and availability of health care, and would lead to larger government debt and deficits. Read more background…
|
# Is Human Activity Primarily Responsible for Global Climate Change?
**Argument**
The specific type of CO2 that is increasing in earth’s atmosphere can be directly connected to human activity.
We can tell that CO2 produced by humans burning fossil fuels such as oil and coal differs from naturally occurring CO2 by examining its isotopic ratio. According to the Intergovernmental Panel on Climate Change (IPCC), 20th-century measurements of CO2 isotope ratios in the atmosphere confirm that rising CO2 levels are the result of human activity, not gas coming off the oceans, volcanic activity, or other natural causes.
The US Environmental Protection Agency says that “Human activities are responsible for almost all of the increase in greenhouse gases in the atmosphere over the last 150 years.”
**Background**
Average surface temperatures on earth have risen more than 2°F over the past 100 years. During this time period, atmospheric levels of greenhouse gases such as carbon dioxide (CO2) and methane (CH4) have notably increased. This site explores the debate on whether climate change is caused by humans (also known as anthropogenic climate change).
The pro side argues rising levels of atmospheric greenhouse gases are a direct result of human activities such as burning fossil fuels, and that these increases are causing significant and increasingly severe climate changes including global warming, loss of sea ice, sea level rise, stronger storms, and more droughts. They contend that immediate international action to reduce greenhouse gas emissions is necessary to prevent dire climate changes.
The con side argues human-generated greenhouse gas emissions are too small to substantially change the earth’s climate and that the planet is capable of absorbing those increases. They contend that warming over the 20th century resulted primarily from natural processes such as fluctuations in the sun’s heat and ocean currents. They say the theory of human-caused global climate change is based on questionable measurements, faulty climate models, and misleading science. Read more background…
|
# Should Tablets Replace Textbooks in K-12 Schools?
**Argument**
E-textbooks on tablets cost less than print textbooks.
According to the School Library Journal, the average price of a K-12 print textbook is approximately $70 compared with $45-$55 for a 6-year subscription to a digital textbook. E-textbook prices continue to drop with VitalSource reporting, on average, a 31% drop in price between 2016 and 2018. Tablet prices also continue to drop, making them increasingly affordable. Tablets cost on average $489 in 2011 compared with $299 in 2018.
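The comparison above can be sketched numerically. The only figures used are the ones quoted (print price, subscription price range, and 2011/2018 tablet prices); the per-title saving and the tablet price decline are simple derived quantities.

```python
# Per-title saving and tablet price decline, using the figures above.
print_price = 70.0                       # average K-12 print textbook
digital_low, digital_high = 45.0, 55.0   # 6-year e-textbook subscription

saving_range = (print_price - digital_high, print_price - digital_low)
print(saving_range)           # (15.0, 25.0) saved per title

tablet_2011, tablet_2018 = 489.0, 299.0
drop = (tablet_2011 - tablet_2018) / tablet_2011
print(round(drop * 100, 1))   # 38.9 percent cheaper over seven years
```

So each digital title saves roughly $15-$25 against its print counterpart, while the hardware needed to read it fell nearly 39% in price between 2011 and 2018.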
**Background**
Textbook publishing in the United States is an $11 billion industry, with five companies – Cengage Learning, Houghton Mifflin Harcourt, McGraw-Hill, Pearson Education, and Scholastic – capturing about 80% of this market. Tablets are an $18 billion industry with 53% of US adults, 81% of US children aged eight to 17, and 42% of US children aged under eight, owning a tablet. As tablets have become more prevalent, a new debate has formed over whether K-12 school districts should switch from print textbooks to digital textbooks on tablets and e-readers.
Proponents of tablets say that they are supported by most teachers and students, are much lighter than print textbooks, and improve standardized test scores. They say tablets can hold hundreds of textbooks, save the environment by lowering the amount of printing, increase student interactivity and creativity, and that digital textbooks are cheaper than print textbooks.
Opponents of tablets say that they are expensive, too distracting for students, easy to break, and costly/time-consuming to fix. They say that tablets contribute to eyestrain, headaches, and blurred vision, increase the excuses available for students not doing their homework, require costly Wi-Fi networks, and become quickly outdated as new technologies emerge. Read more background…
|
# Should Pit Bulls Be Banned? Top 3 Pros and Cons
**Argument**
There is no evidence BSL makes communities safer.
BSL is ineffective because it treats the result (a dog bite) instead of the cause (bad animal owners). For example, Miami-Dade County, Florida, has had a pit bull ban since the 1980s, but the county still euthanizes about 800 illegally owned pit bulls per year. Aragon, Spain, saw no changes in dog bite numbers five years before and five years after BSL was enacted.
People who are breeding or training dogs for illegal fighting or to protect illegal activities will simply turn to another dog breed if pit bulls are banned. For example, following a 2005 pit bull ban in Council Bluffs, Iowa, Boxer and Labrador Retriever bites increased, as did overall dog bites.
In 1990 when pit bulls were banned in Winnipeg, Canada, Rottweiler bites immediately increased. When the city changed the law in 2000 to be breed-neutral, all dog bites decreased.
**Background**
Breed-specific legislation (BSL) is a “blanket term for laws that regulate or ban certain dog breeds in an effort to decrease dog attacks on humans and other animals,” according to the American Society for the Prevention of Cruelty to Animals (ASPCA). The laws are also called pit bull bans and breed-discriminatory laws. [1]
The legislation frequently covers any dog deemed a “pit bull,” which can include American Pit Bull Terriers, American Staffordshire Terriers, Staffordshire Bull Terriers, English Bull Terriers, and pit bull mixes, though any dog that resembles a pit bull or pit bull mix can be included in the bans. Other dogs are also sometimes regulated, including American Bulldogs, Rottweilers, Mastiffs, Dalmatians, Chow Chows, German Shepherds, and Doberman Pinschers, as well as mixes of these breeds or, again, dogs that simply resemble the restricted breeds. [1]
The term “pit bull” refers to a dog with certain characteristics, rather than a specific breed. Generally, the dogs have broad heads and muscular bodies. Pit bulls are targeted because of their history in dog fighting. [2]
Dog fighting dates to at least 43 CE, when the Romans invaded Britain, and both sides brought fighting dogs to the war. The Romans believed the British to have better-trained fighting dogs and began importing (and later exporting) the dogs for war and entertainment wherein the dogs were made to fight against wild animals, including elephants. From the 12th century until the 19th century, dogs were used for baiting chained bears and bulls. In 1835, England outlawed baiting, which then increased the popularity of dog-on-dog fights. [3] [4]
Fighting dogs arrived in the United States in 1817, whereupon Americans crossbred several breeds to create the American Pit Bull. The United Kennel Club endorsed the fights and provided referees. Dog fighting was legal in most US states until the 1860s, and it was not completely outlawed in all states until 1976. Today, dog fighting is a felony offense in all 50 states, though the fights thrive in illegal underground venues. [3] [4]
More than 700 cities in 29 states have breed-specific legislation, while 20 states do not allow breed-specific legislation, and one allows no new legislation after 1990, as of Apr. 1, 2020. [1]
|
# Reparations for Slavery - Pros & Cons - ProCon.org
**Argument**
Slavery left African American communities at the mercy of the “slave health deficit,” which should be addressed with reparations.
Health Policy Research Scholar Brittney Butler, PhD, explains, “The health effects of slavery and racism in the U.S has transcended generations and laid the foundation of poor health for Black families in the U.S…. The connection between health disparities and racism dates back to slavery. The Slave trade introduced European diseases to African and Indigenous populations, and prior to arriving to these shores, the long journey to North America and the horrible ship conditions increased risk for disease and mortality with the leading cause of death being dysentery. If they survived the treacherous journey, they were forced to live and work under inhumane conditions that further exacerbated their risk for chronic and respiratory diseases. During slavery, white physicians experimented on, exploited and discarded Black bodies under the auspice of advancing medicine … once the enslaved people were free, they had minimal access to health care and other basic necessities.”
Post-slavery, health disparities continued in terms of differences in access to and care within the health care system, as well as higher levels of disease due to higher rates of exposure and differing life opportunities. Black Americans are more likely to be underinsured or uninsured, and less likely to have a primary care physician. High blood pressure, asthma, strokes, heart disease, cancer, and diabetes are more prevalent among African Americans than white Americans.
Oliver T. Brooks, President of the National Medical Association, stated, “It is known that the social determinants of health (SDoH) play as important a role in a person’s health as genetics or medical treatment. There are broadly six SDoH categories: economic stability, physical environment, education, food community and social content and healthcare systems. African Americans are adversely affected in this arena.”
Brooks continued, in terms of COVID-19, “with poorer housing we cannot generally socially isolate at home each in a different wing of the house; in some instances, there may be six people in a 2-bedroom apartment. We work in types of employment that will not allow us to work from home; going out to work puts one at a higher risk of acquiring the infection. Many of these jobs also do not provide healthcare coverage.” Reparations could bolster African American healthcare as well as the underlying social conditions that have resulted in the health disparity.
**Background**
Reparations are payments (monetary and otherwise) given to a group that has suffered harm. For example, Japanese-Americans who were interned in the United States during World War II have received reparations. [1]
Arguments for reparations for slavery date to at least Jan. 12, 1865, when President Abraham Lincoln’s Secretary of War Edwin M. Stanton and Union General William T. Sherman met with 20 African American ministers in Savannah, Georgia. Stanton and Sherman asked 12 questions, including: “State in what manner you think you can take care of yourselves, and how can you best assist the Government in maintaining your freedom.” Appointed spokesperson, Baptist minister, and former slave Garrison Frazier replied, “The way we can best take care of ourselves is to have land, and turn it and till it by our own labor … and we can soon maintain ourselves and have something to spare … We want to be placed on land until we are able to buy it and make it our own.” [2] [3]
On Jan. 16, 1865, Sherman issued Special Field Order No. 15 that authorized 400,000 acres of coastal land from Charleston, South Carolina to the St. John’s River in Florida to be divided into forty-acre plots and given to newly freed slaves for their exclusive use. The land had been confiscated by the Union from white slaveholders during the Civil War. Because Sherman later gave orders for the Army to lend mules to the freedmen, the phrase “forty acres and a mule” became popular. [1] [4]
However, shortly after Vice President Andrew Johnson became president following Abraham Lincoln’s assassination on Apr. 14, 1865, he worked to rescind the order and revert the land back to the white landowners. At the end of the Civil War, the federal government had confiscated 850,000 acres of former Confederates’ land. By mid-1867, all but 75,000 acres had been returned to the Confederate owners. [1] [4] [5]
Other efforts and arguments have been made to institute or deny reparations to descendants of slaves since the 1860s, and the issue remains divisive and hotly debated. An Oct. 2019 Associated Press-NORC Center for Public Affairs Research poll found 29% of Americans overall approved of reparations. When separated by race, the poll showed 74% of black Americans, 44% of Hispanics, and 15% of white Americans were in favor of reparations. [6]
While Americans generally think of reparations as monetary, Michelle Bachelet, MD, UN High Commissioner for Human Rights, in the office’s June 1, 2021 annual report, stated: “Measures taken to address the past should seek to transform the future. Structures and systems that were designed and shaped by enslavement, colonialism and successive racially discriminatory policies and systems must be transformed. Reparations should not only be equated with financial compensation. They also comprise measures aimed at restitution, rehabilitation, satisfaction and guarantees of non-repetition, including, for example, formal acknowledgment and apologies, memorialization and institutional and educational reforms. Reparations are essential for transforming relationships of discrimination and inequity and for mutually committing to and investing in a stronger, more resilient future of dignity, equality and non-discrimination for all. Reparatory justice requires a multipronged approach that is grounded in international human rights law. Reparations are one element of accountability and redress. For every violation, there should be repair of the harms caused through adequate, effective and prompt reparation. Reparations help to promote trust in institutions and the social reintegration of people whose rights may have been discounted, providing recognition to victims and survivors as rights holders.” [46]
President Obama outlined the political difficulty of reparations on his podcast with Bruce Springsteen, “Renegades: Born in the U.S.A.” He said, “So, if you ask me theoretically: ‘Are reparations justified?’ The answer is yes. There’s not much question that the wealth of this country, the power of this country was built in significant part — not exclusively, maybe not even the majority of it — but a large portion of it was built on the backs of slaves. What I saw during my presidency was the politics of white resistance and resentment, the talk of welfare queens and the talk of the undeserving poor and the backlash against affirmative action… all that made the prospect of actually proposing any kind of coherent, meaningful reparations program struck me as, politically, not only a non-starter but potentially counterproductive.” [47]
An Oct. 2021 Gallup Center on Black Voices survey found 62% of American adults believe the federal government has an obligation to reduce the effects of slavery; 37% believe the government has no such obligation. Of those who support government action, 65% believe all black Americans should benefit, while 32% believe only the descendants of enslaved people should benefit. [48]
|
# Is Artificial Intelligence Good for Society? Top 3 Pros and Cons
**Argument**
AI repeats and exacerbates human racism.
Facial recognition has been found to be racially biased, easily recognizing the faces of white men while wrongly identifying black women 35% of the time. In one test of Amazon’s Rekognition AI program, the software falsely matched 28 members of the US Congress with mugshots from a criminal database; 40% of the errors were people of color.
AI has also been disproportionately employed against black and brown communities, with more federal and local police surveillance cameras in neighborhoods of color, and more social media surveillance of Black Lives Matter and other black activists. The same technologies are used for housing and employment decisions and TSA airport screenings. Some cities, including Boston and San Francisco, have banned police use of facial recognition for these reasons.
One particular AI program tasked with predicting recidivism risk for US courts, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), was found to falsely label black defendants as high risk at twice the rate of white defendants, and to falsely label white defendants as low risk more often.
In China, facial recognition AI has been used to track Uyghurs, a largely Muslim minority. The US and other governments have accused the Chinese government of genocide and forced labor in Xinjiang where a large population of Uyghurs live.
Beyond facial recognition, online AI algorithms frequently fail to recognize and censor racial slurs, such as a recent incident in an Amazon product description for a black doll. AI is also incapable of distinguishing between when the N-word is being used as a slur and when it’s being used culturally by a black person. AI algorithms have also been found to show a “persistent anti-Muslim bias,” by associating violence with the word “Muslim” at a higher rate than with words describing other religions including Christians, Jews, Sikhs, or Buddhists.
**Background**
Artificial intelligence (AI) is the use of “computers and machines to mimic the problem-solving and decision-making capabilities of the human mind,” according to IBM. [1]
The idea of AI goes back at least 2,700 years. As Adrienne Mayor, research scholar, folklorist, and science historian at Stanford University, explained: “Our ability to imagine artificial intelligence goes back to the ancient times. Long before technological advances made self-moving devices possible, ideas about creating artificial life and robots were explored in ancient myths.” [2]
Mayor noted that the myths about Hephaestus, the Greek god of invention and blacksmithing, included precursors to AI. For example, Hephaestus created the giant bronze man, Talos, which had a mysterious life force from the gods called ichor. Hephaestus also created Pandora and her infamous box, as well as a set of automated servants made of gold that were given the knowledge of the gods. Mayor concluded, “Not one of those myths has a good ending once the artificial beings are sent to Earth. It’s almost as if the myths say that it’s great to have these artificial things up in heaven used by the gods. But once they interact with humans, we get chaos and destruction.” [2]
The modern version of AI largely began when Alan Turing, who contributed to breaking the Nazis’ Enigma code during World War II, created the Turing test to determine if a computer is capable of “thinking.” The value and legitimacy of the test have long been the subject of debate. [1] [3] [4]
The “Father of Artificial Intelligence,” John McCarthy, coined the term “artificial intelligence” when he, with Marvin Minsky and Claude Shannon, proposed a 1956 summer workshop on the topic at Dartmouth College. McCarthy defined artificial intelligence as “the science and engineering of making intelligent machines.” He later created the computer programming language LISP (which is still used in AI), hosted computer chess games against human Russian opponents, and developed the first computer with “hand-eye” capability, all important building blocks for AI. [1] [5] [6] [7]
The first AI program designed to mimic how humans solve problems, Logic Theorist, was created by Allen Newell, J.C. Shaw, and Herbert Simon in 1955-1956. The program was designed to solve problems from Principia Mathematica (1910-13) written by Alfred North Whitehead and Bertrand Russell. [1] [8]
In 1958, Frank Rosenblatt invented the Perceptron, which he claimed was “the first machine which is capable of having an original idea.” Though the machine was hounded by skeptics, it was later praised as the “foundations for all of this artificial intelligence.” [1] [9]
As computers became cheaper in the 1960s and 70s, AI programs such as Joseph Weizenbaum’s ELIZA flourished, and US government agencies including the Defense Advanced Research Projects Agency (DARPA) began to fund AI-related research. But computers were still too weak to manage the language tasks researchers asked of them. Another influx of funding in the 1980s and early 90s furthered the research, including the invention of expert systems by Edward Feigenbaum and Joshua Lederberg. But progress again waned with a drop in government funding. [10]
In 1997, Garry Kasparov, reigning world chess champion and grand master, was defeated by IBM’s Deep Blue AI computer program, a huge step for AI researchers. More recently, advances in computer storage limits and speeds have opened new avenues for AI research and implementation, such as aiding in scientific research and forging new paths in medicine for patient diagnosis, robotic surgery, and drug development. [1] [10] [11] [12]
Now, artificial intelligence is used for a variety of everyday implementations including facial recognition software, online shopping algorithms, search engines, digital assistants like Siri and Alexa, translation services, automated safety functions on cars (and the promised self-driving cars of the future), cybersecurity, airport body scanning security, poker playing strategy, and fighting disinformation on social media, among others. [13] [58]
|
# Is Human Activity Primarily Responsible for Global Climate Change?
**Argument**
Deep ocean currents, not human activity, are a primary driver of natural climate warming and cooling cycles.
Over the 20th century there have been two Arctic warming periods with a cooling period (1940-1970) in between. According to a study in Geophysical Research Letters, natural shifts in the ocean currents are the major cause of these climate changes, not human-generated greenhouse gases. William Gray, PhD, Emeritus Professor of Atmospheric Science at Colorado State University, said most of the climate changes over the last century are natural and “due to multi-decadal and multi-century changes in deep global ocean currents.”
Global cooling from 1940 to the 1970s, and warming from the 1970s to 2008, coincided with fluctuations in ocean currents and cloud cover driven by the Pacific Decadal Oscillation (PDO) – a naturally occurring rearrangement in atmospheric and oceanic circulation patterns. According to Don Easterbrook, PhD, Professor Emeritus of Geology at Western Washington University, the “PDO cool mode has replaced the warm mode in the Pacific Ocean, virtually assuring us of about 30 years of global cooling, perhaps much deeper than the global cooling from about 1945 to 1977.”
**Background**
Average surface temperatures on earth have risen more than 2°F over the past 100 years. During this time period, atmospheric levels of greenhouse gases such as carbon dioxide (CO2) and methane (CH4) have notably increased. This site explores the debate on whether climate change is caused by humans (also known as anthropogenic climate change).
The pro side argues rising levels of atmospheric greenhouse gases are a direct result of human activities such as burning fossil fuels, and that these increases are causing significant and increasingly severe climate changes including global warming, loss of sea ice, sea level rise, stronger storms, and more droughts. They contend that immediate international action to reduce greenhouse gas emissions is necessary to prevent dire climate changes.
The con side argues human-generated greenhouse gas emissions are too small to substantially change the earth’s climate and that the planet is capable of absorbing those increases. They contend that warming over the 20th century resulted primarily from natural processes such as fluctuations in the sun’s heat and ocean currents. They say the theory of human-caused global climate change is based on questionable measurements, faulty climate models, and misleading science. Read more background…
|
# Banned Books Pros and Cons - Top 3 Arguments For and Against
**Argument**
Books are a portal to different life experiences and reading encourages empathy and social-emotional development.
One study found that reading J.K. Rowling’s Harry Potter series, which is frequently challenged for religious concerns about witchcraft, “improved attitudes” about immigrants, homosexuals, and refugees.
Another study found that reading narrative fiction helped readers understand their peers and raised social abilities.
A study published in Basic and Applied Social Psychology found that people who read a story about a Muslim woman were less likely to make broad judgments based on race.
Neil Gaiman, author of the frequently challenged novel Neverwhere, among other books, stated that fiction “build[s] empathy… You get to feel things, visit places and worlds you would never otherwise know. You learn that everyone else out there is a me, as well. You’re being someone else, and when you return to your own world, you’re going to be slightly changed. Empathy is a tool for building people into groups, for allowing us to function as more than self-obsessed individuals.”
**Background**
The American Library Association (ALA) has tracked book challenges, which are attempts to remove or restrict materials, since 1990. In 2020, the ALA recorded 156 reported book challenges in the United States, a significant decrease from the 377 reported challenges in 2019, perhaps due to the COVID-19 pandemic. However, challenges jumped to an all-time high in 2021 with 729 challenges involving a total of 1,597 books. [22] [27] [28]
In most years, about 10% of the reported challenges result in removal or ban from the school or library. However, in 2016, five of the top ten most challenged books were removed. The ALA estimates that only about 3% to 18% of challenges are reported to its Office for Intellectual Freedom, meaning that the actual number of attempts to ban books is likely much higher. [1] [24]
In 2021, challenges were most frequently brought by parents (39%), followed by patrons (24%), a board or administration (18%), librarians or teachers (6%), elected officials (2%), and students (1%). Books were most often challenged at school libraries (44%), public libraries (37%), schools (18%), and academic libraries (1%). [30]
Sexually explicit content, offensive language, and “unsuited to any age group” are the top three reasons cited for requesting a book be removed. The percentage of Americans who thought any books should be banned increased from 18% in 2011 to 28% in 2015, and 60% of people surveyed believed that children should not have access to books containing explicit language in school libraries, according to The Harris Poll. A 2022 poll found 71% disagreed with efforts to have books removed, including 75% of Democrats, 58% of independents, and 70% of Republicans. [1] [3] [28]
|
# Should Recreational Marijuana Be Legal?
**Argument**
The United States has signed international treaties that prevent us from legalizing marijuana.
Three United Nations treaties set worldwide drug controls. As a party to the treaties, the United States has agreed to limit the use of marijuana “exclusively to medical and scientific purposes.” The move by some US states to legalize adult-use marijuana has upset the UN monitoring organization, which stated that legalization “cannot be reconciled with the legal obligation” to uphold the Single Convention treaty.
Legalizing marijuana puts the United States in a position of weakness when we need to hold other nations accountable to legal agreements. “It is a path the United States—with its strong interest in international institutions and the rule of law—should tread with great caution,” wrote Wells Bennett, a Fellow in National Security Law at the Brookings Institution, and John Walsh, Senior Associate at the Washington Office on Latin America.
**Background**
More than half of US adults, over 128 million people, have tried marijuana, despite it being an illegal drug under federal law. Nearly 600,000 Americans are arrested for marijuana possession annually – more than one person per minute. Public support for legalizing marijuana went from 12% in 1969 to 66% today. Recreational marijuana, also known as adult-use marijuana, was first legalized in Colorado and Washington in 2012.
Proponents of legalizing recreational marijuana say it will add billions to the economy, create hundreds of thousands of jobs, free up scarce police resources, and stop the huge racial disparities in marijuana enforcement. They contend that regulating marijuana will lower street crime, take business away from the drug cartels, and make marijuana use safer through required testing, labeling, and child-proof packaging. They say marijuana is less harmful than alcohol, and that adults should have a right to use it if they wish.
Opponents of legalizing recreational marijuana say it will increase teen use and lead to more medical emergencies including traffic deaths from driving while high. They contend that revenue from legalization falls far short of the costs in increased hospital visits, addiction treatment, environmental damage, crime, workplace accidents, and lost productivity. They say that marijuana use harms the user physically and mentally, and that its use should be strongly discouraged, not legalized. Read more background…
|
# Should Gay Marriage Be Legal?
**Argument**
Gay marriage is contrary to the word of God and is incompatible with the beliefs, sacred texts, and traditions of many religious groups.
The Bible, in Leviticus 18:22, states: “Thou shalt not lie with mankind, as with womankind: it is abomination,” thus condemning homosexual relationships.
The Catholic Church, United Methodist Church, Southern Baptist Convention, Church of Jesus Christ of Latter-day Saints, National Association of Evangelicals, and American Baptist Churches USA all oppose same-sex marriage.
According to a July 31, 2003 statement from the Congregation for the Doctrine of the Faith and approved by Pope John Paul II, marriage “was established by the Creator with its own nature, essential properties and purpose. No ideology can erase from the human spirit the certainty that marriage exists solely between a man and a woman.” Pope Benedict stated in Jan. 2012 that gay marriage threatened “the future of humanity itself.”
Two orthodox Jewish groups, the Orthodox Agudath Israel of America and the Orthodox Union, also oppose gay marriage, as does mainstream Islam.
In Islamic tradition, several hadiths (passages attributed to the Prophet Muhammad) condemn gay and lesbian relationships, including the sayings “When a man mounts another man, the throne of God shakes,” and “Sihaq [lesbian sex] of women is zina [illegitimate sexual intercourse].”
**Background**
This site was archived on Dec. 15, 2021. A reconsideration of the topic on this site is possible in the future.
On June 26, 2015, the US Supreme Court ruled that gay marriage is a right protected by the US Constitution in all 50 states. Prior to their decision, same-sex marriage was already legal in 37 states and Washington DC, but was banned in the remaining 13. US public opinion had shifted significantly over the years, from 27% approval of gay marriage in 1996 to 55% in 2015, the year it became legal throughout the United States, to 61% in 2019.
Proponents of legal gay marriage contend that gay marriage bans are discriminatory and unconstitutional, and that same-sex couples should have access to all the benefits enjoyed by different-sex couples.
Opponents contend that marriage has traditionally been defined as being between one man and one woman, and that marriage is primarily for procreation. Read more background…
|
# Should Gay Marriage Be Legal?
**Argument**
Marriage is not only for procreation, otherwise infertile couples or couples not wishing to have children would be prevented from marrying.
Ability or desire to create offspring has never been a qualification for marriage. From 1970 through 2012 roughly 30% of all US households were married couples without children, and in 2012, married couples without children outnumbered married couples with children by 9%.
6% of married women aged 15-44 are infertile, according to the US Centers for Disease Control and Prevention.
In a 2010 Pew Research Center survey, both married and unmarried people rated love, commitment, and companionship higher than having children as “very important” reasons to get married, and only 44% of unmarried people and 59% of married people rated having children as a very important reason.
As US Supreme Court Justice Elena Kagan noted, a marriage license would be granted to a couple in which the man and woman are both over the age of 55, even though “there are not a lot of children coming out of that marriage.”
**Background**
This site was archived on Dec. 15, 2021. A reconsideration of the topic on this site is possible in the future.
On June 26, 2015, the US Supreme Court ruled that gay marriage is a right protected by the US Constitution in all 50 states. Prior to their decision, same-sex marriage was already legal in 37 states and Washington DC, but was banned in the remaining 13. US public opinion had shifted significantly over the years, from 27% approval of gay marriage in 1996 to 55% in 2015, the year it became legal throughout the United States, to 61% in 2019.
Proponents of legal gay marriage contend that gay marriage bans are discriminatory and unconstitutional, and that same-sex couples should have access to all the benefits enjoyed by different-sex couples.
Opponents contend that marriage has traditionally been defined as being between one man and one woman, and that marriage is primarily for procreation. Read more background…
|
# Should the United States Continue Its Use of Drone Strikes Abroad?
**Argument**
Americans support drone strikes.
A Jan. 23, 2020 poll, conducted after the Jan. 3 drone strikes that killed Iranian Quds Force commander Qasem Soleimani, found that 35% of Americans agree that drone strikes are a “very effective way to achieve US foreign policy,” an increase from 23% in 2015. Fewer people believe signing international agreements (29%), sanctions (23%), or military intervention (17%) are very effective. Meanwhile, 47% supported President Trump’s decision to order the strikes that killed Soleimani and others.
According to a July 18, 2013 survey by Pew Research, 61% of Americans supported drone strikes in Pakistan, Yemen, and Somalia. Support spanned the political divide, including Republicans (69%), independents (60%), and Democrats (59%).
A Mar. 20, 2013 poll by the Gallup organization found that 65% of Americans believed the US government should “use drones to launch airstrikes in other countries against suspected terrorists” and 74% of Americans who “very” or “somewhat” closely follow news stories about drones supported the attacks.
A May 28, 2013 Christian Science Monitor/TIPP poll found that 57% of Americans supported drone strikes targeting “al Qaeda targets and other terrorists in foreign countries.”
**Background**
Unmanned aerial vehicles (UAVs), otherwise known as drones, are remotely-controlled aircraft which may be armed with missiles and bombs for attack missions. Since the World Trade Center attacks on Sep. 11, 2001 and the subsequent “War on Terror,” the United States has used thousands of drones to kill suspected terrorists in Pakistan, Afghanistan, Yemen, Somalia, and other countries.
Proponents state that drone strikes help prevent “boots on the ground” combat and make America safer, that the strikes are legal under American and international law, and that they are carried out with the support of Americans and foreign governments.
Opponents state that drone strikes kill civilians, creating more terrorists than they kill and sowing animosity in foreign countries; that the strikes are extrajudicial and illegal; and that they create a dangerous disconnect between the horrors of war and the soldiers carrying out the strikes.
|
# Should the Federal Corporate Income Tax Rate Be Raised?
**Argument**
Raising the corporate income tax rate would allow the federal government to pay for much-needed social and infrastructure programs.
Corporate taxes pay for public services and investments that help the companies succeed. By not paying their fair share of taxes, corporations transfer the tax burden to small companies and individuals.
A lower federal corporate tax rate means less government tax revenue, thus reducing federal programs, investments, and job-creating opportunities. When the Tax Reform Act of 1986 reduced the top marginal rate from 46% to 34%, the federal deficit increased from $149.7 billion to $255 billion from 1987-1993. The Congressional Budget Office estimates that President Trump’s Tax Cuts and Jobs Act will increase the projected federal deficit from $16 trillion in 2018 to $29 trillion by 2028.
Experts at the Center on Budget and Policy Priorities concluded of President Trump’s tax cuts (which included a corporate tax rate cut): “All of that tax cutting also significantly reduced federal revenues. Federal revenues as a share of the economy (gross domestic product, or GDP) stood at 20 percent in 2000. In 2019, at a similar peak in the business cycle, federal revenues had fallen to just 16.3 percent of GDP — which is far too low to support the kinds of investments needed for a 21st century economy that broadens opportunity, supports workers and helps those out of work, and ensures health care for everyone.”
**Background**
The federal corporate income tax was created in 1909, when the uniform rate was 1% for all business income above $5,000. Since then the rate has climbed as high as 52.8%, in 1969. Today’s rate is set at 21% for all companies.
Proponents of raising the corporate tax rate argue that corporations should pay their fair share of taxes and that those taxes will keep companies in the United States while allowing the US federal government to pay for much needed infrastructure and social programs.
Opponents of raising the corporate tax rate argue that an increase will weaken the economy and that the taxes will ultimately be paid by everyday people while driving corporations overseas. Read more background…
|
# Is Binge Watching Bad for You? Top 3 Pros and Cons
**Argument**
Binge-watching has health benefits like stress relief.
According to psychiatrists, binge-watching releases dopamine in the brain, which creates a feeling of pleasure and can help people to relax and relieve stress. Psychologists say that finishing a series can give viewers feelings of control and power, which can be beneficial if viewers are not feeling that in their daily lives.
John Mayer, PhD, clinical psychologist, stated, “We are all bombarded with stress from everyday living… It is hard to shut our minds down and tune out the stress and pressures. A binge can work like a steel door that blocks our brains from thinking about those constant stressors that force themselves into our thoughts.”
With the rise of at-home workouts, binge-watching can be paired with exercise. Adding a favorite show to an exercise routine can make the time pass more quickly, add motivation, and increase compliance with an exercise routine. Jan Van den Bulck, Professor of Communication and Media at the University of Michigan in Ann Arbor, stated, “Two years ago, I bought a good indoor rower and told myself I am allowed to watch whatever I want when I am on that machine, and it has helped me to row every day for 45 minutes with no feelings of guilt and no boredom. The cliffhangers work to my advantage: it makes me want to row more the next day.”
**Background**
The first usage of the term “binge-watch” dates back to 2003, but the concept of watching multiple episodes of a show in one sitting gained popularity around 2012. Netflix’s 2013 decision to release all 13 episodes of the first season of House of Cards at once, instead of posting an episode per week, marked a new era of binge-watching streaming content. In 2015, “binge-watch” was declared the word of the year by Collins English Dictionary, which said use of the term had increased 200% in the prior year. [1] [2] [3]
73% of Americans admit to binge-watching, with the average binge lasting three hours and eight minutes. 90% of millennials and 87% of Gen Z stated they binge-watch, and 40% of those age groups binge-watch an average of six episodes of television in one sitting. [4] [5]
The coronavirus pandemic led to a sharp increase in binge-viewing: HBO, for example, saw a 65% jump in subscribers watching three or more episodes at once starting on Mar. 14, 2020, around the time when many states implemented stay-at-home measures to slow the spread of COVID-19. [28]
A 2021 Sykes survey found 38% of respondents streamed three or more hours of content on weekdays, and 48% did so on weekends. However, a Nielsen study found adults watched four or more hours of live and streaming TV a day, indicating individuals may be underestimating their TV consumption. [31]
|
# Should Animals Be Used for Scientific or Commercial Testing?
**Argument**
Animals are very different from human beings and therefore make poor test subjects.
The anatomic, metabolic, and cellular differences between animals and people make animals poor models for human beings. Paul Furlong, Professor of Clinical Neuroimaging at Aston University (UK), states that “it’s very hard to create an animal model that even equates closely to what we’re trying to achieve in the human.” Thomas Hartung, Professor of Evidence-Based Toxicology at Johns Hopkins University, argues for alternatives to animal testing because “we are not 70 kg rats.”
**Background**
An estimated 26 million animals are used every year in the United States for scientific and commercial testing. Animals are used to develop medical treatments, determine the toxicity of medications, check the safety of products destined for human use, and other biomedical, commercial, and health care uses. Research on living animals has been practiced since at least 500 BC.
Proponents of animal testing say that it has enabled the development of many life-saving treatments for both humans and animals, that there is no alternative method for researching a complete living organism, and that strict regulations prevent the mistreatment of animals in laboratories.
Opponents of animal testing say that it is cruel and inhumane to experiment on animals, that alternative methods available to researchers can replace animal testing, and that animals are so different from human beings that research on animals often yields irrelevant results. Read more background…
|
# Should the Federal Minimum Wage Be Increased?
**Argument**
Raising the minimum wage would increase poverty.
A study from the Federal Reserve Bank of Cleveland found that although low-income workers see wage increases when the minimum wage is raised, “their hours and employment decline, and the combined effect of these changes is a decline in earned income… minimum wages increase the proportion of families that are poor or near-poor.” As explained by George Reisman, PhD, Professor Emeritus of Economics at Pepperdine University, “The higher wages are, the higher costs of production are. The higher costs of production are, the higher prices are. The higher prices are, the smaller the quantities of goods and services demanded and the number of workers employed in producing them.” Thomas Grennes, MA, Professor Emeritus at North Carolina State University, and Andris Strazds, MSc, Lecturer at the Stockholm School of Economics in Riga (Latvia), stated: “the net effect of higher minimum wages would be unfavorable for impoverished households, even if there are no job losses. To the extent that some poor households also lose jobs, their net losses would be greater.”
**Background**
The federal minimum wage was introduced in 1938 during the Great Depression under President Franklin Delano Roosevelt. It was initially set at $0.25 per hour and has been increased by Congress 22 times, most recently in 2009 when it went from $6.55 to $7.25 an hour. 29 states plus the District of Columbia (DC) have a minimum wage higher than the federal minimum wage. 1.8 million workers (or 2.3% of the hourly paid working population) earn the federal minimum wage or below.
Proponents of a higher minimum wage state that the current federal minimum wage of $7.25 per hour is too low for anyone to live on; that a higher minimum wage will help create jobs and grow the economy; that the declining value of the minimum wage is one of the primary causes of wage inequality between low- and middle-income workers; and that a majority of Americans, including a slim majority of self-described conservatives, support increasing the minimum wage.
Opponents say that many businesses cannot afford to pay their workers more, and will be forced to close, lay off workers, or reduce hiring; that increases have been shown to make it more difficult for low-skilled workers with little or no work experience to find jobs or become upwardly mobile; and that raising the minimum wage at the federal level does not take into account regional cost-of-living variations where raising the minimum wage could hurt low-income communities in particular. Read more background…
|