# Was Bill Clinton a Good President? **Argument** Science / Technology: Clinton cut NASA’s budget by $715 million in 1995 (about 5%) and did not restore the bulk of the money until three months before he left office. The result was a space program struggling to operate with less money for most of Clinton’s time in office. Some blame the 2003 Space Shuttle Columbia explosion on Clinton’s decision to slash NASA’s budget by an aggregate of $56 million over his presidency. **Background** William Jefferson Clinton, known as Bill Clinton, served as the 42nd President of the United States from Jan. 20, 1993 to Jan. 20, 2001. His proponents contend that under his presidency the US enjoyed the lowest unemployment and inflation rates in recent history, high home ownership, low crime rates, and a budget surplus. They give him credit for eliminating the federal deficit and reforming welfare, despite being forced to deal with a Republican-controlled Congress. His opponents say that Clinton cannot take credit for the economic prosperity experienced during his scandal-plagued presidency because it was the result of other factors. In fact, they blame his policies for the financial crisis that began in 2007. They point to his impeachment by Congress and his failure to pass universal health care coverage as further evidence that he was not a good president. Read more background…
# Animal Dissection - Pros & Cons - ProCon.org **Argument** Animal dissection is a productive and worthwhile use for dead animals. A large portion of dissected animals were already dead before being allocated for dissection. Having students dissect the animals allows for a learning opportunity instead of just wasting the animal. Bio Corp, a biological supply company, reported that more than 98% of the animals they received were already dead. Bill Wadd, Co-Owner of Bio Corp, stated, “We just take what people would throw away. Instead of throwing it in the trash, why not have students learn from it?” Most animals used in classroom dissections are purchased from biological supply companies. Some animals, such as cats, are sourced from shelters that have already euthanized the animals. However, cats and dogs account for fewer than 1% of lab animals. Fetal pigs are byproducts of the meat industry that would have otherwise been sent to a landfill. **Background** Dissecting a frog might be one of the most memorable school experiences for many students, whether they are enthusiastic participants, prefer lab time to lectures, or are conscientious objectors to dissection. The use of animal dissection in education goes back as far as the 1500s when Belgian doctor Andreas Vesalius used the practice as an instructional method for his medical students. [1] Animal dissections became part of American K-12 school curricula in the 1920s. About 75-80% of North American students will dissect an animal by the time they graduate high school. An estimated six to 12 million animals are dissected in American schools each year. In at least 21 states and DC, K-12 students have the legal option to request an alternate assignment to animal dissection.
[2] [3] [27] While frogs are the most common animal for K-12 students to dissect, students also encounter fetal pigs, cats, rabbits, guinea pigs, rats, minks, birds, turtles, snakes, crayfish, perch, starfish, and earthworms, as well as grasshoppers and other insects. Sometimes students dissect parts of animals such as sheep lungs, cows’ eyes, and bull testicles. [2] Are animal dissections in K-12 schools crucial learning opportunities that encourage science careers and make good use of dead animals? Or are animal dissections unnecessary experiments that promote environmental damage when ethical alternatives exist?
# Should the Federal Minimum Wage Be Increased? **Argument** If the minimum wage is increased, companies may use more robots and automated processes to replace service employees. If companies cannot afford to pay a higher minimum wage for low-skilled service employees, they will use automation to avoid hiring people in those positions altogether. Oxford University researchers Carl Benedikt Frey, PhD, and Michael A. Osborne, DPhil, stated in a 2013 study that “robots are already performing many simple service tasks such as vacuuming, mopping, lawn mowing, and gutter cleaning” and that “commercial service robots are now able to perform more complex tasks in food preparation, health care, commercial cleaning, and elderly care.” As attorney Andrew Woodman, JD, predicted in his blog for the Huffington Post, a minimum wage increase “could ultimately be the undoing of low-income service-industry jobs in the United States.” The Washington Post observed that as minimum wage campaigns gain traction around the country, “Many [restaurant] chains are already at work looking for ingenious ways to take humans out of the picture, threatening workers in an industry that employs 2.4 million wait staffers, nearly 3 million cooks and food preparers and many of the nation’s 3.3 million cashiers.” **Background** The federal minimum wage was introduced in 1938 during the Great Depression under President Franklin Delano Roosevelt. It was initially set at $0.25 per hour and has been increased by Congress 22 times, most recently in 2009 when it went from $6.55 to $7.25 an hour. 29 states plus the District of Columbia (DC) have a minimum wage higher than the federal minimum wage. 1.8 million workers (or 2.3% of the hourly paid working population) earn the federal minimum wage or below.
Proponents of a higher minimum wage state that the current federal minimum wage of $7.25 per hour is too low for anyone to live on; that a higher minimum wage will help create jobs and grow the economy; that the declining value of the minimum wage is one of the primary causes of wage inequality between low- and middle-income workers; and that a majority of Americans, including a slim majority of self-described conservatives, support increasing the minimum wage. Opponents say that many businesses cannot afford to pay their workers more, and will be forced to close, lay off workers, or reduce hiring; that increases have been shown to make it more difficult for low-skilled workers with little or no work experience to find jobs or become upwardly mobile; and that raising the minimum wage at the federal level does not take into account regional cost-of-living variations where raising the minimum wage could hurt low-income communities in particular. Read more background…
# Should Animals Be Used for Scientific or Commercial Testing? **Argument** Animal testing contributes to life-saving cures and treatments. The California Biomedical Research Association states that nearly every medical breakthrough in the last 100 years has resulted directly from research using animals. Animal research has contributed to major advances in treating conditions such as breast cancer, brain injury, childhood leukemia, cystic fibrosis, multiple sclerosis, tuberculosis, and more, and was instrumental in the development of pacemakers, cardiac valve substitutes, and anesthetics. **Background** An estimated 26 million animals are used every year in the United States for scientific and commercial testing. Animals are used to develop medical treatments, determine the toxicity of medications, check the safety of products destined for human use, and other biomedical, commercial, and health care uses. Research on living animals has been practiced since at least 500 BC. Proponents of animal testing say that it has enabled the development of many life-saving treatments for both humans and animals, that there is no alternative method for researching a complete living organism, and that strict regulations prevent the mistreatment of animals in laboratories. Opponents of animal testing say that it is cruel and inhumane to experiment on animals, that alternative methods available to researchers can replace animal testing, and that animals are so different from human beings that research on animals often yields irrelevant results. Read more background…
# Should Pit Bulls Be Banned? Top 3 Pros and Cons **Argument** BSL is expensive to enact. Nationwide, BSL would cost an estimated $476 million per year, including enforcement of the law, related vet and shelter care, euthanization and disposal, and legal fees. There are about 4.5 million dog bites per year, resulting in about 40 deaths, making each death cost taxpayers about $11.9 million. That’s a steep cost for a relatively small, albeit important, issue. There are about 78 million dogs in the United States, meaning that fewer than 17% of dogs bite, that bites affect less than 1.4% of the US population, and that dog bites kill less than 0.00001% of the US population. And, of course, breeds not covered by BSL also bite. One study of 35 common breeds found Chihuahuas were the most aggressive. **Background** Breed-specific legislation (BSL) is a “blanket term for laws that regulate or ban certain dog breeds in an effort to decrease dog attacks on humans and other animals,” according to the American Society for the Prevention of Cruelty to Animals (ASPCA). The laws are also called pit bull bans and breed-discriminatory laws. [1] The legislation frequently covers any dog deemed a “pit bull,” which can include American Pit Bull Terriers, American Staffordshire Terriers, Staffordshire Bull Terriers, English Bull Terriers, and pit bull mixes, though any dog that resembles a pit bull or pit bull mix can be included in the bans. Other dogs are also sometimes regulated, including American Bulldogs, Rottweilers, Mastiffs, Dalmatians, Chow Chows, German Shepherds, and Doberman Pinschers, as well as mixes of these breeds or, again, dogs that simply resemble the restricted breeds. [1] The term “pit bull” refers to a dog with certain characteristics, rather than a specific breed. Generally, the dogs have broad heads and muscular bodies. Pit bulls are targeted because of their history in dog fighting. [2] Dog fighting dates to at least 43 CE, when the Romans invaded Britain, and both sides brought fighting dogs to the war.
The Romans believed the British to have better-trained fighting dogs and began importing (and later exporting) the dogs for war and entertainment wherein the dogs were made to fight against wild animals, including elephants. From the 12th century until the 19th century, dogs were used for baiting chained bears and bulls. In 1835, England outlawed baiting, which then increased the popularity of dog-on-dog fights. [3] [4] Fighting dogs arrived in the United States in 1817, whereupon Americans crossbred several breeds to create the American Pit Bull. The United Kennel Club endorsed the fights and provided referees. Dog fighting was legal in most US states until the 1860s, and it was not completely outlawed in all states until 1976. Today, dog fighting is a felony offense in all 50 states, though the fights thrive in illegal underground venues. [3] [4] More than 700 cities in 29 states have breed-specific legislation, while 20 states do not allow breed-specific legislation, and one allows no new legislation after 1990, as of Apr. 1, 2020. [1]
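The cost arithmetic in the argument above can be checked with a short sketch. The dollar and dog figures are the ones quoted in the text; the US population value is an assumption (roughly 330 million) that the passage itself does not state:

```python
# Rough check of the BSL cost figures quoted in the argument.
BSL_ANNUAL_COST = 476_000_000    # estimated nationwide BSL cost per year, USD
DOG_BITES_PER_YEAR = 4_500_000
BITE_DEATHS_PER_YEAR = 40
DOGS_IN_US = 78_000_000
US_POPULATION = 330_000_000      # assumption; not stated in the passage

cost_per_death = BSL_ANNUAL_COST / BITE_DEATHS_PER_YEAR   # cost per fatality
bitten_pop_share = DOG_BITES_PER_YEAR / US_POPULATION     # share of people bitten per year
killed_pop_share = BITE_DEATHS_PER_YEAR / US_POPULATION   # share of people killed per year

print(f"Cost per fatality: ${cost_per_death:,.0f}")
print(f"People bitten in a year: {bitten_pop_share:.2%}")
```

Dividing $476 million by 40 deaths reproduces the $11.9 million per-death figure, and the population shares land near the rounded percentages the argument cites.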
# Internet & "Stupidity" - Pros & Cons - ProCon.org **Argument** The internet is causing us to lose the ability to perform simple tasks. “Hey, Alexa, turn on the bathroom light… play my favorite music playlist, cook rice in the Instant Pot… read me the news… what’s the weather today…” “Hey, Siri, set a timer… call my sister… get directions to Los Angeles… what time is it in Tokyo… who stars in that TV show I like…” While much of the technology is too new to have been thoroughly researched, we rely on the internet for everything from email to seeing who is at our front doors to looking up information, so much so that we forget how to or never learn to complete simple tasks. And the accessibility of information online makes us believe we are smarter than we are. In the 2018 election, Virginia state officials learned that young adults in Generation Z wanted to vote by mail but did not know where to buy stamps because they are so used to communicating online rather than via US mail. We require GPS maps narrated by the voice of a digital assistant to drive across the towns in which we have lived for years. Nora Newcombe, PhD, Professor of Psychology at Temple University, stated, “GPS devices cause our navigational skills to atrophy, and there’s increasing evidence for it. The problem is that you don’t see an overview of the area, and where you are in relation to other things. You’re not actively navigating — you’re just listening to the voice.” Millennials were more likely to use pre-prepared foods, use the internet for recipes, and use a meal delivery service. They were least likely to know offhand how to prepare lasagna, carve a turkey, or fry chicken, and fewer reported being a “good cook” than Generation X or Baby Boomers, who were less likely to rely on the internet for cooking tasks.
Using the internet to store information we previously would have committed to memory (how to roast a chicken, for example) is “offloading.” According to Benjamin Storm, PhD, Associate Professor of Psychology at the University of California at Santa Cruz, “Offloading robs you of the opportunity to develop the long-term knowledge structures that help you make creative connections, have novel insights and deepen your knowledge.” **Background** In a 2008 article for The Atlantic, Nicholas Carr asked, “Is Google Making Us Stupid?” Carr argued that the internet as a whole, not just Google, has been “chipping away [at his] capacity for concentration and contemplation.” He was concerned that the internet was “reprogramming us.” [1] However, Carr also noted that we should “be skeptical of [his] skepticism,” because maybe he’s “just a worrywart.” He explained, “Just as there’s a tendency to glorify technological progress, there’s a countertendency to expect the worst of every new tool or machine.” [1] The article, and Carr’s subsequent book, The Shallows: What the Internet Is Doing to Our Brains (2010, revised in 2020), ignited a continuing debate on and off the internet about how the medium is changing the ways we think, how we interact with text and each other, and the very fabric of society as a whole. [1] ProCon asked readers their thoughts on how the internet affects their brains and whether online information is reliable and trustworthy. While 52.7% agreed or strongly agreed that being on the internet has caused a decline in their attention span and ability to concentrate, only 21.5% thought the internet caused them to lose the ability to perform simple tasks like reading a map. [41] Only 18% believed online information was true. Nearly 60% admitted difficulty in determining if information online was truthful. And 77% desired a more effective way of managing and filtering information on the Internet to differentiate between fact, opinion, and overt disinformation. 
[41] Between Apr. 28, 2021, and Sep. 1, 2022, the survey garnered 15,740 responses. [41]
# Should the Federal Minimum Wage Be Increased? **Argument** Increasing the minimum wage would reduce poverty. A person working full time at the federal minimum wage of $7.25 per hour earns $15,080 in a year, which is 20% higher than the 2015 federal poverty level of $12,331 for a one-person household under 65 years of age but 8% below the 2015 federal poverty level of $16,337 for a single-parent family with a child under 18 years of age. According to a 2014 Congressional Budget Office report, increasing the minimum wage to $9 would lift 300,000 people out of poverty, and an increase to $10.10 would lift 900,000 people out of poverty. A 2013 study by University of Massachusetts at Amherst economist Arindrajit Dube, PhD, estimated that increasing the minimum wage to $10.10 is “projected to reduce the number of non-elderly living in poverty by around 4.6 million, or by 6.8 million when longer term effects are accounted for.” **Background** The federal minimum wage was introduced in 1938 during the Great Depression under President Franklin Delano Roosevelt. It was initially set at $0.25 per hour and has been increased by Congress 22 times, most recently in 2009 when it went from $6.55 to $7.25 an hour. 29 states plus the District of Columbia (DC) have a minimum wage higher than the federal minimum wage. 1.8 million workers (or 2.3% of the hourly paid working population) earn the federal minimum wage or below. Proponents of a higher minimum wage state that the current federal minimum wage of $7.25 per hour is too low for anyone to live on; that a higher minimum wage will help create jobs and grow the economy; that the declining value of the minimum wage is one of the primary causes of wage inequality between low- and middle-income workers; and that a majority of Americans, including a slim majority of self-described conservatives, support increasing the minimum wage.
Opponents say that many businesses cannot afford to pay their workers more, and will be forced to close, lay off workers, or reduce hiring; that increases have been shown to make it more difficult for low-skilled workers with little or no work experience to find jobs or become upwardly mobile; and that raising the minimum wage at the federal level does not take into account regional cost-of-living variations where raising the minimum wage could hurt low-income communities in particular. Read more background…
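The earnings arithmetic in the argument above can be reproduced with a quick sketch. Treating "full time" as a 40-hour week is an assumption; the poverty thresholds are the 2015 figures quoted in the text:

```python
# Annual full-time earnings at the federal minimum wage, per the figures above.
MIN_WAGE = 7.25                  # USD per hour
HOURS_PER_WEEK = 40              # assumption: "full time" taken as 40 hours/week
WEEKS_PER_YEAR = 52

annual = MIN_WAGE * HOURS_PER_WEEK * WEEKS_PER_YEAR   # $15,080 per year
poverty_single = 12_331          # 2015 federal poverty level, one person under 65
poverty_parent_child = 16_337    # 2015 federal poverty level, single parent with one child

print(f"Annual earnings: ${annual:,.0f}")
print(f"Relative to one-person poverty level: {annual / poverty_single - 1:+.0%}")
print(f"Relative to single-parent poverty level: {annual / poverty_parent_child - 1:+.0%}")
```

The product reproduces the $15,080 figure exactly, above the one-person threshold and below the single-parent threshold; the computed gaps land within a couple of percentage points of the rounded 20% and 8% figures the argument cites.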
# Do Violent Video Games Contribute to Youth Violence? **Argument** Simulating violence such as shooting guns and hand-to-hand combat in video games can cause real-life violent behavior. Video games often require players to simulate violent actions, such as stabbing, shooting, or dismembering someone with an ax, sword, chainsaw, or other weapons. **Background** Around 73% of American kids age 2-17 played video games in 2019, a 6% increase over 2018. Video games accounted for 17% of kids’ entertainment time and 11% of their entertainment spending. The global video game industry was worth $159.3 billion in 2020, a 9.3% increase from 2019. Violent video games have been blamed for school shootings, increases in bullying, and violence towards women. Critics argue that these games desensitize players to violence, reward players for simulating violence, and teach children that violence is an acceptable way to resolve conflicts. Video game advocates contend that a majority of the research on the topic is deeply flawed and that no causal relationship has been found between video games and social violence. They argue that violent video games may provide a safe outlet for aggressive and angry feelings and may reduce crime. Read more background…
# Is Human Activity Primarily Responsible for Global Climate Change? **Argument** Permafrost is melting at unprecedented rates due to global warming, causing further climate changes. According to the IPCC, there is “high confidence” (about an 8 out of 10 chance) that anthropogenic global warming is causing permafrost, a subsurface layer of frozen soil, to melt in high-latitude regions and in high-elevation regions. As permafrost melts it releases methane, a greenhouse gas that absorbs 84 times more heat than CO2 for the first 20 years it is in the atmosphere, creating even more global warming in a positive feedback loop. By the end of the 21st century, warming temperatures in the Arctic will cause a 30%-70% decline in permafrost. As human-caused global warming continues, Arctic air temperatures are expected to increase at twice the global rate, increasing the rate of permafrost melt, changing the local hydrology, and impacting critical habitat for native species and migratory birds. According to the 2014 National Climate Assessment, some climate models suggest that near-surface permafrost will be “lost entirely” from large parts of Alaska by the end of the 21st century. **Background** Average surface temperatures on earth have risen more than 2°F over the past 100 years. During this time period, atmospheric levels of greenhouse gases such as carbon dioxide (CO2) and methane (CH4) have notably increased. This site explores the debate on whether climate change is caused by humans (also known as anthropogenic climate change). The pro side argues rising levels of atmospheric greenhouse gases are a direct result of human activities such as burning fossil fuels, and that these increases are causing significant and increasingly severe climate changes including global warming, loss of sea ice, sea level rise, stronger storms, and more droughts. They contend that immediate international action to reduce greenhouse gas emissions is necessary to prevent dire climate changes.
The con side argues human-generated greenhouse gas emissions are too small to substantially change the earth’s climate and that the planet is capable of absorbing those increases. They contend that warming over the 20th century resulted primarily from natural processes such as fluctuations in the sun’s heat and ocean currents. They say the theory of human-caused global climate change is based on questionable measurements, faulty climate models, and misleading science. Read more background…
# Should the Federal Minimum Wage Be Increased? **Argument** Raising the minimum wage would help reduce the federal deficit. According to Aaron Pacitti, PhD, Associate Professor of Economics at Siena College, raising the minimum wage would help reduce the federal budget deficit “by lowering spending on public assistance programs and increasing tax revenue. Since firms are allowed to pay poverty-level wages to 3.6 million people — 5 percent of the workforce — these workers must rely on Federal income support programs. This means that taxpayers have been subsidizing businesses, whose profits have risen to record levels over the past 30 years.” According to James K. Galbraith, PhD, Professor of Government at the University of Texas in Austin, “[b]ecause payroll- and income-tax revenues would rise [as a result of an increase in the minimum wage], the federal deficit would come down.” **Background** The federal minimum wage was introduced in 1938 during the Great Depression under President Franklin Delano Roosevelt. It was initially set at $0.25 per hour and has been increased by Congress 22 times, most recently in 2009 when it went from $6.55 to $7.25 an hour. 29 states plus the District of Columbia (DC) have a minimum wage higher than the federal minimum wage. 1.8 million workers (or 2.3% of the hourly paid working population) earn the federal minimum wage or below. Proponents of a higher minimum wage state that the current federal minimum wage of $7.25 per hour is too low for anyone to live on; that a higher minimum wage will help create jobs and grow the economy; that the declining value of the minimum wage is one of the primary causes of wage inequality between low- and middle-income workers; and that a majority of Americans, including a slim majority of self-described conservatives, support increasing the minimum wage.
Opponents say that many businesses cannot afford to pay their workers more, and will be forced to close, lay off workers, or reduce hiring; that increases have been shown to make it more difficult for low-skilled workers with little or no work experience to find jobs or become upwardly mobile; and that raising the minimum wage at the federal level does not take into account regional cost-of-living variations where raising the minimum wage could hurt low-income communities in particular. Read more background…
# Should Animals Be Used for Scientific or Commercial Testing? **Argument** There is no adequate alternative to testing on a living, whole-body system. Living systems such as human beings and animals are extremely complex. Studying cell cultures in a petri dish, while sometimes useful, does not provide the opportunity to study interrelated processes occurring in the central nervous system, endocrine system, and immune system. Evaluating a drug for side effects requires a circulatory system to carry the medicine to different organs. **Background** An estimated 26 million animals are used every year in the United States for scientific and commercial testing. Animals are used to develop medical treatments, determine the toxicity of medications, check the safety of products destined for human use, and other biomedical, commercial, and health care uses. Research on living animals has been practiced since at least 500 BC. Proponents of animal testing say that it has enabled the development of many life-saving treatments for both humans and animals, that there is no alternative method for researching a complete living organism, and that strict regulations prevent the mistreatment of animals in laboratories. Opponents of animal testing say that it is cruel and inhumane to experiment on animals, that alternative methods available to researchers can replace animal testing, and that animals are so different from human beings that research on animals often yields irrelevant results. Read more background…
# Should the Federal Minimum Wage Be Increased? **Argument** Raising the minimum wage would disadvantage low-skilled workers. From an employer’s perspective, people with the lowest skill levels cannot justify higher wages. A study by Jeffrey Clemens, PhD, and Michael J. Wither, PhD, found that minimum wage increases result in reduced average monthly incomes for low-skilled workers ($100 less during the first year following a minimum wage increase and $50 over the next two years) due to a reduction in employment. James Dorn, PhD, Senior Fellow at the Cato Institute, stated that a 10% increase in the minimum wage “leads to a 1 to 3 percent decrease in employment of low-skilled workers” in the short term, and “to a larger decrease in the long run.” George Reisman, PhD, Professor Emeritus of Economics at Pepperdine University, stated that if the minimum wage is increased to $10.10, “and the jobs that presently pay $7.25 had to pay $10.10, then workers who previously would not have considered those jobs because of their ability to earn $8, $9, or $10 per hour will now consider them… The effect is to expose the workers whose skills do not exceed a level corresponding to $7.25 per hour to the competition of better educated, more-skilled workers presently able to earn wage rates ranging from just above $7.25 to just below $10.10.” **Background** The federal minimum wage was introduced in 1938 during the Great Depression under President Franklin Delano Roosevelt. It was initially set at $0.25 per hour and has been increased by Congress 22 times, most recently in 2009 when it went from $6.55 to $7.25 an hour. 29 states plus the District of Columbia (DC) have a minimum wage higher than the federal minimum wage. 1.8 million workers (or 2.3% of the hourly paid working population) earn the federal minimum wage or below.
Proponents of a higher minimum wage state that the current federal minimum wage of $7.25 per hour is too low for anyone to live on; that a higher minimum wage will help create jobs and grow the economy; that the declining value of the minimum wage is one of the primary causes of wage inequality between low- and middle-income workers; and that a majority of Americans, including a slim majority of self-described conservatives, support increasing the minimum wage. Opponents say that many businesses cannot afford to pay their workers more, and will be forced to close, lay off workers, or reduce hiring; that increases have been shown to make it more difficult for low-skilled workers with little or no work experience to find jobs or become upwardly mobile; and that raising the minimum wage at the federal level does not take into account regional cost-of-living variations where raising the minimum wage could hurt low-income communities in particular. Read more background…
# Universal Basic Income Pros and Cons - Top 3 Arguments For and Against **Argument** UBI removes the incentive to work, adversely affecting the economy and leading to a labor and skills shortage. Earned income motivates people to work, be successful, work cooperatively with colleagues, and gain skills. However, “if we pay people, unconditionally, to do nothing… they will do nothing” and this leads to a less effective economy, says Charles Wyplosz, PhD, Professor of International Economics at the Graduate Institute in Geneva (Switzerland). Economist Allison Schrager, PhD, says that a strong economy relies on people being motivated to work hard, and in order to motivate people there needs to be an element of uncertainty for the future. UBI, providing guaranteed security, removes this uncertainty. Elizabeth Anderson, PhD, Professor of Philosophy and Women’s Studies at the University of Michigan, says that a UBI would cause people “to abjure work for a life of idle fun… [and would] depress the willingness to produce and pay taxes of those who resent having to support them.” Guaranteed income trials in the United States in the 1960s and 1970s found that the people who received payments worked fewer hours. And, in 2016, the Swiss government opposed implementation of UBI, stating that it would entice fewer people to work and thus exacerbate the current labor and skills shortages. Nicholas Eberstadt, PhD, Henry Wendt Chair in Political Economy, and Evan Abramsky, Research Associate, both at the American Enterprise Institute (AEI), stated, “the daily routines of existing work-free men should make proponents of the UBI think long and hard.
Instead of producing new community activists, composers, and philosophers, more paid worklessness in America might only further deplete our nation’s social capital at a time when good citizenship is already in painfully short supply.” **Background** A universal basic income (UBI) is an unconditional cash payment given at regular intervals by the government to all residents, regardless of their earnings or employment status. [45] Pilot UBI or more limited basic income programs that give a basic income to a smaller group of people instead of an entire population have taken place or are ongoing in Brazil, Canada, China, Finland, Germany, India, Iran, Japan, Kenya, Namibia, Spain, and The Netherlands as of Oct. 20, 2020. [46] In the United States, the Alaska Permanent Fund (APF), created in 1976, is funded by oil revenues. The APF provides dividends to permanent residents of the state. The amount varies each year based on the stock market and other factors, and has ranged from $331.29 (1984) to $2,072 (2015). The payout for 2020 was $992.00, the smallest check received since 2013. [46] [47] [48] [49] UBI has been in American news mostly thanks to the 2020 presidential campaign of Andrew Yang, whose continued promotion of a UBI resulted in the formation of a nonprofit, Humanity Forward. [53]
# Space Colonization - Pros & Cons - ProCon.org **Argument** Technological advancement into space can exist alongside conservation efforts on Earth. While Earth is experiencing devastating climate change effects that should be addressed, Earth will be habitable for at least 150 million years, if not over a billion years, based on current predictive models. Humans have time to explore and colonize space at the same time as we mend the effects of climate change on Earth. Brian Patrick Green stated, “Furthermore, we have to realize that solving Earth’s environmental problems is extremely difficult and so will take a very long time. And we can do this while also pursuing colonization.” Jeff Bezos suggested that we move all heavy industry off Earth and then zone Earth for residences and light industry only. Doing so could reverse some of the effects of climate change while colonizing space. Munevar also suggested something similar in more detail: “In the shorter term, a strong human presence throughout the solar system will be able to prevent catastrophes on Earth by, for example, deflecting asteroids on a collision course with us. This would also help preserve the rest of terrestrial life — presumably something the critics would approve of. But eventually, we should be able to construct space colonies… [structures in free space rather than on a planet or moon], which could house millions. These colonies would be positioned to construct massive solar power satellites to provide clean power to the Earth, as well as set up industries that on Earth create much environmental damage.
Far from messing up environments that exist now, we would be creating them, with extraordinary attention to environmental sustainability.” Space ecologist Joe Mascaro, PhD, summarized, “To save the Earth, we have to go to Mars.” Mascaro argues that expanding technology to go to Mars will help solve problems on Earth: “The challenge of colonising Mars shares remarkable DNA with the challenges we face here on Earth. Living on Mars will require mastery of recycling matter and water, producing food from barren and arid soil, generating carbon-free nuclear and solar energy, building advanced batteries and materials, and extracting and storing carbon from atmospheric carbon dioxide – and doing it all at once. The dreamers, thinkers and explorers who decide to go to Mars will, by necessity, fuel unprecedented lateral innovations [that will solve problems on Earth].” **Background** While humans have long thought of gods living in the sky, the idea of space travel or humans living in space dates to at least 1610 after the invention of the telescope when German astronomer Johannes Kepler wrote to Italian astronomer Galileo: “Let us create vessels and sails adjusted to the heavenly ether, and there will be plenty of people unafraid of the empty wastes. In the meantime, we shall prepare, for the brave sky-travellers, maps of the celestial bodies.” [1] In popular culture, space travel dates back to at least the mid-1600s when Cyrano de Bergerac first wrote of traveling to space in a rocket. Space fantasies flourished after Jules Verne’s “From the Earth to the Moon” was published in 1865, and again when Georges Méliès’s film adaptation, A Trip to the Moon, was released in 1902. Dreams of space settlement hit a zenith in the 1950s with Walt Disney productions such as “Man and the Moon,” and science fiction novels including Ray Bradbury’s The Martian Chronicles (1950). 
[2] [3] [4] Fueling popular imagination at the time was the American space race with Russia, amid which NASA (National Aeronautics and Space Administration) was formed in the United States on July 29, 1958, when President Eisenhower signed the National Aeronautics and Space Act into law. After the Russians put the first person, Yuri Gagarin, in space on Apr. 12, 1961, NASA put the first people, Neil Armstrong and Buzz Aldrin, on the Moon in July 1969. What was science fiction began to look more like possibility. Over the next six decades, NASA would launch space stations, land rovers on Mars, and send probes past Pluto and into orbit around Jupiter, among other accomplishments. Launched by President Trump in 2017, NASA’s ongoing Artemis program intends to return humans to the Moon by 2024, landing the first woman on the lunar surface. The lunar launch is more likely to happen in 2025, due to a lag in space suit technology and delays with the Space Launch System rocket, the Orion capsule, and the lunar lander. [5] [6] [7] [8] [36] As of June 17, 2021, three countries had space programs with human space flight capabilities: China, Russia, and the United States. India’s planned human space flights have been delayed by the COVID-19 pandemic, but they may launch in 2023. NASA ended its space shuttle program in 2011 when the shuttle Atlantis landed at Kennedy Space Center in Florida on July 21. NASA astronauts going into space afterward rode along with Russians until SpaceX took over, first launching NASA astronauts into space on May 30, 2020. SpaceX is a commercial space travel business owned by Elon Musk that has ignited commercial space travel enthusiasm and the idea of “space tourism.” Richard Branson’s Virgin Galactic and Jeff Bezos’s Blue Origin have generated similar excitement. 
[9] [10] [11] [12] [13] Richard Branson launched himself, two pilots, and three mission specialists into space from New Mexico for a 90-minute flight on the Virgin Galactic Unity 22 mission on July 11, 2021. The flight marked the first time that passengers, rather than astronauts, went into space. [14] [15] Jeff Bezos followed on July 20, 2021, accompanied by his brother, Mark, and both the oldest and youngest people to go to space: 82-year-old Wally Funk, a female pilot who tested with NASA in the 1960s but never flew, and Oliver Daemen, an 18-year-old student from the Netherlands. The fully automated, unpiloted Blue Origin New Shepard rocket launched on the 52nd anniversary of the Apollo 11 moon landing and was named after Alan Shepard, who was the first American to travel into space on May 5, 1961. [16] [17] On Apr. 8, 2022, a SpaceX capsule launched, carrying three paying customers and a former NASA astronaut on a roundtrip to the International Space Station (ISS). Mission AX-1 docked at the ISS on Apr. 9 with mission commander Michael Lopez-Alegría, a former NASA astronaut and current Axiom Space employee; Israeli businessman Eytan Stibbe; Canadian investor Mark Pathy; and American real estate magnate Larry Connor. The group returned to Earth on Apr. 25, 2022. While this is not the first time paying customers or non-astronauts have traveled to the ISS (Russia has sold Soyuz seats), this is the first American mission and the first with no government astronaut corps members. [38] [39] The International Space Station has been continuously occupied by groups of six astronauts since Nov. 2000, for a total of 243 astronauts from 19 countries as of May 13, 2021. Astronauts spend an average of 182 days (about six months) aboard the ISS. As of Feb. 
2020, Russian Valery Polyakov had spent the longest continuous time in space (437.7 days in 1994-1995 on space station Mir), followed by Russian Sergei Avdeyev (379.6 days in 1998-1999 on Mir), Russians Vladimir Titov and Musa Manarov (365 days in 1987-1988 on Mir), American Mark Vande Hei (355 days on the ISS), Russian Mikhail Kornienko and American Scott Kelly (340.4 days in 2015-2016 on the ISS), and American Christina Koch (328 days in 2019-20 on the ISS). [18] [19] [40] In Jan. 2022, Space Entertainment Enterprise (SEE) announced plans for a film production studio and a sports arena in space. The module will be named SEE-1 and will dock on Axiom Station, which is the commercial wing of the International Space Station. SEE plans to host film and sports events, as well as content creation, by Dec. 2024. [37] In a 2018 poll, 50% of Americans believed space tourism would be routine for ordinary people by 2068. 32% believed long-term habitable space colonies would be built by 2068. But 58% said they were definitely or probably not interested in going to space. And the majority (63%) stated NASA’s top priority should be monitoring Earth’s climate, while only 18% said sending astronauts to Mars should be the highest priority and only 13% would prioritize sending astronauts to the Moon. [20] The most common ideas for space colonization include: settling Earth’s Moon, building on Mars, and constructing free-floating space stations.
# Ride-Sharing Apps - Pros & Cons - ProCon.org **Argument** Ride-hailing apps are convenient, affordable, and safe for riders and other drivers. The technology used by ride-hailing companies increases reliability and decreases wait times for consumers, and can offer a 20% to 30% discount over the cost of a taxi. These apps have built-in safety features, such as displaying the license plate and car model to ensure that riders get into the correct vehicle, the ability to share the route with friends and family, GPS tracking, cash-free transactions, and driver ratings. A full third of ride-hailing passengers who own vehicles (33%) said the main reason they use the service is to avoid driving while they are drunk. Fatal alcohol-related car accidents dropped between 10% and 11.4% after the introduction of ride-hailing services and DUI (Driving Under the Influence) citations went down as much as 9.2% in some cities. Researchers estimate that if ride-hailing were fully implemented across the country, the resulting drop in DUI-related accidents could save 500 lives and $1.3 billion in American taxpayer money annually. **Background** The first Uber ride was on July 5, 2010 in San Francisco, CA. The app launched internationally in 2011 and reached one billion rides on Dec. 30, 2015, quickly followed by five billion on May 20, 2017 and 10 billion on June 10, 2018. [40] On May 22, 2012, Lyft launched in San Francisco as a part of Zimride and expanded to 60 cities in 2014 and to 100 more in 2017, at which point Lyft claimed more than one million rides a day. On Nov. 13, 2017, Lyft went international, allowing the company to reach one billion rides on Sep. 18, 2018. [41] Other ride-hailing and ride-sharing apps include Gett (which partners with Lyft in the US), Curb, Wingz, Via, Scoop, and Bridj. [42] 36% of Americans said they used ride-hailing services such as Uber or Lyft, according to a Jan. 
4, 2019 Pew Research Center Survey. Use is up significantly from 2015, when just 15% had used the apps. [38] But use varies among populations. 45% of urban residents, 51% of people who were 18 to 29, 53% of people who earned $75,000 or more per year, and 55% of people with college degrees used the apps, compared to 19% of rural residents, 24% of people aged 50 or older, 24% of people who earned $30,000 or less per year, and 20% of people with a high school diploma or less. [38] In 2018, 70% of Uber and Lyft trips occurred in nine big metropolitan areas: Boston, Chicago, Los Angeles, Miami, New York, Philadelphia, San Francisco, Seattle, and Washington, DC. [3] Uber officially overtook yellow cabs in New York City in July 2017, when it reported an average of 289,000 trips per day compared to 277,000 taxi rides. More than 2.61 billion ride-hailing trips were taken in 2017, a 37% increase over the 1.90 billion trips in 2016. Ride-hailing trips were down significantly in 2020 and 2021 due to the COVID-19 pandemic. [3] [4] [39]
# US Supreme Court Packing - Pros & Cons - ProCon.org **Argument** Historical precedent most strongly supports a nine-judge Supreme Court. While the US Constitution does not specify the number of Supreme Court justices, neither does it specify that justices must have law degrees or have served as judges. However, historical precedent has set basic job requirements for the position as well as solidified the number of justices. The Supreme Court has had nine justices consistently since 1869, when Ulysses S. Grant was president. Changing the number of justices has been linked to political conniving, whether the 1866 shrinkage to prevent Johnson appointments, the 1801 removal of one seat by President John Adams to prevent incoming President Thomas Jefferson from filling a seat, or the 1937 attempt by Roosevelt to get the New Deal past the court. We should not break with over 150 years of historical precedent to play political games with the Supreme Court. Jeff Greenfield, journalist, warns that breaking with precedent would cause trouble, stating, “if Congress pushes through a restructuring of the court on a strictly partisan vote, giving Americans a Supreme Court that looks unlike anything they grew up with, and unlike the institution we’ve had for more than 240 years, it’s hard to imagine the country as a whole would see its decisions as legitimate.” **Background** Court packing is increasing the number of seats on a court to change the ideological makeup of the court. [1] The US Constitution does not dictate the number of justices on the Supreme Court, but states only: “The judicial Power of the United States, shall be vested in one supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish. 
The Judges, both of the supreme and inferior Courts, shall hold their Offices during good Behavior, and shall, at stated Times, receive for their Services a Compensation which shall not be diminished during their Continuance in Office.” [2] The number of justices on the Court has changed over the years, though it has been set at nine since the mid-19th century. The court was founded in 1789 with six justices, but was reduced to five in 1801 and increased to six in 1802, followed by small changes over the subsequent 67 years. [14] [15] As explained in Encyclopaedia Britannica, “In 1807 a seventh justice was added, followed by an eighth and a ninth in 1837 and a tenth in 1863. The size of the court has sometimes been subject to political manipulation; for example, in 1866 Congress provided for the gradual reduction (through attrition) of the court to seven justices to ensure that President Andrew Johnson, whom the House of Representatives later impeached and the Senate only narrowly acquitted, could not appoint a new justice. The number of justices reached eight before Congress, after Johnson had left office, adopted new legislation (1869) setting the number at nine, where it has remained ever since.” [3] The idea of court packing dates to 1937, when President Franklin D. Roosevelt proposed adding a new justice to the Supreme Court for every justice who refused to retire at 70 years old, up to a maximum of 15 justices. The effort is frequently framed as a battle between “an entrenched, reactionary Supreme Court, which overturned a slew of Roosevelt’s New Deal economic reforms, against a hubristic president willing to take the unprecedented step of asking Congress to appoint six new, and sympathetic, justices to the bench,” according to Cicero Institute Senior Policy Advisor Judge Glock, PhD. Roosevelt’s proposal was seen by many as a naked power grab for control of a second branch of government. 
Plus, as Glock points out, a then-new law reducing Supreme Court pensions was preventing retirements at the very time Roosevelt was calling for them. [4] [5] [6] The contemporary debate has been heavily influenced by events following the Feb. 13, 2016, death of conservative Associate Justice Antonin Scalia. Citing the upcoming 2016 election, Senate Majority Leader Mitch McConnell (R-KY) refused to consider President Barack Obama’s liberal Supreme Court nominee, Merrick Garland. At the time, there were 342 days remaining in Obama’s presidency, 237 days until the 2016 election, and neither the 2016 Democratic nor Republican nominee had been chosen. Because the Senate approval process was delayed until 2017, the next president, Donald Trump, was allowed to appoint a new justice (conservative Neil Gorsuch) to what many Democrats called a “stolen seat” that should have been filled by Obama. [5] [7] The court packing debate was reinvigorated in 2018 with the appointment of conservative Associate Justice Brett Kavanaugh by President Trump after liberal-leaning swing vote Associate Justice Anthony Kennedy retired in July 2018. [1] In the wake of this appointment, South Bend, Indiana, Mayor Pete Buttigieg, then also a 2020 presidential candidate, suggested expanding the court to 15 justices in the Oct. 15, 2019, Democratic presidential debate. [8] Largely brushed aside as “radical” at the time, the topic resurfaced once again upon the death of liberal stalwart Associate Justice Ruth Bader Ginsburg on Sep. 18, 2020. Liberals, and some conservatives, argued that the 2016 precedent should be followed and that Justice Ginsburg’s seat should remain empty until after the 2020 presidential election or the Jan. 2021 presidential inauguration. However, McConnell and the Republicans in control of the Senate, and thus the approval process, indicated they would move forward with a Trump nomination without delay. 
McConnell defended these actions by stating the President and the Senate are of the same party (which was not the case in 2016, negating—from his perspective—that incident as a precedent that needed following), and thus the country had confirmed Republican rule. [5] [7] [9] Others argued as well that, since there was a chance that the results of the 2020 election could be challenged in the courts, and perhaps even at the Supreme Court level (due to concerns over the handling of mailed-in ballots), it was critical for an odd number of justices to sit on the Court (an even number, such as eight, could mean a split 4-4 decision on the critical question of who would be deemed the next U.S. president, sending the country into a constitutional crisis). At the time of McConnell’s Sep. 18 announcement via Twitter, there were 124 days left in Trump’s term and 45 days until the 2020 election. Some called the impending nomination to replace Ginsburg and the 2016/2017 events a version of court packing by Republicans. [5] [7] [9] Supreme Court nominees can be confirmed by the US Senate with a simple majority vote, with the Vice President called in to break a 50-50 tie. Amy Coney Barrett was confirmed by the Senate on Oct. 26, 2020, with a 52-48 vote to replace Justice Ginsburg, eight days before the 2020 election. [7] [24]
# Should Vaccines Be Required for Children? **Argument** Diseases that vaccines target have essentially disappeared. There is no reason to vaccinate against diseases that no longer occur in the United States. The CDC reported 57 cases of and nine deaths from diphtheria between 1980 and 2016 in the United States. Fewer than 64 cases and 11 deaths per year from tetanus have been reported since 1989. Polio has been declared eliminated in the United States since 1979. There have been only 32 deaths from mumps and 42 deaths from rubella since 1979. **Background** Vaccines have been in the news over the past year due to the COVID-19 pandemic. To date, no state has added the COVID-19 vaccine to its required vaccinations roster. On Sep. 9, 2021, Los Angeles Unified School District, the second largest in the country, became the first district in the country to mandate the coronavirus vaccine, requiring it for students ages 12 and up by Jan. 10, 2022 (a deadline pushed back to fall 2022 in Dec. 2021). On Oct. 1, 2021, California Governor Gavin Newsom stated the COVID-19 vaccine would be mandated for all schoolchildren once approved by the FDA. Meanwhile, the Centers for Disease Control and Prevention (CDC) recommends 29 doses of nine other vaccines (plus a yearly flu shot after six months old) for kids aged 0 to six. No US federal laws mandate vaccination, but all 50 states require certain vaccinations for children entering public schools. Most states offer medical and religious exemptions, and some states allow philosophical exemptions. Proponents say that vaccination is safe and one of the greatest health developments of the 20th century. They point out that illnesses, including rubella, diphtheria, smallpox, polio, and whooping cough, are now prevented by vaccination and millions of children’s lives are saved. They contend adverse reactions to vaccines are extremely rare. 
Opponents say that children’s immune systems can deal with most infections naturally, and that injecting questionable vaccine ingredients into a child may cause side effects, including seizures, paralysis, and death. They contend that numerous studies suggest that vaccines may trigger problems like ADHD and diabetes.
# Pokémon Go - Pros & Cons - ProCon.org **Argument** People are playing the game in inappropriate places. In their quest to capture creatures, players are failing to respect their surroundings, spawning countless articles, such as Evan Dashevsky’s compilation, “18 Completely Inappropriate Places to Play Pokemon Go,” for PCMag. The list includes evidence that players have captured Pokémon in the emergency room, birthing rooms, Auschwitz, funerals, and on an active battlefield near Mosul, among others. Arlington National Cemetery released a statement saying, “Out of respect for all those interred at Arlington National Cemetery, we require the highest level of decorum from our guests and visitors. Playing games such as Pokémon Go on these hallowed grounds would not be deemed appropriate.” The US Holocaust Memorial Museum has also asked visitors to stop catching Pokémon on site. The 9/11 Memorial in New York City is also inundated with players. “A lot of people died here. It’s a place to reflect, not to play a game,” a visitor told TIME magazine. **Background** Pokémon Go had more than 21 million daily active users in the United States in its debut week in July 2016, becoming the most popular US mobile game ever. It has surpassed social media apps such as WhatsApp, Instagram, and Twitter for daily use on Android devices. [1] [2] The basic premise of the game is that players try to capture Pokémon in a kind of scavenger hunt that uses the GPS on their mobile phones while walking around in the real world. The game’s slogan is “Gotta catch ’em all.” [3] As of July 8, 2020, Pokémon Go was still the most popular location-based game with 576.7 million unique downloads globally in the game’s first four years. The game is estimated to have earned $3.6 billion worldwide since 2016, with $445.3 million in the first half of 2020 during COVID-19 (coronavirus) lockdowns, via micro-transactions within the game. [18]
# Ride-Sharing Apps - Pros & Cons - ProCon.org **Argument** Ride-hailing services have a history of poor driver screening that puts passengers at risk. While taxi drivers are subject to rigorous security screening involving fingerprint checks through the FBI database, ride-hailing drivers are only subject to limited background checks. A 2016 lawsuit brought by the cities of Los Angeles and San Francisco revealed that 25 drivers with serious criminal records, including murder and kidnapping convictions, had passed Uber’s background checks. San Francisco District Attorney George Gascon, who sued Uber for allegedly failing to protect consumers from fraud and harm, said of the company’s security screening process that does not include fingerprinting, “It is completely worthless.” A Dec. 2019 report from Uber stated that, among riders and drivers, there had been 10 murders in 2017 and nine in 2018, and 2,936 sexual assaults ranging from nonconsensual touching to rape in 2017 and 3,045 in 2018. One woman wrote in an open letter from 14 victims of sexual harassment and rape by Uber drivers, “Although I immediately reported what happened to Uber, shockingly, this predator continues to drive for Uber to this day. I am 21 years old and will have to live with this the rest of my life.” **Background** The first Uber ride was on July 5, 2010 in San Francisco, CA. The app launched internationally in 2011 and reached one billion rides on Dec. 30, 2015, quickly followed by five billion on May 20, 2017 and 10 billion on June 10, 2018. [40] On May 22, 2012, Lyft launched in San Francisco as a part of Zimride and expanded to 60 cities in 2014 and to 100 more in 2017, at which point Lyft claimed more than one million rides a day. On Nov. 13, 2017, Lyft went international, allowing the company to reach one billion rides on Sep. 18, 2018. 
[41] Other ride-hailing and ride-sharing apps include Gett (which partners with Lyft in the US), Curb, Wingz, Via, Scoop, and Bridj. [42] 36% of Americans said they used ride-hailing services such as Uber or Lyft, according to a Jan. 4, 2019 Pew Research Center Survey. Use is up significantly from 2015, when just 15% had used the apps. [38] But use varies among populations. 45% of urban residents, 51% of people who were 18 to 29, 53% of people who earned $75,000 or more per year, and 55% of people with college degrees used the apps, compared to 19% of rural residents, 24% of people aged 50 or older, 24% of people who earned $30,000 or less per year, and 20% of people with a high school diploma or less. [38] In 2018, 70% of Uber and Lyft trips occurred in nine big metropolitan areas: Boston, Chicago, Los Angeles, Miami, New York, Philadelphia, San Francisco, Seattle, and Washington, DC. [3] Uber officially overtook yellow cabs in New York City in July 2017, when it reported an average of 289,000 trips per day compared to 277,000 taxi rides. More than 2.61 billion ride-hailing trips were taken in 2017, a 37% increase over the 1.90 billion trips in 2016. Ride-hailing trips were down significantly in 2020 and 2021 due to the COVID-19 pandemic. [3] [4] [39]
# Should Recreational Marijuana Be Legal? **Argument** Legalizing marijuana is opposed by major public health organizations. Some of the public health associations that oppose legalizing marijuana for recreational use include the American Medical Association (AMA), the American Society of Addiction Medicine (ASAM), the American Academy of Child and Adolescent Psychiatry, and the American Academy of Pediatrics. “Legalization campaigns that imply that marijuana is a benign substance present a significant challenge for educating the public about its known risks and adverse effects,” the American Academy of Pediatrics said. The ASAM “does not support the legalization of marijuana and recommends that jurisdictions that have not acted to legalize marijuana be most cautious and not adopt a policy of legalization until more can be learned.” The AMA “believes that (1) cannabis is a dangerous drug and as such is a public health concern; (2) sale of cannabis should not be legalized.” **Background** More than half of US adults, over 128 million people, have tried marijuana, despite it being an illegal drug under federal law. Nearly 600,000 Americans are arrested for marijuana possession annually – more than one person per minute. Public support for legalizing marijuana went from 12% in 1969 to 66% today. Recreational marijuana, also known as adult-use marijuana, was first legalized in Colorado and Washington in 2012. Proponents of legalizing recreational marijuana say it will add billions to the economy, create hundreds of thousands of jobs, free up scarce police resources, and stop the huge racial disparities in marijuana enforcement. They contend that regulating marijuana will lower street crime, take business away from the drug cartels, and make marijuana use safer through required testing, labeling, and child-proof packaging. They say marijuana is less harmful than alcohol, and that adults should have a right to use it if they wish. 
Opponents of legalizing recreational marijuana say it will increase teen use and lead to more medical emergencies including traffic deaths from driving while high. They contend that revenue from legalization falls far short of the costs in increased hospital visits, addiction treatment, environmental damage, crime, workplace accidents, and lost productivity. They say that marijuana use harms the user physically and mentally, and that its use should be strongly discouraged, not legalized.
# Is Human Activity Primarily Responsible for Global Climate Change? **Argument** Rising levels of human-produced gases released into the atmosphere create a greenhouse effect that traps heat and causes global warming. Gases released into the atmosphere trap heat and cause the planet to warm through a process called the greenhouse effect. When we burn fossil fuels to heat our homes, drive our cars, and run factories, we’re releasing emissions that cause the planet to warm. Methane, which is increasing in the atmosphere due to agriculture and fossil fuel production, traps 84 times as much heat as CO2 for the first 20 years it is in the atmosphere, and is responsible for about one-fifth of global warming since 1750. Nitrous oxide, primarily released through agricultural practices, traps 300 times as much heat as CO2. Over the 20th century, as the concentrations of CO2, CH4, and N2O increased in the atmosphere due to human activity, the earth warmed by approximately 1.4°F. **Background** Average surface temperatures on earth have risen more than 2°F over the past 100 years. During this time period, atmospheric levels of greenhouse gases such as carbon dioxide (CO2) and methane (CH4) have notably increased. This site explores the debate on whether climate change is caused by humans (also known as anthropogenic climate change). The pro side argues rising levels of atmospheric greenhouse gases are a direct result of human activities such as burning fossil fuels, and that these increases are causing significant and increasingly severe climate changes including global warming, loss of sea ice, sea level rise, stronger storms, and more droughts. They contend that immediate international action to reduce greenhouse gas emissions is necessary to prevent dire climate changes. The con side argues human-generated greenhouse gas emissions are too small to substantially change the earth’s climate and that the planet is capable of absorbing those increases. 
They contend that warming over the 20th century resulted primarily from natural processes such as fluctuations in the sun’s heat and ocean currents. They say the theory of human-caused global climate change is based on questionable measurements, faulty climate models, and misleading science.
# Dress Codes - Top 3 Pros and Cons | ProCon.org **Argument** Uniformly mandated dress codes promote safety. From school chemistry labs to manufacturing jobs, some dress code requirements are obviously about safety. Many places require protective glasses, steel-toed boots, fire-resistant jackets, hard hats, or reflective vests, for example. Other items of clothing may be restricted for less obvious safety reasons. Leggings, for example, are frequently made from synthetic, flammable materials that could react with spilled chemicals and catch fire. Similarly, skin-baring clothing may also be banned around chemicals to prevent burns. Religious headscarves have been banned in some settings, such as prisons, because wearers could be strangled by the garments in an altercation. Still other dress codes, such as no full-face masks (like Halloween masks) allowed in movie theaters, are intended to help prevent shootings and other violence. Other clothing restrictions at schools and public places may seem arbitrary but are used to protect against gang activity. Colors, brands, and logos may be gang-affiliated in certain locations. As Lincoln Public Schools in Nebraska explained, “Clothing and accessories associated with gangs and hate groups have the potential to disrupt the learning environment by bringing symbols that represent fear and intimidation of others into classrooms. The identification and prohibition of this clothing help decrease the impact of gangs and hate groups in school. These rules also protect students who are unaware they are wearing clothes with a gang or hate group affiliation.” **Background** While the most frequent debate about dress codes may be centered around K-12 schools, dress codes impact just about everyone’s daily life. 
From the “no shirt, no shoes, no service” signs (which exploded in popularity in the 1960s and 70s in reaction to the rise of hippies) to COVID-19 pandemic mask mandates, employer restrictions on tattoos and hairstyles, and clothing regulations on airlines, dress codes are more prevalent than we might think. [1] [2] [3] [4] [5] While it’s difficult to pinpoint the first dress code (humans started wearing clothes around 170,000 years ago), nearly every culture and country throughout history has had strictures, formal or informal, on what to wear and not to wear. These dress codes are common “cultural signifiers,” reflecting social beliefs and cultural values, most often of the social class dominating the culture. Such codes have been prevalent in Islamic countries since the founding of the religion in the seventh century, and they continue to cause controversy today—are they appropriate regulations for maintaining piety, community, and public decency, or are they demeaning and oppressive, especially for Islamic women? [6] [7] [8] [9] [10] In the West, people were arrested and imprisoned as early as 1565 in England for violating dress codes. The man in question, a servant named Richard Walweyn, was arrested for wearing “a very monsterous and outraygeous great payre of hose” (or trunk hose) and was imprisoned until he could show he owned other hose “of a decent & lawfull facyon.” Other dress codes of the time reserved expensive garments made of silk, fur, and velvet for nobility only, reinforcing how dress codes have been implemented for purposes of social distinction. Informal dress codes—such as high-fashion clothes with logos and the unofficial “Midtown Uniform” worn by men working in finance—underscore how often dress codes have been used to mark and maintain visual distinctions between classes and occupations. Other dress codes have been enacted overtly to police morality, as with the bans on bobbed hair and flapper dresses of the 1920s. 
Still other dress codes are intended to spur an atmosphere of inclusiveness and professionalism or specifically to maintain safety in the workplace. [6] [7] [8] [11] [12]
# Should the Drinking Age Be Lowered from 21 to a Younger Age? **Argument** 18 is the age of legal majority (adulthood) in the United States. Americans enjoy a range of new rights, responsibilities, and freedoms when they turn 18 and become an adult in the eyes of the law. 18-year-olds may vote in local, state, and federal elections; may serve on juries; and may be charged as an adult if accused of a crime. 18-year-olds are responsible for any legally binding contracts they enter; are liable for negligence; and may be sued. 18-year-olds must register with the Selective Service if male and may be drafted into service at times of war. However, 17-year-olds may enter US military service. 18-year-olds may get married without parental consent; buy a house; and enjoy new privacy rights including the shielding of medical, academic, and financial information from parents. However, drinking alcohol remains regulated under a legal age of license. An 18-year-old may be legally responsible for children and legally allowed to make life decisions with years of impact, but may not legally drink a beer. Todd Rutherford, South Carolina State Representative and Democrat House Minority Leader, who filed a bill on Nov. 10, 2021 to lower South Carolina’s MLDA to 18, stated: “This is a personal freedom issue. If you are old enough to fight for our country, if you’re old enough to vote, if you’re old enough to sign on thousands of dollars of student loans for a college education, then you are old enough to have a[n alcoholic] drink.” **Background** All 50 US states have set their minimum drinking age to 21, although exceptions do exist on a state-by-state basis for consumption at home, under adult supervision, for medical necessity, and other reasons. 
Proponents of lowering the minimum legal drinking age (MLDA) from 21 argue that it has not stopped teen drinking, and has instead pushed underage binge drinking into private and less controlled environments, leading to more health and life-endangering behavior by teens. Opponents of lowering the MLDA argue that teens have not yet reached an age where they can handle alcohol responsibly, and thus are more likely to harm or even kill themselves and others by drinking prior to 21. They contend that traffic fatalities decreased when the MLDA increased. Read more background…
# Should Adults Have the Right to Carry a Concealed Handgun? **Argument** The Second Amendment does not guarantee concealed carry. The Second Amendment states: “a well regulated Militia being necessary to the security of a free State, the right of the people to keep and bear Arms shall not be infringed.” There is no mention of concealed guns in the Constitution or Bill of Rights. US Supreme Court Justice Antonin Scalia wrote in the court’s majority opinion in DC v. Heller: “Like most rights, the right secured by the Second Amendment is not unlimited… the majority of the 19th-century courts to consider the question held that prohibitions on carrying concealed weapons were lawful under the Second Amendment.” In May 2014, the US Supreme Court declined to hear Drake v. Jerejian, a case challenging New Jersey’s issuance of concealed weapons permits only to citizens who can prove a “justifiable need.” In 2016, the US Supreme Court declined to hear an appeal to a 9th Circuit ruling that stated, “Based on the overwhelming consensus of historical sources, we conclude that the protection of the Second Amendment — whatever the scope of that protection may be — simply does not extend to the carrying of concealed firearms in public by members of the general public.” **Background** Carrying a concealed handgun in public is permitted in all 50 states as of 2013, when Illinois became the last state to enact concealed carry legislation. Some states require gun owners to obtain permits while others have “unrestricted carry” and do not require permits. Proponents of concealed carry say concealed carry deters crime, keeps individuals and the public safer, is protected by the Second Amendment, and protects women and minorities who can’t always rely on the police for protection. 
Opponents of concealed carry say concealed carry increases crime, increases the chances of a confrontation becoming lethal, is not protected by the Second Amendment, and that public safety should be left to professionally qualified police officers.  Read more background…
# Should All Americans Have the Right (Be Entitled) to Health Care? **Argument** Providing a right to health care could worsen a doctor shortage. The Association of American Medical Colleges (AAMC) predicts a shortfall of up to 104,900 doctors by 2030. If a right to health care were guaranteed to all, this shortage could be much worse. Doctor shortages in the United States led to a 30% increase in wait times for doctors’ appointments between 2014 and 2017. **Background** 27.5 million people in the United States (8.5% of the US population) do not have health insurance. Among the 91.5% who do have health insurance, 67.3% have private insurance while 34.4% have government-provided coverage through programs such as Medicaid or Medicare. Employer-based health insurance is the most common type of coverage, applying to 55.1% of the US population. The United States is the only nation among the 37 OECD (Organization for Economic Co-operation and Development) nations that does not have universal health care either in practice or by constitutional right. Proponents of the right to health care say that no one in one of the richest nations on earth should go without health care. They argue that a right to health care would stop medical bankruptcies, improve public health, reduce overall health care spending, help small businesses, and that health care should be an essential government service. Opponents argue that a right to health care amounts to socialism and that it should be an individual’s responsibility, not the government’s role, to secure health care. They say that government provision of health care would decrease the quality and availability of health care, and would lead to larger government debt and deficits. Read more background…
# Should Social Security Be Privatized? **Argument** Private accounts give individuals control over their retirement decisions. Americans are capable of making their own decisions regarding how their retirement contributions are invested. Peter Ferrara, JD, former Director of the International Center for Law and Economics, stated that private accounts “would allow workers personal ownership and control over their retirement funds and broader freedom of choice,” and if the accounts were optional (as they were in President George W. Bush’s plan) they “would also be free to choose whether to exercise the personal account option or stay entirely in the old Social Security framework.” **Background** Social Security accounted for 23% ($1 trillion) of total US federal spending in 2019. Since 2010, the Social Security trust fund has been paying out more in benefits than it collects in employee taxes, and is projected to run out of money by 2035. One proposal to replace the current government-administered system is the partial privatization of Social Security, which would allow workers to manage their own retirement funds through personal investment accounts. Proponents of privatization say that workers should have the freedom to control their own retirement investments, that private accounts will give retirees higher returns than the current system can offer, and that privatization may help to restore the system’s solvency. Opponents of privatization say that retirees could lose their benefits in a stock market downturn, that many individuals lack the knowledge to make wise investment decisions, and that privatization does nothing to address the program’s approaching insolvency. Read more background…
# Kneeling during the National Anthem: Top 3 Pros and Cons | ProCon.org **Argument** Kneeling during the national anthem is a legal form of peaceful protest, which is a First Amendment right. President Obama said Kaepernick was “exercising his constitutional right to make a statement. I think there’s a long history of sports figures doing so.” The San Francisco 49ers said in a statement, “In respecting such American principles as freedom of religion and freedom of expression, we recognize the right of an individual to choose and participate, or not, in our celebration of the national anthem.” A letter signed by 35 US veterans stated that “Far from disrespecting our troops, there is no finer form of appreciation for our sacrifice than for Americans to enthusiastically exercise their freedom of speech.” **Background** The debate about kneeling or sitting in protest during the national anthem was ignited by Colin Kaepernick in 2016 and escalated to become a nationally divisive issue. San Francisco 49ers quarterback Colin Kaepernick first refused to stand during “The Star-Spangled Banner” on Aug. 26, 2016 to protest racial injustice and police brutality in the United States. Since that time, many other professional football players, high school athletes, and professional athletes in other sports have refused to stand for the national anthem. These protests have generated controversy and sparked a public conversation about the protesters’ messages and how they’ve chosen to deliver them. [7] [8] [9] The 2017 NFL pre-season began with black players from the Seattle Seahawks, Oakland Raiders, and Philadelphia Eagles kneeling or sitting during the anthem with support of white teammates. On Aug. 21, 2017, twelve Cleveland Browns players knelt in a prayer circle during the national anthem with at least four other players standing with hands on the kneeling players’ shoulders in solidarity, the largest group of players to take a knee during the anthem to date. 
[20] [21] Jabrill Peppers, a rookie safety for the Browns, said of the protest, “There’s a lot of racial and social injustices in the world that are going on right now. We just decided to take a knee and pray for the people who have been affected and just pray for the world in general… We were not trying to disrespect the flag or be a distraction to the team, but as men we thought we had the right to stand up for what we believed in, and we demonstrated that.” [21] Seth DeValve, a tight end for the Browns and the first white NFL player to kneel for the anthem, stated, “The United States is the greatest country in the world. And it is because it provides opportunities to its citizens that no other country does. The issue is that it doesn’t provide equal opportunity to everybody, and I wanted to support my African-American teammates today who wanted to take a knee. We wanted to draw attention to the fact that there’s things in this country that still need to change.” [20] However, some Cleveland Browns fans expressed their dissatisfaction on the team’s Facebook page. One commenter posted, “Pray before or pray after. Taking a knee during the National Anthem these days screams disrespect for our Flag, Our Country and our troops. My son and the entire armed forces deserve better than that.” [22] On Friday, Sep. 22, 2017, President Donald Trump stated his opposition to NFL players kneeling during the anthem: “Wouldn’t you love to see one of these NFL owners, when somebody disrespects our flag, to say ‘Get that son of a bitch off the field right now. Out! He’s fired. He’s fired!’” The statement set off a firestorm on both sides of the debate. 
Roger Goodell, NFL Commissioner, said of Trump’s comments, “Divisive comments like these demonstrate an unfortunate lack of respect for the NFL, our great game and all of our players, and a failure to understand the overwhelming force for good our clubs and players represent in our communities.” [23] The controversy continued over the weekend as the President continued to tweet about the issue and others contributed opinions for and against kneeling during the anthem. On Sunday, Sep. 24, in London before the first NFL game played after Trump’s comments, at least two dozen Baltimore Ravens and Jacksonville Jaguars players knelt during the American national anthem, while other players, coaches, and staff locked arms, including Shad Khan, who is the only Pakistani-American Muslim NFL team owner. Throughout the day, some players, coaches, owners, and other staff kneeled or linked arms from every team except the Carolina Panthers. The Pittsburgh Steelers chose to remain in the locker room during the anthem, though offensive tackle and Army Ranger veteran Alejandro Villanueva stood at the entrance to the field alone, for which he has since apologized. Both the Seattle Seahawks and Tennessee Titans teams stayed in their locker rooms before their game, leaving the field mostly empty during the anthem. The Seahawks stated, “As a team, we have decided we will not participate in the national anthem. We will not stand for the injustice that has plagued people of color in this country. Out of love for our country and in honor of the sacrifices made on our behalf, we unite to oppose those that would deny our most basic freedoms.” [24] [25] [27] The controversy jumped to other sports as every player on WNBA’s Indiana Fever knelt on Friday, Sep. 22 (though WNBA players had been kneeling for months); Oakland A’s catcher Bruce Maxwell kneeled on Saturday becoming the first MLB player to do so; and Joel Ward, of the NHL’s San Jose Sharks, said he would not rule out kneeling. 
USA soccer’s Megan Rapinoe knelt during the anthem in 2016, prompting the US Soccer Federation to issue Policy 604-1, ordering all players to stand during the anthem. [28] [29] [30] [31] [35] The country was still debating the issue well into the week, with Trump tweeting throughout, including on Sep. 26: “The NFL has all sorts of rules and regulations. The only way out for them is to set a rule that you can’t kneel during our National Anthem!” [26] On May 23, 2018, the NFL announced that all 32 team owners agreed that all players and staff on the field shall “stand and show respect for the flag and the Anthem” or face “appropriate discipline.” However, all players will no longer be required to be on the field during the anthem and may wait off field or in the locker room. The new rules were adopted without input from the players’ union. On July 20, 2018, the NFL and the NFL Players Association (NFLPA) issued a joint statement putting the anthem policy on hold until the two organizations come to an agreement. [32] [33] [34] During the nationwide Black Lives Matter protests following the death of George Floyd, official league positions on kneeling began to change. On June 5, 2020, NFL Commissioner Roger Goodell stated, “We, the National Football League, condemn racism and the systematic oppression of black people. We, the National Football League, admit we were wrong for not listening to NFL players earlier and encourage all players to speak out and peacefully protest.” [39] Before the June 7, 2020 race, NASCAR lifted the guidelines that all team members must stand during the anthem, allowing NASCAR official and Army veteran Kirk Price to kneel during the anthem. [40] On June 10, 2020, the US Soccer Federation rescinded the league’s requirement that players stand during the anthem amid the Black Lives Matter protests following the death of George Floyd. 
The US Soccer Federation stated, “It has become clear that this policy was wrong and detracted from the important message of Black Lives Matter.” [35] In the wake of the 2020 killing of George Floyd and the protests that followed, 52% of Americans stated it was “OK for NFL players to kneel during the National Anthem to protest the police killing of African Americans.” [41] The debate largely quieted after the summer of 2020, with a brief resurgence about athletes displaying political gestures on Olympic podiums in Tokyo in 2021 and Beijing in 2022. For more on the National Anthem, see: “History of the National Anthem: Is ‘The Star-Spangled Banner’ Racist?”
# Bottled Water Bans - Pros & Cons - ProCon.org **Argument** Banning bottled water restricts consumers’ access to a product they want, and negatively affects small businesses. A survey by Harris Poll for the International Bottled Water Association found that 93% of Americans think “bottled water should be available wherever drinks are sold,” with 31% saying that they only, or mostly only, drink bottled water. Research by Kantar Worldpanel found that “40% of all water servings come in the form of bottled water.” As one blogger said, “everyone tells me that I’m wasting away money and harming the environment, but if it weren’t for bottled water I honestly wouldn’t drink any water at all… My personal choice is just not tap. I don’t like it.” Daniel Kenn, owner of Sudbury Coffee Works in Sudbury, MA, where a plastic water bottle ban was enacted, said, “people want water, it’s probably the biggest money maker in that cooler… almost every other town still allows plastic water bottle sales, which will put Sudbury Coffee Works at a competitive disadvantage when the ban takes effect.” **Background** Americans consumed 14.4 billion gallons of bottled water in 2019, up 3.6% from 2018, in what has been a steadily increasing trend since 2010. In 2016, bottled water outsold soda for the first time and has continued to do so every year since, making it the number one packaged beverage in the United States. 2020 revenue for bottled water was $61.326 million by June 15, and the overall market is expected to grow to $505.19 billion by 2028. [50] [51] [52] Globally, about 20,000 plastic bottles were bought every second in 2017, the majority of which contained drinking water. More than half of those bottles were not turned in for recycling, and of those recycled, only 7% were turned into new bottles. [49] In 2013, Concord, MA, became the first US city to ban single-serve plastic water bottles, citing environmental and waste concerns. 
Since then, many cities, colleges, entertainment venues, and national parks have followed suit, including San Francisco, the University of Vermont, the Detroit Zoo, and the Grand Canyon National Park. [17] [26] [44]
# Ride-Sharing Apps - Pros & Cons - ProCon.org **Argument** Ride-hailing services increase traffic congestion, emissions, and total vehicle miles traveled. Ride-hailing adds a total of 5.7 billion miles of driving each year in the nine metropolitan areas (Boston, Chicago, Los Angeles, Miami, New York, Philadelphia, San Francisco, Seattle and DC) that account for 70% of such trips in the US. At least 40% of the time, drivers are traveling without passengers in the car, adding more miles and vehicle emissions that wouldn’t exist without ride-hailing. As many as 60% of riders would have used public transit, walked, biked, or not taken a trip at all if ride-hailing weren’t an option. That means that nearly two-thirds of ride-hailing trips added additional cars to the roads. Studies show that ride-hailing makes traffic worse during already congested rush hours, both because of the extra cars on the road and because drivers look at their phones more often for passenger pickups and directions. Researchers found that ride-hailing contributes to a net increase in greenhouse gas emissions. **Background** The first Uber ride was on July 5, 2010 in San Francisco, CA. The app launched internationally in 2011 and reached one billion rides on Dec. 30, 2015, quickly followed by five billion on May 20, 2017 and 10 billion on June 10, 2018. [40] On May 22, 2012, Lyft launched in San Francisco as a part of Zimride and expanded to 60 cities in 2014 and to 100 more in 2017, at which point Lyft claimed more than one million rides a day. On Nov. 13, 2017, Lyft went international, allowing the company to reach one billion rides on Sep. 18, 2018. [41] Other ride-hailing and ride-sharing apps include Gett (which partners with Lyft in the US), Curb, Wingz, Via, Scoop, and Bridj. [42] 36% of Americans said they used ride-hailing services such as Uber or Lyft, according to a Jan. 4, 2019 Pew Research Center Survey. 
Use is up significantly from 2015 when just 15% had used the apps. [38] But use varies among populations. 45% of urban residents, 51% of people who were 18 to 29, 53% of people who earned $75,000 or more per year, and 55% of people with college degrees used the apps, compared to 19% of rural residents, 24% of people aged 50 or older, 24% of people who earn $30,000 or less per year, and 20% of people with a high school diploma or less. [38] In 2018, 70% of Uber and Lyft trips occurred in nine big metropolitan areas: Boston, Chicago, Los Angeles, Miami, New York, Philadelphia, San Francisco, Seattle, and Washington, DC. [3] Uber officially overtook yellow cabs in New York City in July 2017, when it reported an average of 289,000 trips per day compared to 277,000 taxi rides. More than 2.61 billion ride-hailing trips were taken in 2017, a 37% increase over the 1.90 billion trips in 2016. Ride-hailing trips were down significantly in 2020 and 2021 due to the COVID-19 pandemic. [3] [4] [39]
# Should People Become Vegetarian? **Argument** It is cruel and unethical to kill animals for food when vegetarian options are available, especially because raising animals in confinement for slaughter is cruel, and many animals in the United States are not slaughtered humanely. Animals are sentient beings that have emotions and social connections. Scientific studies show that cattle, pigs, chickens, and all warm-blooded animals can experience stress, pain, and fear. In 2017, the United States slaughtered 33.7 million cows, 9.2 billion chickens, 124.5 million pigs, and 2.4 million sheep. These animals should not have to die painfully and fearfully to satisfy an unnecessary dietary preference. About 50% of meat produced in the United States came from confined animal feeding operations (CAFOs) in 2008, where mistreated animals live in filthy, overcrowded spaces with little or no access to pasture, natural light, or clean air. In CAFOs pigs have their tails cut short, chickens have their toenails, spurs, and beaks clipped, and cows have their horns removed and tails docked with no painkillers. Pregnant pigs are kept in metal gestation crates barely bigger than the pigs themselves. Baby cows raised for veal are tied up and confined in tiny stalls their entire short lives (3-18 weeks). The Humane Methods of Slaughter Act (HMSA) mandates that livestock be stunned unconscious before slaughter to minimize suffering. However, birds such as chickens and turkeys are exempted from the HMSA, and a 2010 report by the US Government Accountability Office (GAO) found that the USDA was not “taking consistent actions to enforce the HMSA.” **Background** Americans eat an average of 58 pounds of beef, 96 pounds of chicken, and 52 pounds of pork, per person, per year, according to the United States Department of Agriculture (USDA). Vegetarians, about 5% of the US adult population, do not eat meat (including poultry and seafood). 
The percentage of Americans who identify as vegetarian has remained steady for two decades. 11% of those who identify as liberal follow a vegetarian diet, compared to 2% of conservatives. Many proponents of vegetarianism say that eating meat harms health, wastes resources, and creates pollution. They often argue that killing animals for food is cruel and unethical since non-animal food sources are plentiful. Many opponents of a vegetarian diet say that meat consumption is healthful and humane, and that producing vegetables causes many of the same environmental problems as producing meat. They also argue that humans have been eating and enjoying meat for 2.3 million years. Read more background…
# Is Binge Watching Bad for You? Top 3 Pros and Cons **Argument** Binge-watching makes a show more fulfilling. While binge-watching, the viewer can feel the pleasure of full immersion (aka “the zone”), which is a great feeling similar to staying up all night to finish a book or project. Shows made for binge-watching, such as Orange Is the New Black and Stranger Things, are often more sophisticated and have multiple intricate storylines, complex relationships, and multi-dimensional characters. Watching several episodes at once tends to make the story easier to follow and more enjoyable than a single episode. That’s a big reason why the show You went unnoticed while airing on Lifetime but became a sensation once available to binge on Netflix. **Background** The first usage of the term “binge-watch” dates back to 2003, but the concept of watching multiple episodes of a show in one sitting gained popularity around 2012. Netflix’s 2013 decision to release all 13 episodes of the first season of House of Cards at one time, instead of posting an episode per week, marked a new era of binge-watching streaming content. In 2015, “binge-watch” was declared the word of the year by Collins English Dictionary, which said use of the term had increased 200% in the prior year. [1] [2] [3] 73% of Americans admit to binge-watching, with the average binge lasting three hours and eight minutes. 90% of millennials and 87% of Gen Z stated they binge-watch, and 40% of those age groups binge-watch an average of six episodes of television in one sitting. [4] [5] The coronavirus pandemic led to a sharp increase in binge-viewing: HBO, for example, saw a 65% jump in subscribers watching three or more episodes at once starting on Mar. 14, 2020, around the time when many states implemented stay-at-home measures to slow the spread of COVID-19. [28] A 2021 Sykes survey found 38% of respondents streamed three or more hours of content on weekdays, and 48% did so on weekends. 
However, a Nielsen study found adults watched four or more hours of live and streaming TV a day, indicating individuals may be underestimating their TV consumption. [31]
# Is a College Education Worth It? **Argument** Many recent college graduates are un- or underemployed. The unemployment rate for recent college graduates (4.0%) exceeded the average for all workers, including those without a degree (3.6%) in 2019. The underemployment rate was 34% for all college graduates and 41.1% for recent grads. The underemployment (insufficient work) rate for college graduates in 2015 was 6.2% overall: 5.2% for white graduates, 8.4% for Hispanic graduates, and 9.7% for black graduates. According to the Federal Reserve Bank of New York, 44% of recent college graduates were underemployed in 2012. **Background** People who argue that college is worth it contend that college graduates have higher employment rates, bigger salaries, and more work benefits than high school graduates. They say college graduates also have better interpersonal skills, live longer, have healthier children, and have proven their ability to achieve a major milestone. People who argue that college is not worth it contend that the debt from college loans is too high and delays graduates from saving for retirement, buying a house, or getting married. They say many successful people never graduated from college and that many jobs, especially trades jobs, do not require college degrees. Read more background…
# Should Animals Be Used for Scientific or Commercial Testing? **Argument** Animals themselves benefit from the results of animal testing. Vaccines tested on animals have saved millions of animals that would otherwise have died from rabies, distemper, feline leukemia, infectious hepatitis virus, tetanus, anthrax, and canine parvovirus. Treatments for animals developed using animal testing also include pacemakers for heart disease and remedies for glaucoma and hip dysplasia. Animal testing has been instrumental in saving endangered species from extinction, including the black-footed ferret, the California condor, and the tamarins of Brazil. The American Veterinary Medical Association (AVMA) endorses animal testing to develop safe drugs, vaccines, and medical devices. **Background** An estimated 26 million animals are used every year in the United States for scientific and commercial testing. Animals are used to develop medical treatments, determine the toxicity of medications, check the safety of products destined for human use, and other biomedical, commercial, and health care uses. Research on living animals has been practiced since at least 500 BC. Proponents of animal testing say that it has enabled the development of many life-saving treatments for both humans and animals, that there is no alternative method for researching a complete living organism, and that strict regulations prevent the mistreatment of animals in laboratories. Opponents of animal testing say that it is cruel and inhumane to experiment on animals, that alternative methods available to researchers can replace animal testing, and that animals are so different from human beings that research on animals often yields irrelevant results. Read more background…
# Private Prisons - Pros & Cons - ProCon.org **Argument** All prisons—not just privately operated ones—should be abolished. Author Rachel Kushner explained, “Ninety-two percent of people locked inside American prisons are held in publicly run, publicly funded facilities, and 99 percent of those in jail are in public jails. Every private prison could close tomorrow, and not a single person would go home. But the ideas that private prisons are the culprit, and that profit is the motive behind all prisons, have a firm grip on the popular imagination.” Following that logic, Holly Genovese, PhD student in American Studies at the University of Texas at Austin, argued, “Anyone who examines privately owned US prisons has to come to the conclusion that they are abhorrent and must be eliminated. But they can also be low-hanging fruit used by opportunistic Democrats to ignore the much larger problem of — and solutions to — mass incarceration… Private prisons should be abolished. But if the problem is the profit — institutions unjustly benefiting from the labor of incarcerated people — the fight against private prisons is only a beginning. Political figures and others serious about fighting injustice must engage with the profit motives of federally and state-funded prisons as well, and seriously consider the abolition of all prisons — as they are all for profit.” As Woods Ervin, a prison abolitionist with Critical Resistance, explained, “we have to think about the rate at which the prison-industrial complex is able to actually address rape and murder. We’ve spent astronomical amounts of our budgets at the municipal level, at the federal level, on policing and caging people. And yet I don’t think that people feel any safer from the threat of sexual assault or the threat of murder. 
What is the prison-industrial complex doing to actually solve those problems in our society?” Abolitionists instead focus on community-level issues to prevent the concerns that lead to incarceration in the first place. Eliminating private prisons still leaves the problems of mass incarceration and public prisons. **Background** Prison privatization generally operates in one of three ways: 1. Private companies provide services to a government-owned and managed prison, such as building maintenance, food supplies, or vocational training; 2. Private companies manage government-owned facilities; or 3. Private companies own and operate the prisons and charge the government to house inmates. [1] In the United States, private prisons have their roots in slavery. Some privately owned prisons held enslaved people while the slave trade continued after the importation of slaves was banned in 1807. Recaptured runaways were also imprisoned in private facilities as were black people who were born free and then illegally captured to be sold into slavery. Many plantations were turned into private prisons from the Civil War forward; for example, the Angola Plantation became the Louisiana State Penitentiary (nicknamed “Angola” for the African homeland of many of the slaves who originally worked on the plantation), the largest maximum-security prison in the country. In 2000, the Vann Plantation in North Carolina was opened as the private, minimum-security Rivers Correctional Facility (operated by GEO Group), though the facility’s federal contract expired in Mar. 2021. [2] [3] [4] [5] [6] Inmates in private prisons in the 19th century were commonly used for labor via “convict leasing” in which the prison owners were paid for the labor of the inmates. 
According to the Innocence Project, Jim Crow laws after the Civil War ensured the newly freed black population was imprisoned at high rates for petty or nonexistent crimes in order to maintain the labor force needed for picking cotton and other labor previously performed by enslaved people. However, the practice of convict leasing extended beyond the American South. Until 1860, California awarded private management contracts for San Quentin State Prison, giving the winning bidder leasing rights to the convicts. Convict leasing faded in the early 20th century as states banned the practice and shifted to forced farming and other labor on the land of the prisons themselves. [2] [3] [7] [8] [9] [10] What Americans think of now as a private prison is an institution owned by a conglomerate such as CoreCivic, GEO Group, LaSalle Corrections, or Management and Training Corporation. This sort of private prison began operations in 1984 in Tennessee and 1985 in Texas in response to the rapidly rising prison population during the war on drugs. State-run facilities were overpopulated with increasing numbers of people being convicted for drug offenses. Corrections Corporation of America (now CoreCivic) first promised to run larger prisons more cheaply to solve the problems. In 1987, Wackenhut Corrections Corporation (now GEO Group) won a federal contract to run an immigration detention center, expanding the focus of private prisons. [11] [12] [13] In 2016, the federal government announced it would phase out the use of private prisons, a policy rescinded by Attorney General Jeff Sessions under the Trump administration but reinstated under President Biden. However, Biden’s order did not limit the use of private facilities for federal immigrant detention. Twenty US states did not use private prisons as of 2019. 
[11] [12] [14] In 2019, 115,428 people (8% of the prison population) were incarcerated in state or federal private prisons; 81% of the detained immigrant population (40,634 people) was held in private facilities. The federal government held the most people in private prisons in 2019 (27,409), followed by Texas (12,516) and Florida (11,915). However, Montana held the largest percentage of the state’s inmates in private prisons (47%). [11] According to the Sentencing Project, “[p]rivate prisons incarcerated 99,754 American residents in 2020, representing 8% of the total state and federal prison population. Since 2000, the number of people housed in private prisons has increased 14%.” [37] On Jan. 20, 2022, the federal Bureau of Prisons reported 153,855 total federal inmates, 6,336 of whom were held in private facilities, or about 4% of people in federal custody. [36]
# Should People Become Vegetarian? **Argument** A diet that includes meat does not raise risk of disease. Saturated fats from meat are not to blame for modern diseases like heart disease, cancer, and obesity. Chemically processed and hydrogenated vegetable oils like corn and canola cause these conditions because they contain harmful free radicals and trans fats formed during chemical processing. Lean red meat, eaten in moderation, can be a healthful part of a balanced diet. According to researchers at the British Nutrition Foundation, “there is no evidence” that moderate consumption of unprocessed lean red meat has any negative health effects. However, charring meat during cooking can create over 20 chemicals linked to cancer, and the World Cancer Research Fund finds that processed meats like bacon, sausage, and salami, which contain preservatives such as nitrates, are strongly associated with bowel cancer and should be avoided. They emphasize that lean, unprocessed red meat can be a valuable source of nutrients and do not recommend that people remove red meat from their diets entirely, but rather, that they limit consumption to 11 ounces per week or less. **Background** Americans eat an average of 58 pounds of beef, 96 pounds of chicken, and 52 pounds of pork, per person, per year, according to the United States Department of Agriculture (USDA). Vegetarians, about 5% of the US adult population, do not eat meat (including poultry and seafood). The percentage of Americans who identify as vegetarian has remained steady for two decades. 11% of those who identify as liberal follow a vegetarian diet, compared to 2% of conservatives. Many proponents of vegetarianism say that eating meat harms health, wastes resources, and creates pollution. They often argue that killing animals for food is cruel and unethical since non-animal food sources are plentiful. 
Many opponents of a vegetarian diet say that meat consumption is healthful and humane, and that producing vegetables causes many of the same environmental problems as producing meat. They also argue that humans have been eating and enjoying meat for 2.3 million years. Read more background…
# American Socialism - Pros & Cons - ProCon.org **Argument** The job of the US government is to enable free enterprise and then get out of the way of individual ingenuity and hard work. The government should promote equal opportunity, not promise equal results. As the Dallas Federal Reserve explained, “Free enterprise means men and women have the opportunity to own economic resources, such as land, minerals, manufacturing plants and computers, and to use those tools to create goods and services for sale… Others have no intention of starting a business. If they choose, they can offer their labor, another economic resource, for wages and salaries… By allowing people to pursue their own interests, a free enterprise system can produce phenomenal results. Running shoes, walking shoes, mint toothpaste, gel toothpaste, skim milk, chocolate milk, cellular phones and BlackBerrys are just a few of the millions of products created as a result of economic freedom.” When the government allows individuals to pursue what is best for them without interference, individuals prosper and the country enjoys “an average per capita GDP roughly six times greater than those with lower levels of economic freedom, as well as higher life expectancy, political and civil liberties, gender equality, and happiness.” In summary, “the diversity of economic freedom… helps us thrive both as individuals and as a society.” **Background** Socialism in the United States is an increasingly popular topic. Some argue that the country should actively move toward socialism to spur social progress and greater equity, while others demand that the country prevent this by any and all means necessary. This subject is often brought up in connection with universal healthcare and free college education, ideas that are socialist by definition, or as a general warning against leftist politics.
While some politicians openly promote socialism or socialist policies (Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez, for example), others reject the socialist label (now Vice President Kamala Harris said she was “not a Democratic Socialist” during the 2020 presidential campaign) or invoke it as a dirty word that is contrary to American ideals (in the 2019 State of the Union, President Trump stated the US would “never be a socialist country” because “We are born free, and we will stay free”). [1] [2] To consider whether the United States should adopt socialism or at least more socialist policies, the relevant terms must first be defined. Socialism is an economic and social policy in which the public owns industry and products, rather than private individuals or corporations. Under socialism, the government controls most means of production and natural resources, among other industries, and everyone in the country is entitled to an equitable share according to their contribution to society. Individual private ownership is encouraged. [3] Politically, socialist countries tend to be multi-party with democratic elections. Currently no country operates under a 100% socialist policy. Denmark, Iceland, Finland, Norway, and Sweden, while heavily socialist, all combine socialism with capitalism. [4] [5] Capitalism, the United States’ current economic model, is a policy in which private individuals and corporations control production that is guided through markets, not by the government. Capitalism is also called a free market economy or free enterprise economy. Capitalism functions on private property, profit motive, and market competition. [6] Politically, capitalist countries range from democracies to monarchies to oligarchies to despotisms. Most western countries are capitalist, including the United States, Canada, the United Kingdom, Ireland, Switzerland, Australia, and New Zealand. 
Also capitalist are Hong Kong, Singapore, Taiwan, and the United Arab Emirates. However, many of these countries, including the United States, have implemented socialist policies within their capitalist systems, such as social security, minimum wages, and energy subsidies. [7] [8] Communism is frequently used as a synonym for socialism and the exact differences between the two are heavily debated. One difference is that communism provides everyone in the country with an equal share, rather than the equitable share promised by socialism. Communism is commonly summarized by the Karl Marx slogan, “From each according to his ability, to each according to his needs,” and was believed by Marx to be the step beyond socialism. Individual private ownership is illegal in most communist countries. [4] [9] Politically, communist countries tend to be led by one communist party, and elections are only within that party. Frequently, the military has significant political power. Historically, a secret police has also shared that power, as in the former Soviet Union, the largest communist country in history. Civil liberties (such as freedom of the press, speech, and assembly) are publicly embraced, but frequently limited in practice, often by force. Countries that are currently communist include China, Cuba, Laos, North Korea, and Vietnam. Worth noting is that some of these countries, including the Democratic People’s Republic of Korea (North Korea) and the Socialist Republic of Vietnam, label themselves as democratic or socialist though they meet the definition of communism and are run by communist parties. Additionally, some communist countries, such as China and Vietnam, operate with partial free market economies, which is a cornerstone of capitalism, and some socialist policies. [4] [5] [10] [11] [12] [13] Given those definitions, should the United States adopt more socialist policies such as free college, medicare-for-all, and the Green New Deal?
# Should Animals Be Used for Scientific or Commercial Testing? **Argument** The vast majority of biologists and several of the largest biomedical and health organizations in the United States endorse animal testing. A poll of 3,748 scientists by the Pew Research Center found that 89% favored the use of animals in scientific research. The American Cancer Society, American Physiological Society, National Association for Biomedical Research, American Heart Association, and the Society of Toxicology all advocate the use of animals in scientific research. **Background** An estimated 26 million animals are used every year in the United States for scientific and commercial testing. Animals are used to develop medical treatments, determine the toxicity of medications, check the safety of products destined for human use, and other biomedical, commercial, and health care uses. Research on living animals has been practiced since at least 500 BC. Proponents of animal testing say that it has enabled the development of many life-saving treatments for both humans and animals, that there is no alternative method for researching a complete living organism, and that strict regulations prevent the mistreatment of animals in laboratories. Opponents of animal testing say that it is cruel and inhumane to experiment on animals, that alternative methods available to researchers can replace animal testing, and that animals are so different from human beings that research on animals often yields irrelevant results. Read more background…
# Should More Gun Control Laws Be Enacted? **Argument** Armed civilians are unlikely to stop crimes and are more likely to make dangerous situations, including mass shootings, more deadly. None of the 62 mass shootings between 1982 and 2012 was stopped by an armed civilian. Gun rights activists regularly state that a 2002 mass shooting at the Appalachian School of Law in Virginia was stopped by armed students, but those students were current and former law enforcement officers and the killer was out of bullets when subdued. Other mass shootings often held up as examples of armed citizens stopping mass shootings involved law enforcement or military personnel, or the shooter had stopped shooting before being subdued, such as a 1997 high school shooting in Pearl, MS; a 1998 middle school dance shooting in Edinboro, PA; a 2007 church shooting in Colorado Springs, CO; and a 2008 bar shooting in Winnemucca, NV. Jeffrey Voccola, Assistant Professor of Writing at Kutztown University, notes, “The average gun owner, no matter how responsible, is not trained in law enforcement or on how to handle life-threatening situations, so in most cases, if a threat occurs, increasing the number of guns only creates a more volatile and dangerous situation.” **Background** The United States has 120.5 guns per 100 people, or about 393,347,000 guns, which is the highest total and per capita number in the world. 22% of Americans own one or more guns (35% of men and 12% of women). 
America’s pervasive gun culture stems in part from its colonial history, revolutionary roots, frontier expansion, and the Second Amendment, which states: “A well regulated militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.” Proponents of more gun control laws state that the Second Amendment was intended for militias; that gun violence would be reduced; that gun restrictions have always existed; and that a majority of Americans, including gun owners, support new gun restrictions. Opponents say that the Second Amendment protects an individual’s right to own guns; that guns are needed for self-defense from threats ranging from local criminals to foreign invaders; and that gun ownership deters crime rather than causes more crime. Read more background…
# Should the United States Continue Its Use of Drone Strikes Abroad? **Argument** Drones limit the scope, scale, and casualties of military action, keeping the US military and civilians in other countries safer. Invading Pakistan, Yemen, or Somalia with boots on the ground to capture relatively small terrorist groups would lead to expensive conflict, responsibility for destabilizing those governments, large numbers of civilian casualties, empowerment of enemies who view the United States as an occupying imperialist power, and US military deaths, among other consequences. America’s attempt to destroy al Qaeda and the Taliban in Afghanistan by invading and occupying the country resulted in a war that has dragged on for over 12 years. Using drone strikes against terrorists abroad allows the United States to achieve its goals at a fraction of the cost of an invasion in money, manpower, and lives. Drones are launched from bases in allied countries and are operated remotely by pilots in the United States, minimizing the risk of injury and death that would occur if ground soldiers and airplane pilots were used instead. Al Qaeda, the Taliban, and their affiliates often operate in distant and environmentally unforgiving locations where it would be extremely dangerous for the United States to deploy teams of special forces to track and capture terrorists. Such pursuits may pose serious risks to US troops including firefights with surrounding tribal communities, anti-aircraft shelling, land mines, improvised explosive devices (IEDs), suicide bombers, snipers, dangerous weather conditions, harsh environments, etc. [10] Further, drone pilots suffer less than traditional pilots because they do not have to be directly present on the battlefield, can live a normal civilian life in the United States, and do not risk death or serious injury. Only 4% of active-duty drone pilots are at “high risk for PTSD” compared to the 12-17% of soldiers returning from Iraq and Afghanistan. 
Drones also have lower civilian casualties than “boots on the ground” missions. Between 1,193 and 2,654 civilians have died in drone strikes in Afghanistan, Pakistan, Somalia, and Yemen, or between 7% and 15% of those killed by drones. By contrast, about 335,000 total civilians have been killed violently in the War on Terror in Afghanistan, Iraq, Pakistan, Syria, and Yemen. [139] The traditional weapons of war – bombs, shells, mines, mortars – cause more collateral (unintended) damage to people and property than drones, whose accuracy and technical precision mostly limit casualties to combatants and intended targets. Civilian deaths in World War II are estimated at 40 to 67% of total war deaths. In the Korean, Vietnam, and Balkan Wars, civilian deaths accounted for approximately 70%, 31%, and 45% of deaths respectively. Former Secretary of Defense Robert Gates, PhD, stated, “You can far more easily limit collateral damage with a drone than you can with a bomb, even a precision-guided munition, off an airplane.” Former CIA Director Leon Panetta, JD, concurred, “I think this is one of the most precise weapons that we have in our arsenal.” And former State Department Legal Advisor Harold Hongju Koh, JD, agreed that drones “have helped to make our targeting even more precise.” **Background** Unmanned aerial vehicles (UAVs), otherwise known as drones, are remotely-controlled aircraft which may be armed with missiles and bombs for attack missions. Since the World Trade Center attacks on Sep. 11, 2001 and the subsequent “War on Terror,” the United States has used thousands of drones to kill suspected terrorists in Pakistan, Afghanistan, Yemen, Somalia, and other countries. 
Proponents state that drone strikes help prevent “boots on the ground” combat and make America safer, that the strikes are legal under American and international law, and that they are carried out with the support of Americans and foreign governments. Opponents state that drone strikes kill civilians, creating more terrorists than they kill and sowing animosity in foreign countries; that the strikes are extrajudicial and illegal; and that they create a dangerous disconnect between the horrors of war and the soldiers carrying out the strikes.
# Is Social Media Good for Society? **Argument** Social media allows people to improve their relationships and make new friends. 93% of adults on Facebook use it to connect with family members, 91% use it to connect with current friends, and 87% use it to connect with friends from the past. 72% of all teens connect with friends via social media. 81% of teens age 13 to 17 reported that social media makes them feel more connected to the people in their lives, and 68% said using it makes them feel supported in tough times. 57% of teens have made new friends online. **Background** Around seven out of ten Americans (72%) use social media sites such as Facebook, Instagram, Twitter, LinkedIn, and Pinterest, up from 26% in 2008. [26] [189] On social media sites, users may develop biographical profiles, communicate with friends and strangers, do research, and share thoughts, photos, music, links, and more. Proponents of social networking sites say that the online communities promote increased interaction with friends and family; offer teachers, librarians, and students valuable access to educational support and materials; facilitate social and political change; and disseminate useful information rapidly. Opponents of social networking say that the sites prevent face-to-face communication; waste time on frivolous activity; alter children’s brains and behavior, making them more prone to ADHD; expose users to predators like pedophiles and burglars; and spread false and potentially dangerous information. Read more background…
# Should Teachers Get Tenure? **Argument** Tenure makes it costly for schools to remove a teacher with poor performance or who is guilty of wrongdoing. It costs an average of $313,000 to fire a teacher in New York state. The New York Department of Education spent an estimated $15-20 million a year paying tenured teachers accused of incompetence and wrongdoing to report to reassignment centers (sometimes called “rubber rooms”) where they were paid to sit idly. **Background** Teacher tenure is the increasingly controversial form of job protection that public school teachers in 46 states receive after 1-5 years on the job. An estimated 2.3 million teachers have tenure. Proponents of tenure argue that it protects teachers from being fired for personal or political reasons, and prevents the firing of experienced teachers to hire less expensive new teachers. They contend that since school administrators grant tenure, neither teachers nor teacher unions should be unfairly blamed for problems with the tenure system. Opponents of tenure argue that this job protection makes the removal of poorly performing teachers so difficult and costly that most schools end up retaining their bad teachers. They contend that tenure encourages complacency among teachers who do not fear losing their jobs, and that tenure is no longer needed given current laws against job discrimination. Read more background…
# Should Birth Control Pills Be Available Over the Counter (OTC)? **Argument** OTC birth control pills would increase access for low-income and medically underserved populations. Twenty million women live in “contraception deserts,” places with one clinic or fewer per 1,000 women who need government-funded birth control from programs such as Medicare. In underserved communities, women could more easily find a local drug store for medication than a doctor’s office. Studies found that 11-21% of sexually active low-income women would be more likely to use the Pill if it were available OTC. Denicia Cadena, Policy Director for Young Women United in New Mexico, stated: “Our rural communities are most profoundly impacted by our state’s health care and provider shortages. Patients face three- to six-month wait times for any primary care and even longer for specialty care… 11 of the state’s 33 counties have no obstetrics and gynecology physicians.” Birth control can be difficult for many women to obtain, particularly teens, immigrants, women of color, and the uninsured. The National Latina Institute of Reproductive Health stated: “over-the-counter access will greatly reduce the systemic barriers, like poverty, immigration status and language, that currently prevent Latinas from regularly accessing birth control and results in higher rates of unintended pregnancy.” Other medically underserved communities, such as LGBTQ+ people, are more likely to be uninsured (16% of all LGBTQ people making less than $45,000 per year are uninsured), more likely to face economic barriers to healthcare (29% postponed necessary medical care and 24% postponed preventative screenings due to cost), and more likely to face discrimination in the healthcare industry, resulting in less or no reproductive healthcare. **Background** Of the 72.2 million American women of reproductive age, 64.9% use a contraceptive. 
Of those, 9.1 million (12.6% of contraceptive users) use birth control pills, which are the second most commonly used method of contraception in the United States after female sterilization (aka tubal ligation or “getting your tubes tied”). The Pill is currently available by prescription only, and a debate has emerged about whether the birth control pill should be available over-the-counter (OTC), which means the Pill would be available along with other drugs such as Tylenol and Benadryl in drug store aisles. Since 1976, more than 90 drugs have switched from prescription to OTC status, including Sudafed (1976), Advil (1984), Rogaine (1996), Prilosec (2003), and Allegra (2011). Read more background…
# Is Cell Phone Radiation Safe? **Argument** Numerous peer-reviewed studies have shown an association between cell phone use and the development of brain tumors. In 2018, the National Toxicology Program (NTP) concluded, “high exposure to RFR (900 MHz) used by cell phones was associated with: [1] Clear evidence of an association with tumors in the hearts of male rats. The tumors were malignant schwannomas. [2] Some evidence of an association with tumors in the brains of male rats. The tumors were malignant gliomas. [and 3] Some evidence of an association with tumors in the adrenal glands of male rats. The tumors were benign, malignant, or complex combined pheochromocytoma.” The NTP indicated “clear evidence” of a link between cell phone radiation and cancer, the highest category of evidence used by the NTP. A Feb. 2017 study concluded, “We found evidence linking mobile phone use and risk of brain tumours especially in long-term users (≥10 years). Studies with higher quality showed a trend towards high risk of brain tumour, while lower quality showed a trend towards lower risk/protection.” Studies have also linked cell phone use to thyroid and breast cancers. And other studies have similarly concluded that there is an association between cell phone use and increased risk of developing brain and head tumors. **Background**
# Should Birth Control Pills Be Available Over the Counter (OTC)? **Argument** OTC status for birth control pills could result in more unwanted pregnancies. The birth control pill is not the most effective form of birth control. Among birth control methods, the Pill ranks seventh in effectiveness. Typical use of the Pill results in nine unintended pregnancies out of 100 women after one year of use, a number that increases steadily to 61 unintended pregnancies out of 100 after ten years of typical use. Meanwhile, typical use of copper IUDs results in eight unintended pregnancies per 100 women after ten years of typical use, female sterilization results in five, the levonorgestrel IUD and male sterilization result in two, and hormonal implants result in just one. Robin Marty, health writer, noted that because the more effective options “would require a doctor’s visit and the Pill would just require a trip to the store, women may be inclined to use less effective contraception for the sake of convenience.” **Background** Of the 72.2 million American women of reproductive age, 64.9% use a contraceptive. Of those, 9.1 million (12.6% of contraceptive users) use birth control pills, which are the second most commonly used method of contraception in the United States after female sterilization (aka tubal ligation or “getting your tubes tied”). The Pill is currently available by prescription only, and a debate has emerged about whether the birth control pill should be available over-the-counter (OTC), which means the Pill would be available along with other drugs such as Tylenol and Benadryl in drug store aisles. Since 1976, more than 90 drugs have switched from prescription to OTC status, including Sudafed (1976), Advil (1984), Rogaine (1996), Prilosec (2003), and Allegra (2011). Read more background…
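The ten-year figures cited in the argument above follow from compounding each method's annual typical-use failure rate over successive years. A minimal sketch of that arithmetic (assuming, for illustration, that each year's failure risk is independent, an assumption consistent with the source's numbers):

```python
def cumulative_failure(annual_rate: float, years: int) -> float:
    """Probability of at least one failure over `years` of use,
    treating each year as an independent trial."""
    return 1 - (1 - annual_rate) ** years

# The Pill: 9% typical-use failure per year compounds to ~61% over a decade,
# matching the 61-per-100-women figure cited above.
pill_10yr = cumulative_failure(0.09, 10)
print(f"Pill over 10 years: {pill_10yr:.0%}")        # prints: Pill over 10 years: 61%

# Copper IUD: roughly 0.8% per year compounds to ~8 per 100 over ten years.
iud_10yr = cumulative_failure(0.008, 10)
print(f"Copper IUD over 10 years: {iud_10yr:.0%}")   # prints: Copper IUD over 10 years: 8%
```

The comparison makes the argument's point concrete: a small gap in annual effectiveness widens substantially over a decade of use.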
# Fracking Pros and Cons - Top 3 Arguments For and Against **Argument** Natural gas is a necessary bridge fuel to get to 100% clean energy and eliminate coal and petroleum, and fracking is the best way to extract natural gas. Mark Little, President and Chief Executive for Suncor, explained, “The choice is not between fossil fuels and renewable energy, but rather, how do we accelerate the growth of renewables while reducing greenhouse gas emissions from the use of fossil fuels.” Replacing coal and petroleum with natural gas obtained by fracking now allows the US to achieve short-term and immediate reductions in greenhouse gases that cause climate change while alternative energies such as solar and wind are built into viable industries. In the 2014 State of the Union address, President Barack Obama stated, “natural gas – if extracted safely, it’s the bridge fuel that can power our economy with less of the carbon pollution that causes climate change.” Oil and gas company BP explained, “[A]s the world works towards net zero emissions, we think natural gas will play an important role in getting us all there… Natural gas has far lower emissions than coal when burnt for power and is a much cleaner way of generating electricity. Switching from coal to gas has cut more than 500 million tonnes of CO2 from the power sector this decade alone.” **Background** Fracking (hydraulic fracturing) is a method of extracting natural gas from deep underground via a drilling technique. First, a vertical well is drilled and encased in steel or cement. Then, a horizontal well is drilled in the layer of rock that contains natural gas. After that, fracking fluid is pumped into the well at an extremely high pressure so that it fractures the rock in a way that allows oil and gas to flow through the cracks to the surface. [1] Colonel Edward A. L. Roberts first developed a version of fracking in 1862. 
During the Civil War at the Battle of Fredericksburg, Roberts noticed how artillery blasts affected channels of water. The idea of “shooting the well” was further developed by lowering a sort of torpedo into an oil well. The torpedo was then detonated, which increased oil flow. [2] In the 1940s, explosives were replaced by high-pressure liquids, beginning the era of hydraulic fracturing. The 21st century brought two further innovations: horizontal drilling and slick water (a mix of water, sand, and chemicals) to increase fluid flow. Spurred by increased financial investment and global oil prices, fracking picked up speed and favor. About one million oil wells were fracked between 1940 and 2014, with about one third of the wells fracked after 2000. [2] [3] Most US states allow fracking, though four states have banned the practice as of Feb. 2021: Vermont (2012), New York (temporarily in 2014; permanently in 2020), Maryland (2017), and Washington (2019). In Apr. 2021, California banned new fracking projects as of 2024. [4] [5] [6] [7] [8] [32] Fracking was a hot-button issue during the US presidential race of 2020, with President Trump firmly in favor of fracking and Vice President Biden expressing more concern about the practice, especially on federal lands.
# Should the United States Maintain Its Embargo against Cuba? **Argument** The embargo harms the US economy. The US Chamber of Commerce opposes the embargo, saying that it costs the United States $1.2 billion annually in lost sales of exports. A study by the Cuba Policy Foundation, a nonprofit founded by former US diplomats, estimated that the annual cost to the US economy could be as high as $4.84 billion in agricultural exports and related economic output. “If the embargo were lifted, the average American farmer would feel a difference in his or her life within two to three years,” the study’s author said. A Mar. 2010 study by Texas A&M University calculated that removing the restrictions on agricultural exports and travel to Cuba could create as many as 6,000 jobs in the US. Nine US governors released a letter on Oct. 14, 2015 urging Congress to lift the embargo, which stated: “Foreign competitors such as Canada, Brazil and the European Union are increasingly taking market share from U.S. industry [in Cuba], as these countries do not face the same restrictions on financing… Ending the embargo will create jobs here at home, especially in rural America, and will create new opportunities for U.S. agriculture.” **Background** Since the 1960s, the United States has imposed an embargo against Cuba, the Communist island nation 90 miles off the coast of Florida. The embargo, known among Cubans as “el bloqueo” or “the blockade,” consists of economic sanctions against Cuba and restrictions on Cuban travel and commerce for all people and companies under US jurisdiction. Proponents of the embargo argue that Cuba has not met the US conditions for lifting the embargo, including transitioning to democracy and improving human rights. They say that backing down without getting concessions from the Castro regime will make the United States appear weak, and that only the Cuban elite would benefit from open trade. 
Opponents of the Cuba embargo argue that it should be lifted because the failed policy is a Cold War relic and has clearly not achieved its goals. They say the sanctions harm the US economy and Cuban citizens, and prevent opportunities to promote change and democracy in Cuba. They say the embargo hurts international opinion of the United States. Read more background…
# Should Recreational Marijuana Be Legal?' **Argument** Marijuana is less harmful than alcohol and tobacco, which are already legal. Alcohol and tobacco are legal, yet they are known to cause cancer, heart failure, liver damage, and more. According to the CDC, six people die from alcohol poisoning every day and 88,000 people die annually due to excessive alcohol use in the United States. There are no recorded cases of death from marijuana overdose.   Three to four times as many Americans are dependent on alcohol as on marijuana. A study in the Lancet ranking the harmfulness of drugs put alcohol first as the most harmful, tobacco as sixth, and cannabis eighth. A national poll found that people view tobacco as a greater threat to health than marijuana by a margin of four to one (76% vs. 18%), and 72% of people surveyed believed that regular use of alcohol was more dangerous than marijuana use.   “In several respects, even sugar poses more of a threat to our nation’s health than pot,” said Dr. David L. Nathan, a clinical psychiatrist and president of Doctors for Cannabis Regulation. **Background** More than half of US adults, over 128 million people, have tried marijuana, despite it being an illegal drug under federal law. Nearly 600,000 Americans are arrested for marijuana possession annually – more than one person per minute. Public support for legalizing marijuana went from 12% in 1969 to 66% today. Recreational marijuana, also known as adult-use marijuana, was first legalized in Colorado and Washington in 2012. Proponents of legalizing recreational marijuana say it will add billions to the economy, create hundreds of thousands of jobs, free up scarce police resources, and stop the huge racial disparities in marijuana enforcement. They contend that regulating marijuana will lower street crime, take business away from the drug cartels, and make marijuana use safer through required testing, labeling, and child-proof packaging. 
They say marijuana is less harmful than alcohol, and that adults should have a right to use it if they wish. Opponents of legalizing recreational marijuana say it will increase teen use and lead to more medical emergencies including traffic deaths from driving while high. They contend that revenue from legalization falls far short of the costs in increased hospital visits, addiction treatment, environmental damage, crime, workplace accidents, and lost productivity. They say that marijuana use harms the user physically and mentally, and that its use should be strongly discouraged, not legalized. Read more background…
# Should Tablets Replace Textbooks in K-12 Schools?' **Argument** Tablets lower the amount of paper teachers have to print for handouts and assignments, helping to save the environment and money. A school with 100 teachers uses on average 250,000 pieces of paper annually. A school of 1,000 students on average spends between $3,000-4,000 a month on paper, ink, and toner, not counting printer wear and tear or technical support costs. **Background** Textbook publishing in the United States is an $11 billion industry, with five companies – Cengage Learning, Houghton Mifflin Harcourt, McGraw-Hill, Pearson Education, and Scholastic – capturing about 80% of this market. Tablets are an $18 billion industry with 53% of US adults, 81% of US children aged eight to 17, and 42% of US children aged under eight, owning a tablet. As tablets have become more prevalent, a new debate has formed over whether K-12 school districts should switch from print textbooks to digital textbooks on tablets and e-readers.  Proponents of tablets say that they are supported by most teachers and students, are much lighter than print textbooks, and improve standardized test scores. They say tablets can hold hundreds of textbooks, save the environment by lowering the amount of printing, increase student interactivity and creativity, and that digital textbooks are cheaper than print textbooks. Opponents of tablets say that they are expensive, too distracting for students, easy to break, and costly/time-consuming to fix. They say that tablets contribute to eyestrain, headaches, and blurred vision, increase the excuses available for students not doing their homework, require costly Wi-Fi networks, and become quickly outdated as new technologies emerge. Read more background…
# Was Ronald Reagan a Good President?' **Argument** Environment: Between 1982 and 1988, Reagan signed 43 bills designating more than 10 million acres of federal wilderness areas in 27 states. This acreage accounted for nearly 10% of the National Wilderness Preservation System at the time. Reagan had signed more wilderness bills than any other president since the Wilderness Act was enacted in 1964. **Background** Ronald Wilson Reagan served as the 40th President of the United States from Jan. 20, 1981 to Jan. 19, 1989. He won the Nov. 4, 1980 presidential election, beating Democratic incumbent Jimmy Carter with 50.7% of the votes, and won his second term by a landslide of 58.8% of the votes. Reagan’s proponents point to his accomplishments, including stimulating economic growth in the US, strengthening its national defense, revitalizing the Republican Party, and ending the global Cold War as evidence of his good presidency. His opponents contend that Reagan’s poor policies, such as bloating the national defense, drastically cutting social services, and making missiles-for-hostages deals, led the country into record deficits and global embarrassment. Read more background…
# Do Violent Video Games Contribute to Youth Violence?' **Argument** Violent video games are a convenient scapegoat for those who would rather not deal with the actual causes of violence in the US. Patrick Markey, PhD, Psychology Professor at Villanova University, stated: “The general story is people who play video games right after might be a little hopped up and jerky but it doesn’t fundamentally alter who they are. It is like going to see a sad movie. It might make you cry but it doesn’t make you clinically depressed… Politicians on both sides go after video games it is this weird unifying force. It makes them look like they are doing something… They [violent video games] look scary. But research just doesn’t support that there’s a link [to violent behavior].” Markey also explained, “Because video games are disproportionately blamed as a culprit for mass shootings committed by White perpetrators, video game ‘blaming’ can be viewed as flagging a racial issue. This is because there is a stereotypical association between racial minorities and violent crime.” Andrew Przybylski, PhD, Associate Professor, and Senior Research Fellow and Director of Research at the Oxford Internet Institute at Oxford University, stated: “Games have only become more realistic. The players of games and violent games have only become more diverse. And they’re played all around the world now. But the only place where you see this kind of narrative still hold any water, that games and violence are related to each other, is in the United States. [And, by blaming video games for violence,] we reduce the value of the political discourse on the topic, because we’re looking for easy answers instead of facing hard truths.” Hillary Clinton, JD, Former Secretary of State and First Lady, tweeted, “People suffer from mental illness in every other country on earth; people play video games in virtually every other country on earth. 
The difference is the guns.” **Background** Around 73% of American kids age 2-17 played video games in 2019, a 6% increase over 2018. Video games accounted for 17% of kids’ entertainment time and 11% of their entertainment spending. The global video game industry was worth $159.3 billion in 2020, a 9.3% increase from 2019. Violent video games have been blamed for school shootings, increases in bullying, and violence towards women. Critics argue that these games desensitize players to violence, reward players for simulating violence, and teach children that violence is an acceptable way to resolve conflicts. Video game advocates contend that a majority of the research on the topic is deeply flawed and that no causal relationship has been found between video games and social violence. They argue that violent video games may provide a safe outlet for aggressive and angry feelings and may reduce crime. Read more background…
# Should the Drinking Age Be Lowered from 21 to a Younger Age?' **Argument** MLDA creates a mindset of non-compliance with the law among young adults. Lowering MLDA from 21 to 18 would diminish the thrill of breaking the law to get a drink. Normalizing alcohol consumption as something to be done responsibly and in moderation will make drinking alcohol less of a taboo for young adults entering college and the workforce. High non-compliance with MLDA 21 promotes general disrespect and non-compliance with other areas of US law. MLDA 21 encourages young adults to acquire and use false identification documents to procure alcohol. It would be better to have fewer fake IDs in circulation and more respect for the law. Further, MLDA 21 enforcement is not a priority for many law enforcement agencies. Police are inclined to ignore or under-enforce MLDA 21 because of resource limitations, statutory obstacles, perceptions that punishments are inadequate, and the time and effort required for processing and paperwork. An estimated two of every 1,000 occasions of illegal drinking by youth under 21 result in an arrest. Combine a lack of consequences with the thrill of breaking the law, and MLDA 21 actually encourages underage drinking and potentially other illegal activities, such as driving while intoxicated and illicit drug use. Lowering the MLDA would make 18- to 20-year-olds subject to the same laws enforced for those 21 and over. **Background** All 50 US states have set their minimum drinking age to 21, although exceptions do exist on a state-by-state basis for consumption at home, under adult supervision, for medical necessity, and other reasons. Proponents of lowering the minimum legal drinking age (MLDA) from 21 argue that it has not stopped teen drinking, and has instead pushed underage binge drinking into private and less controlled environments, leading to more health and life-endangering behavior by teens.
Opponents of lowering the MLDA argue that teens have not yet reached an age where they can handle alcohol responsibly, and thus are more likely to harm or even kill themselves and others by drinking prior to 21. They contend that traffic fatalities decreased when the MLDA increased. Read more background…
# Should Teachers Get Tenure?' **Argument** Tenure protects teachers from being prematurely fired after a student makes a false accusation or a parent threatens expensive legal action against the district. After an accusation, districts might find it expedient to quickly remove a teacher instead of investigating the matter and incurring potentially expensive legal costs. The thorough removal process mandated by tenure rules ensures that teachers are not removed without a fair hearing. **Background** Teacher tenure is the increasingly controversial form of job protection that public school teachers in 46 states receive after 1-5 years on the job. An estimated 2.3 million teachers have tenure. Proponents of tenure argue that it protects teachers from being fired for personal or political reasons, and prevents the firing of experienced teachers to hire less expensive new teachers. They contend that since school administrators grant tenure, neither teachers nor teacher unions should be unfairly blamed for problems with the tenure system. Opponents of tenure argue that this job protection makes the removal of poorly performing teachers so difficult and costly that most schools end up retaining their bad teachers. They contend that tenure encourages complacency among teachers who do not fear losing their jobs, and that tenure is no longer needed given current laws against job discrimination. Read more background…
# Should More Gun Control Laws Be Enacted?' **Argument** Legally owned guns are frequently stolen and used by criminals. A June 2013 Institute of Medicine (IOM) report states that “[a]lmost all guns used in criminal acts enter circulation via initial legal transaction.” Between 2005 and 2010, 1.4 million guns were stolen from US homes during property crimes (including burglary and car theft), a yearly average of 232,400. Ian Ayres, JD, PhD, and John J. Donohue, JD, PhD, Professors of Law at Yale Law School and Stanford Law School respectively, state, “with guns being a product that can be easily carried away and quickly sold at a relatively high fraction of the initial cost, the presence of more guns can actually serve as a stimulus to burglary and theft. Even if the gun owner had a permit to carry a concealed weapon and would never use it in furtherance of a crime, is it likely that the same can be said for the burglar who steals the gun?” **Background** The United States has 120.5 guns per 100 people, or about 393,347,000 guns, which is the highest total and per capita number in the world. 22% of Americans own one or more guns (35% of men and 12% of women). America’s pervasive gun culture stems in part from its colonial history, revolutionary roots, frontier expansion, and the Second Amendment, which states: “A well regulated militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.” Proponents of more gun control laws state that the Second Amendment was intended for militias; that gun violence would be reduced; that gun restrictions have always existed; and that a majority of Americans, including gun owners, support new gun restrictions. Opponents say that the Second Amendment protects an individual’s right to own guns; that guns are needed for self-defense from threats ranging from local criminals to foreign invaders; and that gun ownership deters crime rather than causes more crime. 
Read more background…
# Should the United States Continue Its Use of Drone Strikes Abroad?' **Argument** Drone strikes make the United States safer by remotely decimating terrorist networks across the world. Drone strikes in Pakistan, Afghanistan, Yemen, and Somalia have killed between 7,665 and 14,247 militants and alleged militants, including high-level commanders implicated in organizing plots against the United States. According to President Obama, “[d]ozens of highly skilled al Qaeda commanders, trainers, bomb makers and operatives have been taken off the battlefield. Plots have been disrupted that would have targeted international aviation, U.S. transit systems, European cities and our troops in Afghanistan. Simply put, these strikes have saved lives.” Beyond killing terrorists, that drones are remotely piloted saves US military lives. Drones are launched from bases in allied countries and are operated remotely by pilots in the United States, minimizing the risk of injury and death that would occur if ground soldiers and airplane pilots were used instead. Al Qaeda, the Taliban, and their affiliates often operate in distant and environmentally unforgiving locations where it would be extremely dangerous for the United States to deploy teams of special forces to track and capture terrorists. Such pursuits may pose serious risks to US troops including firefights with surrounding tribal communities, anti-aircraft shelling, land mines, improvised explosive devices (IEDs), suicide bombers, snipers, dangerous weather conditions, harsh environments, etc. Drone strikes eliminate all of those risks common to “boots on the ground” missions. **Background** Unmanned aerial vehicles (UAVs), otherwise known as drones, are remotely-controlled aircraft which may be armed with missiles and bombs for attack missions. Since the World Trade Center attacks on Sep. 
11, 2001 and the subsequent “War on Terror,” the United States has used thousands of drones to kill suspected terrorists in Pakistan, Afghanistan, Yemen, Somalia, and other countries. Proponents state that drone strikes help prevent “boots on the ground” combat and make America safer, that the strikes are legal under American and international law, and that they are carried out with the support of Americans and foreign governments. Opponents state that drone strikes kill civilians, creating more terrorists than they kill and sowing animosity in foreign countries, that the strikes are extrajudicial and illegal, and that they create a dangerous disconnect between the horrors of war and the soldiers carrying out the strikes.
# Should More Gun Control Laws Be Enacted?' **Argument** Guns are rarely used in self-defense. Of the 29,618,300 violent crimes committed between 2007 and 2011, 0.79% of victims (235,700) protected themselves with a threat of use or use of a firearm, the least-employed protective behavior. In 2010 there were 230 “justifiable homicides” in which a private citizen used a firearm to kill a felon, compared to 8,275 criminal gun homicides (or, 36 criminal homicides for every “justifiable homicide”). Of the 84,495,500 property crimes committed between 2007 and 2011, 0.12% of victims (103,000) protected themselves with a threat of use or use of a firearm. **Background** The United States has 120.5 guns per 100 people, or about 393,347,000 guns, which is the highest total and per capita number in the world. 22% of Americans own one or more guns (35% of men and 12% of women). America’s pervasive gun culture stems in part from its colonial history, revolutionary roots, frontier expansion, and the Second Amendment, which states: “A well regulated militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.” Proponents of more gun control laws state that the Second Amendment was intended for militias; that gun violence would be reduced; that gun restrictions have always existed; and that a majority of Americans, including gun owners, support new gun restrictions. Opponents say that the Second Amendment protects an individual’s right to own guns; that guns are needed for self-defense from threats ranging from local criminals to foreign invaders; and that gun ownership deters crime rather than causes more crime. Read more background…
# Was Bill Clinton a Good President?' **Argument** Other: Clinton was aware of the threat of Al Qaeda and authorized the CIA to kill Osama bin Laden. He sought to hunt down bin Laden after the Oct. 12, 2000 attack on the USS Cole, but the CIA and FBI refused to certify bin Laden’s involvement in the terrorist act. “I got closer to killing him than anybody’s gotten since,” Clinton said in a Sep. 24, 2006 interview with Chris Wallace. **Background** William Jefferson Clinton, known as Bill Clinton, served as the 42nd President of the United States from Jan. 20, 1993 to Jan. 19, 2001. His proponents contend that under his presidency the US enjoyed the lowest unemployment and inflation rates in recent history, high home ownership, low crime rates, and a budget surplus. They give him credit for eliminating the federal deficit and reforming welfare, despite being forced to deal with a Republican-controlled Congress. His opponents say that Clinton cannot take credit for the economic prosperity experienced during his scandal-plagued presidency because it was the result of other factors. In fact, they blame his policies for the financial crisis that began in 2007. They point to his impeachment by Congress and his failure to pass universal health care coverage as further evidence that he was not a good president. Read more background…
# Should Animals Be Used for Scientific or Commercial Testing?' **Argument** Most experiments involving animals are flawed, wasting the lives of the animal subjects. A peer-reviewed study found serious flaws in the majority of publicly funded US and UK animal studies using rodents and primates: “only 59% of the studies stated the hypothesis or objective of the study and the number and characteristics of the animals used.” A 2017 study found further flaws in animal studies, including “incorrect data interpretation, unforeseen technical issues, incorrectly constituted (or absent) control groups, selective data reporting, inadequate or varying software systems, and blatant fraud.” **Background** An estimated 26 million animals are used every year in the United States for scientific and commercial testing. Animals are used to develop medical treatments, determine the toxicity of medications, check the safety of products destined for human use, and other biomedical, commercial, and health care uses. Research on living animals has been practiced since at least 500 BC. Proponents of animal testing say that it has enabled the development of many life-saving treatments for both humans and animals, that there is no alternative method for researching a complete living organism, and that strict regulations prevent the mistreatment of animals in laboratories. Opponents of animal testing say that it is cruel and inhumane to experiment on animals, that alternative methods available to researchers can replace animal testing, and that animals are so different from human beings that research on animals often yields irrelevant results. Read more background…
# Do Violent Video Games Contribute to Youth Violence?' **Argument** The US military uses violent video games to train soldiers to kill. The US Marine Corps licensed Doom II in 1996 to create Marine Doom in order to train soldiers. In 2002, the US Army released first-person shooter game America’s Army to recruit soldiers and prepare recruits for the battlefield. While the military may benefit from training soldiers to kill using video games, kids who are exposed to these games lack the discipline and structure of the armed forces and may become more susceptible to being violent. Dave Grossman, retired lieutenant colonel in the United States Army and former West Point psychology professor, stated: “[T]hrough interactive point-and-shoot video games, modern nations are indiscriminately introducing to their children the same weapons technology that major armies and law enforcement agencies around the world use to ‘turn off’ the midbrain ‘safety catch’” that prevents most people from killing. **Background** Around 73% of American kids age 2-17 played video games in 2019, a 6% increase over 2018. Video games accounted for 17% of kids’ entertainment time and 11% of their entertainment spending. The global video game industry was worth $159.3 billion in 2020, a 9.3% increase from 2019. Violent video games have been blamed for school shootings, increases in bullying, and violence towards women. Critics argue that these games desensitize players to violence, reward players for simulating violence, and teach children that violence is an acceptable way to resolve conflicts. Video game advocates contend that a majority of the research on the topic is deeply flawed and that no causal relationship has been found between video games and social violence. They argue that violent video games may provide a safe outlet for aggressive and angry feelings and may reduce crime. Read more background…
# Is Human Activity Primarily Responsible for Global Climate Change?' **Argument** Many scientists disagree that human activity is primarily responsible for global climate change. A report found more than 1,000 scientists who disagreed that humans are primarily responsible for global climate change.  The claim that 97% of scientists agree on the cause of global warming is inaccurate. The research on 11,944 studies actually found that only 3,974 even expressed a view on the issue. Of those, just 64 (1.6%) said humans are the main cause.  A Purdue University survey found that 47% of climatologists challenge the idea that humans are primarily responsible for climate change and instead believe that climate change is caused by an equal combination of humans and the environment (37%), mostly by the environment (5%), or that there’s not enough information to say (5%). **Background** Average surface temperatures on earth have risen more than 2°F over the past 100 years. During this time period, atmospheric levels of greenhouse gases such as carbon dioxide (CO2) and methane (CH4) have notably increased. This site explores the debate on whether climate change is caused by humans (also known as anthropogenic climate change). The pro side argues rising levels of atmospheric greenhouse gases are a direct result of human activities such as burning fossil fuels, and that these increases are causing significant and increasingly severe climate changes including global warming, loss of sea ice, sea level rise, stronger storms, and more droughts. They contend that immediate international action to reduce greenhouse gas emissions is necessary to prevent dire climate changes. The con side argues human-generated greenhouse gas emissions are too small to substantially change the earth’s climate and that the planet is capable of absorbing those increases. 
They contend that warming over the 20th century resulted primarily from natural processes such as fluctuations in the sun’s heat and ocean currents. They say the theory of human-caused global climate change is based on questionable measurements, faulty climate models, and misleading science. Read more background…
# Should Abortion Be Legal?' **Argument** Abortion bans endanger healthcare for those not seeking abortions. Medical treatment for nonviable pregnancies is often exactly the same as an abortion. Ectopic pregnancies occur when a fertilized egg implants somewhere other than the uterine cavity. About one in 50 pregnancies are ectopic, and they are nonviable. Bleeding from ectopic pregnancies caused 10% of all pregnancy-related deaths, and ectopic pregnancies were the leading cause of maternal death in the first trimester. Other pregnancies can be nonviable, including when there is little or no chance of the baby’s survival once it is born or if the baby has died in utero. The treatment for ectopic and other nonviable pregnancies is often the same as that for an abortion. One out of every ten pregnancies ends in miscarriage. The drugs used for medication abortions are the only treatment recommended for early miscarriages. For later or complicated miscarriages, the same surgical procedure used for abortions is recommended. While some abortion bans include specific exceptions for nonviable pregnancies and miscarriages, other bans are too vague to be practicable. Healthcare providers may refuse to perform a procedure that could be interpreted as an “on-demand” abortion for fear of liability or prosecution. Arguing that doctors and others use them as loopholes for “on demand” abortions, lobbyists are working to eliminate exceptions altogether, which would further endanger and traumatize people seeking care for dangerous medical conditions. Some pharmacists have refused to fill prescriptions for miscarriages and ectopic pregnancies, because the drugs can also be used for abortion. In Texas, pharmacists can be sued for “aiding and abetting” an abortion. Further, bans are a slippery slope to contraceptive and other healthcare restrictions. For example, some already wrongly view emergency contraception (the morning after pill) as an abortifacient and are thinking of including it in abortion bans.
**Background** The debate over whether abortion should be a legal option has long divided people around the world. Split into two groups, pro-choice and pro-life, the two sides frequently clash in protests. Proponents of legal abortion believe abortion is a safe medical procedure that protects lives, while abortion bans endanger pregnant people not seeking abortions, and deny bodily autonomy, creating wide-ranging repercussions. Opponents of legal abortion believe abortion is murder because life begins at conception, that abortion creates a culture in which life is disposable, and that increased access to birth control, health insurance, and sexual education would make abortion unnecessary. Read more background…
# Dress Codes - Top 3 Pros and Cons | ProCon.org' **Argument** Dress codes promote inclusiveness and a comfortable, cooperative environment while eliminating individualistic attire that can distract from common goals. As Bonneville Academy, a STEM school in Stansbury Park, Utah, explained, “The primary objective of a school dress code is to build constant equality among all the students. When all the students wear the same style of dress, then there will be the same kind of atmosphere across the school campus. This pattern encourages the student to concentrate more on their academic and co-curricular activities… then all the learning becomes more interesting and relevant… Students who are used to dress[ing] properly will be well equipped to evolve into the actual world, especially when they enter into the ever-competitive job market.” Susan M. Heathfield, a management and organization development consultant, stated, “Employees appreciate guidance about appropriate business attire for your workplace—especially when you specify a rationale for the dress code that your team has selected.” Simply knowing whether suits are required or jeans are appropriate removes guesswork for employees, which leads to a more comfortable work environment. Similarly, dress codes can make a disparate group of people feel like a team—no one is left out or judged differently solely on the basis of the way they dress. Dress codes can also make workplace hierarchies friendlier and more work-conducive. A manager who dresses in suits with ties may intimidate employees who wear branded polo shirts and khakis, preventing effective communication. Further, dress codes mean employees and customers or clients won’t be distracted by individualistic clothing. For example, a customer of Nebraska State Bank & Trust Co. complained to the bank’s president about a branch employee’s outfit of mismatched tunic and leggings, fringed boots, and large earrings. 
A customer complaint can not only alienate the customer but also distract employees from their tasks and potentially embarrass or shame the employee whose outfit sparked the complaint. **Background** While the most frequent debate about dress codes may be centered around K-12 schools, dress codes impact just about everyone’s daily life. From the “no shirt, no shoes, no service” signs (which exploded in popularity in the 1960s and 70s in reaction to the rise of hippies) to COVID-19 pandemic mask mandates, employer restrictions on tattoos and hairstyles, and clothing regulations on airlines, dress codes are more prevalent than we might think. [1] [2] [3] [4] [5] While it’s difficult to pinpoint the first dress code–humans started wearing clothes around 170,000 years ago–nearly every culture and country throughout history, formally or informally, has had strictures on what to wear and not to wear. These dress codes are common “cultural signifiers,” reflecting social beliefs and cultural values, most often of the social class dominating the culture. Such codes have been prevalent in Islamic countries since the founding of the religion in the seventh century, and they continue to cause controversy today—are they appropriate regulations for maintaining piety, community, and public decency, or are they demeaning and oppressive, especially for Islamic women? [6] [7] [8] [9] [10] In the West, people were arrested and imprisoned as early as 1565 in England for violating dress codes. The man in question, a servant named Richard Walweyn, was arrested for wearing “a very monsterous and outraygeous great payre of hose” (or trunk hose) and was imprisoned until he could show he owned other hose “of a decent & lawfull facyon.” Other dress codes of the time reserved expensive garments made of silk, fur, and velvet for nobility only, reinforcing how dress codes have been implemented for purposes of social distinction.
Informal dress codes—such as high-fashion clothes with logos and the unofficial “Midtown Uniform” worn by men working in finance—underscore how often dress codes have been used to mark and maintain visual distinctions between classes and occupations. Other dress codes have been enacted overtly to police morality, as with the bans on bobbed hair and flapper dresses of the 1920s. Still other dress codes are intended to foster an atmosphere of inclusiveness and professionalism or specifically to maintain safety in the workplace. [6] [7] [8] [11] [12]
# Should All Americans Have the Right (Be Entitled) to Health Care? **Argument** A right to health care could lead to government rationing of medical services. Countries with universal health care, including Australia, Canada, New Zealand, and the United Kingdom, all ration health care using methods such as controlled distribution, budgeting, price setting, and service restrictions. In the United Kingdom, the National Health Service (NHS) rations health care using a cost-benefit analysis. For example, in 2018 any drug that provided an extra year of good-quality life for about $25,000 or less was generally deemed cost-effective, while one that cost more might not be. In order to expand health coverage to more Americans, Obamacare created an Independent Payment Advisory Board (IPAB) to make cost-benefit analyses to keep Medicare spending from growing too fast. According to Sally Pipes, President of the Pacific Research Institute, the IPAB “is essentially charged with rationing care.” According to a Wall Street Journal editorial, “once health care is nationalized, or mostly nationalized, medical rationing is inevitable.” **Background** 27.5 million people in the United States (8.5% of the US population) do not have health insurance. Among the 91.5% who do have health insurance, 67.3% have private insurance while 34.4% have government-provided coverage through programs such as Medicaid or Medicare. Employer-based health insurance is the most common type of coverage, applying to 55.1% of the US population. The United States is the only nation among the 37 OECD (Organization for Economic Co-operation and Development) nations that does not have universal health care either in practice or by constitutional right. Proponents of the right to health care say that no one in one of the richest nations on earth should go without health care.
They argue that a right to health care would stop medical bankruptcies, improve public health, reduce overall health care spending, help small businesses, and that health care should be an essential government service. Opponents argue that a right to health care amounts to socialism and that it should be an individual’s responsibility, not the government’s role, to secure health care. They say that government provision of health care would decrease the quality and availability of health care, and would lead to larger government debt and deficits.
# Was Ronald Reagan a Good President? **Argument** Health: Reagan almost completely ignored the growing AIDS epidemic. Although the first cases of AIDS were identified in the early 1980s, Reagan never publicly addressed the epidemic until May 31, 1987, when he spoke at an AIDS conference in Washington, DC. By that time, 36,058 Americans had been diagnosed with the disease and 20,849 had died. **Background** Ronald Wilson Reagan served as the 40th President of the United States from Jan. 20, 1981 to Jan. 19, 1989. He won the Nov. 4, 1980 presidential election, beating Democratic incumbent Jimmy Carter with 50.7% of the vote, and won his second term by a landslide 58.8% of the vote. Reagan’s proponents point to his accomplishments, including stimulating economic growth in the US, strengthening its national defense, revitalizing the Republican Party, and ending the global Cold War, as evidence of his good presidency. His opponents contend that Reagan’s poor policies, such as bloating the national defense budget, drastically cutting social services, and making missiles-for-hostages deals, led the country into record deficits and global embarrassment.
# Should People Become Vegetarian? **Argument** A vegetarian diet lowers the risk of disease. A vegetarian diet reduces the chances of developing kidney stones and gallstones. Diets high in animal protein cause the body to excrete calcium, oxalate, and uric acid—the main components of kidney stones and gallstones. A vegetarian diet also lowers the risk of heart disease: vegetarians have been found to have 24% lower mortality from heart disease than meat eaters. A vegetarian diet also helps lower blood pressure, prevent hypertension, and thus reduce the risk of stroke. Eating meat increases the risk of getting type 2 diabetes in women, and eating processed meat increases the risk in men. A vegetarian diet rich in whole grains, legumes, nuts, and soy proteins helps to improve glycemic control in people who already have diabetes. Studies show that vegetarians are up to 40% less likely to develop cancer than meat eaters. In 2015 the World Health Organization classified red meat as “probably carcinogenic to humans” and processed meats as “carcinogenic to humans.” Consuming beef, pork, or lamb five or more times a week significantly increases the risk of colon cancer. Eating processed meats such as bacon or sausage increases this risk even further. Diets high in animal protein were associated with a 4-fold increase in cancer death risk compared to high-protein diets based on plant-derived protein sources. **Background** Americans eat an average of 58 pounds of beef, 96 pounds of chicken, and 52 pounds of pork, per person, per year, according to the United States Department of Agriculture (USDA). Vegetarians, about 5% of the US adult population, do not eat meat (including poultry and seafood). The percentage of Americans who identify as vegetarian has remained steady for two decades. 11% of those who identify as liberal follow a vegetarian diet, compared to 2% of conservatives. Many proponents of vegetarianism say that eating meat harms health, wastes resources, and creates pollution.
They often argue that killing animals for food is cruel and unethical since non-animal food sources are plentiful. Many opponents of a vegetarian diet say that meat consumption is healthful and humane, and that producing vegetables causes many of the same environmental problems as producing meat. They also argue that humans have been eating and enjoying meat for 2.3 million years.
# Should the Federal Corporate Income Tax Rate Be Raised? **Argument** Raising the corporate income tax rate would make taxes fairer. As a 2021 Biden Administration White House statement explains, “The current tax system unfairly prioritizes large multinational corporations over Main Street American small businesses. Small businesses don’t have access to the army of lawyers and accountants that allowed 55 profitable large corporations to avoid paying any federal corporate taxes in 2020, and they cannot shift profits into tax havens to avoid paying U.S. taxes like multinational corporations can. U.S. multinationals report 60 percent of their profits abroad in just seven low tax jurisdictions that, combined, make up less than 4 percent of global GDP. These corporations do not make money in these countries; they just report it there to take a huge tax cut. In 2018, married couples making about $150,000 working at their own small business paid over 20 percent of their income in federal income and self-employment taxes. By contrast, U.S. multinational corporations paid less than 10 percent in corporate income taxes on U.S. profits.” Large corporations have the ability to pay more taxes without much effect. Kimberly Clausing, Deputy Assistant Secretary for Tax Analysis at the US Department of the Treasury, stated, “Corporate taxes are paid only by profitable corporations, and for those without profits, any percent of zero is zero. Also, many companies can carry forward losses to offset taxes in future years.
However, companies profiting in the current environment, such as Amazon or Peloton, can reasonably be expected to contribute a share of their pandemic profits in tax payments.” Clausing explained, “The corporate tax, when it does fall on profitable companies, mostly falls on the excess profits they earn from market power or other factors (due to the dominance of large companies in markets with little competition, luck or risk-taking), not the normal return on capital investment. Treasury economists calculated that such excess profits made up more than 75% of the corporate tax base by 2013. A higher corporate tax rate can rein in market power and promote a fairer economy.” **Background** The federal corporate income tax was created in 1909, when the uniform rate was 1% for all business income above $5,000. Since then, the rate has climbed as high as 52.8%, in 1969. Today’s rate is set at 21% for all companies. Proponents of raising the corporate tax rate argue that corporations should pay their fair share of taxes and that those taxes will keep companies in the United States while allowing the US federal government to pay for much-needed infrastructure and social programs. Opponents of raising the corporate tax rate argue that an increase will weaken the economy and that the taxes will ultimately be paid by everyday people while driving corporations overseas.
# Zoos - Pros & Cons - ProCon.org **Argument** Zoos produce helpful scientific research. 228 accredited zoos published 5,175 peer-reviewed manuscripts between 1993 and 2013. In 2017, 173 accredited US zoos spent $25 million on research, studied 485 species and subspecies of animals, worked on 1,280 research projects, and published 170 research manuscripts. Because so many diseases can be transmitted from animals to humans, such as Ebola, Hantavirus, and the bird flu, zoos frequently conduct disease surveillance research in wildlife populations and their own captive populations that can lead to a direct impact on human health. For example, the veterinary staff at the Bronx Zoo in New York alerted health officials of the presence of West Nile Virus. Zoo research is used in other ways such as informing legislation like the Sustainable Shark Fisheries and Trade Act, helping engineers build a robot to move like a sidewinder snake, and encouraging minority students to enter STEM careers. **Background** Zoos have existed in some form since at least 2500 BCE in Egypt and Mesopotamia, where records indicate giraffes, bears, dolphins, and other animals were kept by aristocrats. The oldest still-operating zoo in the world, Tiergarten Schönbrunn in Vienna, opened in 1752. [1] [2] The contemporary zoo evolved from 19th-century European zoos. Largely modeled after the London Zoo in Regent’s Park, these zoos were intended for “genteel amusement and edification,” according to Emma Marris, environmental writer and Institute Fellow at the UCLA Institute of the Environment and Sustainability. As such, reptile houses, aviaries, and insectariums were added, with animals grouped taxonomically, to move zoos beyond the spectacle of big, scary animals. [40] Carl Hagenbeck, a German exotic animal importer, introduced the modern model of more natural habitats for animals instead of obvious cages at his Animal Park in Hamburg in 1907.
That change prompted a shift in the zoo narrative from entertainment to the protection of animals. In the late 20th century, the narrative changed again, to the conservation of animals to stave off extinction. [40] Controversy has historically surrounded zoos, from debates over displaying “exotic” humans in exhibits to zookeepers not knowing what to feed animals. For example, Madame Ningo, the first gorilla to arrive in the United States, in 1911, and destined for the Bronx Zoo, was fed hot dinners and cooked meat even though gorillas are herbivores. [3] [4] The contemporary debate about zoos tends to focus on animal welfare on both sides of the question: whether zoos protect animals or imprison them.
# Should Tablets Replace Textbooks in K-12 Schools? **Argument** Tablets allow teachers to better customize student learning. There are thousands of education and tutoring applications on tablets, so teachers can tailor student learning to an individual style/personality instead of a one-size-fits-all approach. There are more than 20,000 education apps available for the iPad alone. **Background** Textbook publishing in the United States is an $11 billion industry, with five companies – Cengage Learning, Houghton Mifflin Harcourt, McGraw-Hill, Pearson Education, and Scholastic – capturing about 80% of this market. Tablets are an $18 billion industry, with 53% of US adults, 81% of US children aged eight to 17, and 42% of US children under age eight owning a tablet. As tablets have become more prevalent, a new debate has formed over whether K-12 school districts should switch from print textbooks to digital textbooks on tablets and e-readers. Proponents of tablets say that they are supported by most teachers and students, are much lighter than print textbooks, and improve standardized test scores. They say tablets can hold hundreds of textbooks, save the environment by lowering the amount of printing, increase student interactivity and creativity, and that digital textbooks are cheaper than print textbooks. Opponents of tablets say that they are expensive, too distracting for students, easy to break, and costly/time-consuming to fix. They say that tablets contribute to eyestrain, headaches, and blurred vision, increase the excuses available for students not doing their homework, require costly Wi-Fi networks, and become quickly outdated as new technologies emerge.
# Should Tablets Replace Textbooks in K-12 Schools? **Argument** High-level education officials support tablets over textbooks. Secretary of Education Arne Duncan and Federal Communications Commission chair Julius Genachowski said on Feb. 1, 2012 that schools and publishers should “switch to digital textbooks within five years to foster interactive education, save money on books, and ensure classrooms in the US use up-to-date content.” The federal government, in collaboration with several tech organizations, released a 70-page guide for schools called the “Digital Textbook Playbook,” a “roadmap for educators to accelerate the transition to digital textbooks.” **Background** Textbook publishing in the United States is an $11 billion industry, with five companies – Cengage Learning, Houghton Mifflin Harcourt, McGraw-Hill, Pearson Education, and Scholastic – capturing about 80% of this market. Tablets are an $18 billion industry, with 53% of US adults, 81% of US children aged eight to 17, and 42% of US children under age eight owning a tablet. As tablets have become more prevalent, a new debate has formed over whether K-12 school districts should switch from print textbooks to digital textbooks on tablets and e-readers. Proponents of tablets say that they are supported by most teachers and students, are much lighter than print textbooks, and improve standardized test scores. They say tablets can hold hundreds of textbooks, save the environment by lowering the amount of printing, increase student interactivity and creativity, and that digital textbooks are cheaper than print textbooks. Opponents of tablets say that they are expensive, too distracting for students, easy to break, and costly/time-consuming to fix. They say that tablets contribute to eyestrain, headaches, and blurred vision, increase the excuses available for students not doing their homework, require costly Wi-Fi networks, and become quickly outdated as new technologies emerge.
# GMO Pros and Cons - Should Genetically Modified Organisms Be Grown? **Argument** Tinkering with the genetic makeup of plants may result in changes to the food supply that introduce toxins or trigger allergic reactions. An article in Food Science and Human Wellness said, “Three major health risks potentially associated with GM foods are: toxicity, allergenicity and genetic hazards.” The authors raised concerns that the GMO process could disrupt a plant’s genetic integrity, with the potential to activate toxins or change metabolic toxin levels in a ripple effect beyond detection. A joint commission of the World Health Organization (WHO) and the Food and Agriculture Organization of the UN (FAO) identified two potential unintended effects of genetic modification of food sources: higher levels of allergens in a host plant that contains known allergenic properties, and new proteins created by the gene insertion that could cause allergic reactions. The insertion of a gene to modify a plant can cause problems in the resulting food. After StarLink corn was genetically altered to be insect-resistant, there were several reported cases of allergic reactions in consumers. The reactions ranged from abdominal pain and diarrhea to skin rashes to life-threatening issues. **Background** Selective breeding techniques have been used to alter the genetic makeup of plants for thousands of years. The earliest forms of selective breeding were simple and have persisted: farmers save and plant only the seeds of plants that produced the tastiest or largest (or otherwise preferable) results. In 1866, Gregor Mendel, an Austrian monk, worked out the basic laws of heredity by crossbreeding peas. More recently, genetic engineering has allowed DNA from one species to be inserted into a different species to create genetically modified organisms (GMOs).
[1] [2] [53] [55] To create a GMO plant, scientists follow these basic steps over several years: (1) identify the desired trait and find an animal or plant with that trait (for example, to make corn more insect-resistant, scientists identified a gene in a soil bacterium, Bacillus thuringiensis, or Bt, that naturally produces an insecticide commonly used in organic agriculture); (2) copy the specific gene for the desired trait; (3) insert that gene into the DNA of the plant scientists want to change (in the corn example, the insecticide gene from Bacillus thuringiensis was inserted into corn); and (4) grow the new plant and test it for safety and the desired trait. [55] According to the Genetic Literacy Project, “The most recent data from the International Service for the Acquisition of Agri-biotech Applications (ISAAA) shows that more than 18 million farmers in 29 countries, including 19 developing nations, planted over 190 million hectares (469.5 million acres) of GMO crops in 2019.” The organization stated that a “majority” of European countries and Russia, among other countries, ban the crops. However, most countries that ban the growth of GMO crops allow their import. Europe, for example, imports 30 million tons of corn and soy animal feeds every year, much of which is GMO. [58] In the United States, the health and environmental safety standards for GM crops are regulated by the Environmental Protection Agency (EPA), the Food and Drug Administration (FDA), and the US Department of Agriculture (USDA). Between 1985 and Sep. 2013, the USDA approved over 17,000 different GM crops for field trials, including varieties of corn, soybean, potato, tomato, wheat, canola, and rice, with various genetic modifications such as herbicide tolerance; insect, fungal, and drought resistance; and flavor or nutrition enhancement. [44] [45] In 1994, the “FLAVR SAVR” tomato became the first genetically modified food to be approved for public consumption by the FDA.
The tomato was genetically modified to increase its firmness and extend its shelf life. [51] Recently, the term “bioengineered food” has come into popularity, under the argument that almost all food has been “genetically modified” via selective breeding or other basic growing methods. Bioengineered food refers specifically to food that has undergone modification using rDNA technology, but does not include food genetically modified by basic cross-breeding or selective breeding. As of Jan. 10, 2022, the USDA listed 13 bioengineered products available in the US: alfalfa, Arctic apples, canola, corn, cotton, BARI Bt Begun varieties of eggplant, ringspot virus-resistant varieties of papaya, pink flesh varieties of pineapple, potato, AquAdvantage salmon, soybean, summer squash, and sugarbeet. [56] [57] The National Bioengineered Food Disclosure Standard established mandatory national standards for labeling foods with genetically engineered ingredients in the United States. The Standard was implemented on Jan. 1, 2020 and compliance became mandatory on Jan. 1, 2022. [46] 49% of US adults believe that eating GMO foods is “worse” for one’s health, 44% say it is “neither better nor worse,” and 5% believe it is “better,” according to a 2018 Pew Research Center report. [9]

Arguments & Debates

Carefully chosen argumentative texts suitable for exercises in argument mapping and logical analysis.

Selected and postprocessed from high-quality sources that can be accessed online:

  • different editions of "Pros and Cons - A Debater's Handbook"
  • Britannica's procon.org
  • NYT column "Room for Debate" via I. Habernal's ARC task repo 🙏
  • Debatabase by idebate.net

Subset categories:

  • arguments-*: one main argument per text
  • debates-*-handful: 3-5 arguments per text
  • debates-*-full: 5+ arguments per text
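The glob-style subset names above map onto their categories mechanically, which is handy when iterating over many subsets. A minimal sketch (the subset names used below are hypothetical examples, not guaranteed to be actual configs of this dataset):

```python
import fnmatch

def arguments_per_text(subset: str) -> str:
    """Map a subset name to its documented category via the glob patterns above."""
    if fnmatch.fnmatch(subset, "arguments-*"):
        return "one main argument"
    # Check the more specific -handful pattern before -full
    if fnmatch.fnmatch(subset, "debates-*-handful"):
        return "3-5 arguments"
    if fnmatch.fnmatch(subset, "debates-*-full"):
        return "5+ arguments"
    return "unknown"

# Hypothetical subset names following the documented naming scheme
for name in ["arguments-procon", "debates-debatabase-handful", "debates-procon-full"]:
    print(name, "->", arguments_per_text(name))
```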

Licensing Information

This collection is released under the Open Data Commons Attribution License (ODC-By) v1.0 license.
