Dataset schema (column statistics):
id — string, lengths 6–15
question_type — string, 1 class
question — string, lengths 15–683
choices — list, length 4
answer — string, 5 classes
explanation — string, 481 classes
prompt — string, lengths 1.75k–10.9k
sciq-10714
multiple_choice
In what state of matter is butter at room temperature?
[ "solid", "gel", "liquid", "gas" ]
A
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. 
A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 2::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. 
Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with Document 3::: Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women. The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development. Current status of girls and women in STEM education Overall trends in STEM education Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. 
Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle. Learning achievement in STEM education Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and Document 4::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. 
Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In what state of matter is butter at room temperature? A. solid B. gel C. liquid D. gas Answer:
sciq-11604
multiple_choice
What's the best way humans can conserve water?
[ "use more", "salt it", "use less", "boil it" ]
C
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. 
Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 2::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. 
Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 3::: The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas. The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014. The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools. See also Marine Science Ministry of Fisheries and Aquatic Resources Development Document 4::: Water-use efficiency (WUE) refers to the ratio of water used in plant metabolism to water lost by the plant through transpiration. Two types of water-use efficiency are referred to most frequently: photosynthetic water-use efficiency (also called instantaneous water-use efficiency), which is defined as the ratio of the rate of carbon assimilation (photosynthesis) to the rate of transpiration, and water-use efficiency of productivity (also called integrated water-use efficiency), which is typically defined as the ratio of biomass produced to the rate of transpiration. 
Increases in water-use efficiency are commonly cited as a response mechanism of plants to moderate to severe soil water deficits and have been the focus of many programs that seek to increase crop tolerance to drought. However, there is some question as to the benefit of increased water-use efficiency of plants in agricultural systems, as the processes of increased yield production and decreased water loss due to transpiration (that is, the main driver of increases in water-use efficiency) are fundamentally opposed. If there existed a situation where water deficit induced lower transpirational rates without simultaneously decreasing photosynthetic rates and biomass production, then water-use efficiency would be both greatly improved and the desired trait in crop production. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What's the best way humans can conserve water? A. use more B. salt it C. use less D. boil it Answer:
sciq-4545
multiple_choice
Respiratory therapists or respiratory practitioners evaluate and treat patients with diseases affecting what part of the body?
[ "byproduct, heart, and blood vessels", "brain, stomach, heart", "lung, heart, and blood vessels", "heart, lungs, stomach" ]
C
Relevant Documents: Document 0::: Medical Science Educator is a peer-reviewed journal that focuses on teaching the sciences that are fundamental to modern medicine and health. Coverage includes basic science education, clinical teaching and the incorporation of modern educational technologies. MSE offers all who teach in healthcare the most current information to succeed in their task by publishing scholarly activities, opinions, and resources in medical science education. MSE provides the readership a better understanding of teaching and learning techniques in order to advance medical science education. It is the official publication of the International Association of Medical Science Educators (IAMSE). Document 1::: Medical physics deals with the application of the concepts and methods of physics to the prevention, diagnosis and treatment of human diseases with a specific goal of improving human health and well-being. Since 2008, medical physics has been included as a health profession according to International Standard Classification of Occupation of the International Labour Organization. Although medical physics may sometimes also be referred to as biomedical physics, medical biophysics, applied physics in medicine, physics applications in medical science, radiological physics or hospital radio-physics, a "medical physicist" is specifically a health professional with specialist education and training in the concepts and techniques of applying physics in medicine and competent to practice independently in one or more of the subfields of medical physics. Traditionally, medical physicists are found in the following healthcare specialties: radiation oncology (also known as radiotherapy or radiation therapy), diagnostic and interventional radiology (also known as medical imaging), nuclear medicine, and radiation protection. Medical physics of radiation therapy can involve work such as dosimetry, linac quality assurance, and brachytherapy. 
Medical physics of diagnostic and interventional radiology involves medical imaging techniques such as magnetic resonance imaging, ultrasound, computed tomography and x-ray. Nuclear medicine will include positron emission tomography and radionuclide therapy. However one can find Medical Physicists in many other areas such as physiological monitoring, audiology, neurology, neurophysiology, cardiology and others. Medical physics departments may be found in institutions such as universities, hospitals, and laboratories. University departments are of two types. The first type are mainly concerned with preparing students for a career as a hospital Medical Physicist and research focuses on improving the practice of the profession. A second type (in Document 2::: Medical simulation, or more broadly, healthcare simulation, is a branch of simulation related to education and training in medical fields of various industries. Simulations can be held in the classroom, in situational environments, or in spaces built specifically for simulation practice. It can involve simulated human patients (whether artificial, human or a combination of the two), educational documents with detailed simulated animations, casualty assessment in homeland security and military situations, emergency response, and support for virtual health functions with holographic simulation. In the past, its main purpose was to train medical professionals to reduce errors during surgery, prescription, crisis interventions, and general practice. Combined with methods in debriefing, it is now also used to train students in anatomy, physiology, and communication during their schooling. History Modern-day simulation for training was first utilized by anesthesia physicians to reduce accidents. When simulation skyrocketed in popularity during the 1930s due to the invention of the Trainer Building Link Trainer for flight and military applications, many field experts attempted to adapt simulation to their own needs. 
Medical simulation was not immediately accepted as a useful training technique, both because of technological limitations and because of the limited availability of medical expertise at the time. However, extensive military use demonstrated that medical simulation could be cost-effective. Additionally, valuable simulation hardware and software was developed, and medical standards were established. Gradually, medical simulation became affordable, although it remained un-standardized. By the 1980s software simulations became available. With the help of a UCSD School of Medicine student, Computer Gaming World reported that a Surgeon (1986) for the Apple Macintosh very accurately simulated operating on an aortic aneurysm. Others followed, such as Life & Death (1 Document 3::: Medical education is education related to the practice of being a medical practitioner, including the initial training to become a physician (i.e., medical school and internship) and additional training thereafter (e.g., residency, fellowship, and continuing medical education). Medical education and training varies considerably across the world. Various teaching methodologies have been used in medical education, which is an active area of educational research. Medical education is also the subject-didactic academic field of educating medical doctors at all levels, including entry-level, post-graduate, and continuing medical education. Specific requirements such as entrustable professional activities must be met before moving on in stages of medical education. Common techniques and evidence base Medical education applies theories of pedagogy specifically in the context of medical education. Medical education has been a leader in the field of evidence-based education, through the development of evidence syntheses such as the Best Evidence Medical Education collection, formed in 1999, which aimed to "move from opinion-based education to evidence-based education". 
Common evidence-based techniques include the Objective structured clinical examination (commonly known as the 'OSCE) to assess clinical skills, and reliable checklist-based assessments to determine the development of soft skills such as professionalism. However, there is a persistence of ineffective instructional methods in medical education, such as the matching of teaching to learning styles and Edgar Dales' "Cone of Learning". Entry-level education Entry-level medical education programs are tertiary-level courses undertaken at a medical school. Depending on jurisdiction and university, these may be either undergraduate-entry (most of Europe, Asia, South America and Oceania), or graduate-entry programs (mainly Australia, Philippines and North America). Some jurisdictions and universities provide both u Document 4::: TIME-ITEM is an ontology of Topics that describes the content of undergraduate medical education. TIME is an acronym for "Topics for Indexing Medical Education"; ITEM is an acronym for "Index de thèmes pour l’éducation médicale." Version 1.0 of the taxonomy has been released and the web application that allows users to work with it is still under development. Its developers are seeking more collaborators to expand and validate the taxonomy and to guide future development of the web application. History The development of TIME-ITEM began at the University of Ottawa in 2006. It was initially developed to act as a content index for a curriculum map being constructed there. After its initial presentation at the 2006 conference of the Canadian Association for Medical Education, early collaborators included the University of British Columbia, McMaster University and Queen's University. Features The TIME-ITEM ontology is unique in that it is designed specifically for undergraduate medical education. 
As such, it includes fewer strictly biomedical entries than other common medical vocabularies (such as MeSH or SNOMED CT) but more entries relating to the medico-social concepts of communication, collaboration, professionalism, etc. Topics within TIME-ITEM are arranged poly-hierarchically, meaning any Topic can have more than one parent. Relationships are established based on the logic that learning about a Topic contributes to the learning of all its parent Topics. In addition to housing the ontology of Topics, the TIME-ITEM web application can house multiple Outcome frameworks. All Outcomes, whether private Outcomes entered by single institutions or publicly available medical education Outcomes (such as CanMeds 2005) are hierarchically linked to one or more Topics in the ontology. In this way, the contribution of each Topic to multiple Outcomes is made explicit. The structure of the XML documents exported from TIME-ITEM (which contain the hierarchy of Outco The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Respiratory therapists or respiratory practitioners evaluate and treat patients with diseases affecting what part of the body? A. byproduct, heart, and blood vessels B. brain, stomach, heart C. lung, heart, and blood vessels D. heart, lungs, stomach Answer:
sciq-9745
multiple_choice
Predators can replace what tool in agriculture?
[ "pesticides", "fertilizer", "irrigators", "harvesters" ]
A
Relevant Documents: Document 0::: Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals. Education Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered. Bachelor degree At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs. 
Pre-veterinary emphasis Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th Document 1::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. 
Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 2::: The University of Florida Institute of Food and Agricultural Sciences (UF/IFAS) is a teaching, research and Extension scientific organization focused on agriculture and natural resources. It is a partnership of federal, state, and county governments that includes an Extension office in each of Florida's 67 counties, 12 off-campus research and education centers, five demonstration units, the University of Florida College of Agricultural and Life Sciences (including the School of Forest, Fisheries and Geomatics Sciences and the School of Natural Resources and Environment), three 4-H camps, portions of the UF College of Veterinary Medicine, the Florida Sea Grant program, the Emerging Pathogens Institute, the UF Water Institute and the UF Genetics Institute. UF/IFAS research and development covers natural resource industries that have a $101 billion annual impact. The program is ranked #1 in the nation in federally financed higher education R&D expenditures in agricultural sciences and natural resources conservation by the National Science Foundation for FY 2019. Because of this mission and the diversity of Florida's climate and agricultural commodities, IFAS has facilities located throughout Florida. On July 13, 2020, Dr. J. Scott Angle became leader of UF/IFAS and UF's vice president for agriculture and natural resources. History Research The mission of UF/IFAS is to develop knowledge in agricultural, human, and natural resources, and to make that knowledge accessible to sustain and enhance the quality of human life. Faculty members pursue fundamental and applied research that furthers understanding of natural and human systems. 
Research is supported by state and federally appropriated funds and supplemented by grants and contracts. UF/IFAS received $155.6 million in annual research expenditures in sponsored research for FY 2021. The Florida Agricultural Experiment Station administers and supports research programs in UF/IFAS. The research program was created in Document 3::: The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas. The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014. The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools. See also Marine Science Ministry of Fisheries and Aquatic Resources Development Document 4::: Genetically modified agriculture includes: Genetically modified crops Genetically modified livestock Genetic engineering Genetically modified organisms The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Predators can replace what tool in agriculture. A. pesticides B. fertilizer C. irrigators D. harvesters Answer:
sciq-10144
multiple_choice
What are the poles labeled?
[ "north and south", "east and west", "west and south", "southwest and south" ]
A
Relevant Documents: Document 0::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. 
Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 1::: Tech City College (Formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England. It originally opened in September 2013, as STEM Academy Tech City and specialised in Science, Technology, Engineering and Maths (STEM) and the Creative Application of Maths and Science. In September 2015, STEM Academy joined the Aspirations Academy Trust was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students. Document 2::: Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers. There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. 
In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM. Terminology History Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu Document 3::: Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women. The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development. Current status of girls and women in STEM education Overall trends in STEM education Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. 
This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle. Learning achievement in STEM education Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and Document 4::: Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals. Education Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. 
Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered. Bachelor degree At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs. Pre-veterinary emphasis Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are the poles labeled? A. north and south B. east and west C. west and south D. southwest and south Answer:
sciq-7714
multiple_choice
Bronchial tubes in the lungs branch into ever-smaller structures, finally ending in alveoli. The alveoli act like what?
[ "tiny bubbles", "springs", "filters", "bellows" ]
A
Relevant Documents: Document 0::: Lung receptors sense irritation or inflammation in the bronchi and alveoli. Document 1::: The posterior surfaces of the ciliary processes are covered by a bilaminar layer of black pigment cells, which is continued forward from the retina, and is named the pars ciliaris retinae. Document 2::: The cilium (plural: cilia) is a membrane-bound organelle found on most types of eukaryotic cell. Cilia are absent in bacteria and archaea. The cilium has the shape of a slender threadlike projection that extends from the surface of the much larger cell body. Eukaryotic flagella found on sperm cells and many protozoans have a similar structure to motile cilia that enables swimming through liquids; they are longer than cilia and have a different undulating motion. There are two major classes of cilia: motile and non-motile cilia, each with a subtype, giving four types in all. A cell will typically have one primary cilium or many motile cilia. The structure of the cilium core called the axoneme determines the cilium class. Most motile cilia have a central pair of single microtubules surrounded by nine pairs of double microtubules called a 9+2 axoneme. Most non-motile cilia have a 9+0 axoneme that lacks the central pair of microtubules. Also lacking are the associated components that enable motility including the outer and inner dynein arms, and radial spokes. Some motile cilia lack the central pair, and some non-motile cilia have the central pair, hence the four types. Most non-motile cilia are termed primary cilia or sensory cilia and serve solely as sensory organelles. Most vertebrate cell types possess a single non-motile primary cilium, which functions as a cellular antenna. Olfactory neurons possess a great many non-motile cilia. Non-motile cilia that have a central pair of microtubules are the kinocilia present on hair cells. 
Motile cilia are found in large numbers on respiratory epithelial cells – around 200 cilia per cell, where they function in mucociliary clearance, and also have mechanosensory and chemosensory functions. Motile cilia on ependymal cells move the cerebrospinal fluid through the ventricular system of the brain. Motile cilia are also present in the oviducts (fallopian tubes) of female (therian) mammals where they function in moving the egg cell Document 3::: In anatomy, a lobe is a clear anatomical division or extension of an organ (as seen for example in the brain, lung, liver, or kidney) that can be determined without the use of a microscope at the gross anatomy level. This is in contrast to the much smaller lobule, which is a clear division only visible under the microscope. Interlobar ducts connect lobes and interlobular ducts connect lobules. Examples of lobes The four main lobes of the brain the frontal lobe the parietal lobe the occipital lobe the temporal lobe The three lobes of the human cerebellum the flocculonodular lobe the anterior lobe the posterior lobe The two lobes of the thymus The two and three lobes of the lungs Left lung: superior and inferior Right lung: superior, middle, and inferior The four lobes of the liver Left lobe of liver Right lobe of liver Quadrate lobe of liver Caudate lobe of liver The renal lobes of the kidney Earlobes Examples of lobules the cortical lobules of the kidney the testicular lobules of the testis the lobules of the mammary gland the pulmonary lobules of the lung the lobules of the thymus Document 4::: Speech science refers to the study of production, transmission and perception of speech. Speech science involves anatomy, in particular the anatomy of the oro-facial region and neuroanatomy, physiology, and acoustics. Speech production The production of speech is a highly complex motor task that involves approximately 100 orofacial, laryngeal, pharyngeal, and respiratory muscles. 
Precise and expeditious timing of these muscles is essential for the production of temporally complex speech sounds, which are characterized by transitions as short as 10 ms between frequency bands and an average speaking rate of approximately 15 sounds per second. Speech production requires airflow from the lungs (respiration) to be phonated through the vocal folds of the larynx (phonation) and resonated in the vocal cavities shaped by the jaw, soft palate, lips, tongue and other articulators (articulation). Respiration Respiration is the physical process of gas exchange between an organism and its environment involving four steps (ventilation, distribution, perfusion and diffusion) and two processes (inspiration and expiration). Respiration can be described as the mechanical process of air flowing into and out of the lungs on the principle of Boyle's law, stating that, as the volume of a container increases, the air pressure will decrease. This relatively negative pressure will cause air to enter the container until the pressure is equalized. During inspiration of air, the diaphragm contracts and the lungs expand drawn by pleurae through surface tension and negative pressure. When the lungs expand, air pressure becomes negative compared to atmospheric pressure and air will flow from the area of higher pressure to fill the lungs. Forced inspiration for speech uses accessory muscles to elevate the rib cage and enlarge the thoracic cavity in the vertical and lateral dimensions. During forced expiration for speech, muscles of the trunk and abdomen reduce the size of the thoracic cavity by The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Bronchial tubes in the lungs branch into ever-smaller structures, finally ending in alveoli. The alveoli act like what? A. tiny bubbles B. springs C. filters D. bellows Answer:
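The Boyle's-law step in the respiration passage above (at constant temperature, pressure falls as volume rises, so an expanding thoracic cavity draws air in) can be sketched numerically. The function name and the numbers below are illustrative assumptions, not values from the source passage:

```python
# Boyle's law sketch: at constant temperature, P1 * V1 = P2 * V2.
# boyle_pressure and the example figures are illustrative, not from the source.

def boyle_pressure(p1: float, v1: float, v2: float) -> float:
    """Pressure after an isothermal volume change, from P1*V1 = P2*V2."""
    if v2 <= 0:
        raise ValueError("volume must be positive")
    return p1 * v1 / v2

# Doubling a 1.0 L volume at roughly atmospheric pressure (101.3 kPa)
# halves the internal pressure, so air flows in from the higher-pressure
# surroundings until the pressures equalize.
print(boyle_pressure(101.3, 1.0, 2.0))  # ≈ 50.65 kPa
```

The same relation run in reverse (volume shrinking during forced expiration) raises the internal pressure and pushes air out.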
sciq-4128
multiple_choice
Distance traveled divided by time is equal to what?
[ "direction", "speed", "frequency", "momentum" ]
B
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases; (b) decreases; (c) stays the same; (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. 
Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 2::: Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams. Course content Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are: Kinematics Newton's laws of motion Work, energy and power Systems of particles and linear momentum Circular motion and rotation Oscillations and gravitation. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class. This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals. This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. 
Registration The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday aftern Document 3::: Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work. Description Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments. 
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics. Specialized branches include engineering optimization and engineering statistics. Engineering mathematics in tertiary educ Document 4::: This is a list of topics that are included in high school physics curricula or textbooks. Mathematical Background SI Units Scalar (physics) Euclidean vector Motion graphs and derivatives Pythagorean theorem Trigonometry Motion and forces Motion Force Linear motion Linear motion Displacement Speed Velocity Acceleration Center of mass Mass Momentum Newton's laws of motion Work (physics) Free body diagram Rotational motion Angular momentum (Introduction) Angular velocity Centrifugal force Centripetal force Circular motion Tangential velocity Torque Conservation of energy and momentum Energy Conservation of energy Elastic collision Inelastic collision Inertia Moment of inertia Momentum Kinetic energy Potential energy Rotational energy Electricity and magnetism Ampère's circuital law Capacitor Coulomb's law Diode Direct current Electric charge Electric current Alternating current Electric field Electric potential energy Electron Faraday's law of induction Ion Inductor Joule heating Lenz's law Magnetic field Ohm's law Resistor Transistor Transformer Voltage Heat Entropy First law of thermodynamics Heat Heat transfer Second law of thermodynamics Temperature Thermal energy Thermodynamic cycle Volume (thermodynamics) Work (thermodynamics) Waves Wave Longitudinal wave Transverse waves Transverse wave Standing Waves Wavelength Frequency Light Light ray Speed of 
light Sound Speed of sound Radio waves Harmonic oscillator Hooke's law Reflection Refraction Snell's law Refractive index Total internal reflection Diffraction Interference (wave propagation) Polarization (waves) Vibrating string Doppler effect Gravity Gravitational potential Newton's law of universal gravitation Newtonian constant of gravitation See also Outline of physics Physics education The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Distance traveled divided by time is equal to what? A. direction B. speed C. frequency D. momentum Answer:
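The distance/time relation this record asks about (average speed = distance traveled divided by elapsed time) can be expressed as a short sketch; the helper name and the sample figures are illustrative assumptions, not taken from the source:

```python
# Average speed = distance traveled / elapsed time.
# average_speed and the example values are illustrative, not from the source.

def average_speed(distance_m: float, time_s: float) -> float:
    """Average speed in m/s for a given distance (m) and elapsed time (s)."""
    if time_s <= 0:
        raise ValueError("time must be positive")
    return distance_m / time_s

print(average_speed(30.0, 2.0))    # 15.0 m/s
print(average_speed(100.0, 9.58))  # ≈ 10.44 m/s, e.g. a 100 m sprint in 9.58 s
```

Note that this is a scalar (speed), not a vector (velocity): dividing distance by time discards direction, which is why "direction" and "momentum" are distractors in the choices.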
ai2_arc-151
multiple_choice
One similarity between a small, solid sample of aluminum and a large, liquid sample of aluminum is that both samples have
[ "a definite shape.", "a definite volume.", "the same number of atoms.", "the same amount of energy." ]
B
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases; (b) decreases; (c) stays the same; (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Characterization, when used in materials science, refers to the broad and general process by which a material's structure and properties are probed and measured. It is a fundamental process in the field of materials science, without which no scientific understanding of engineering materials could be ascertained. The scope of the term often differs; some definitions limit the term's use to techniques which study the microscopic structure and properties of materials, while others use the term to refer to any materials analysis process including macroscopic techniques such as mechanical testing, thermal analysis and density calculation. The scale of the structures observed in materials characterization ranges from angstroms, such as in the imaging of individual atoms and chemical bonds, up to centimeters, such as in the imaging of coarse grain structures in metals. While many characterization techniques have been practiced for centuries, such as basic optical microscopy, new techniques and methodologies are constantly emerging. In particular the advent of the electron microscope and secondary ion mass spectrometry in the 20th century has revolutionized the field, allowing the imaging and analysis of structures and compositions on much smaller scales than was previously possible, leading to a huge increase in the level of understanding as to why different materials show different properties and behaviors. More recently, atomic force microscopy has further increased the maximum possible resolution for analysis of certain samples in the last 30 years. Microscopy Microscopy is a category of characterization techniques which probe and map the surface and sub-surface structure of a material. 
These techniques can use photons, electrons, ions or physical cantilever probes to gather data about a sample's structure on a range of length scales. Some common examples of microscopy techniques include: Optical microscopy Scanning electron microscopy (SEM) Transmission electron microscopy (TEM) Document 2::: Allomerism is the similarity in the crystalline structure of substances of different chemical composition. Document 3::: Oligocrystalline material has a microstructure consisting of a few coarse grains, often columnar and parallel to the longitudinal ingot axis. This microstructure can be found in the ingots produced by electron beam melting (EBM). Document 4::: Material is a substance or mixture of substances that constitutes an object. Materials can be pure or impure, living or non-living matter. Materials can be classified on the basis of their physical and chemical properties, or on their geological origin or biological function. Materials science is the study of materials, their properties and their applications. Raw materials can be processed in different ways to influence their properties, by purification, shaping or the introduction of other materials. New materials can be produced from raw materials by synthesis. In industry, materials are inputs to manufacturing processes to produce products or more complex materials. Historical elements Materials chart the history of humanity. The system of the three prehistoric ages (Stone Age, Bronze Age, Iron Age) was succeeded by historical ages: steel age in the 19th century, polymer age in the middle of the following century (plastic age) and silicon age in the second half of the 20th century.
Classification by use Materials can be broadly categorized in terms of their use, for example: Building materials are used for construction Building insulation materials are used to retain heat within buildings Refractory materials are used for high-temperature applications Nuclear materials are used for nuclear power and weapons Aerospace materials are used in aircraft and other aerospace applications Biomaterials are used for applications interacting with living systems Material selection is a process to determine which material should be used for a given application. Classification by structure The relevant structure of materials has a different length scale depending on the material. The structure and composition of a material can be determined by microscopy or spectroscopy. Microstructure In engineering, materials can be categorised according to their microscopic structure: Plastics: a wide range of synthetic or semi-synthetic materials that use polymers as a main ingred The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. One similarity between a small, solid sample of aluminum and a large, liquid sample of aluminum is that both samples have A. a definite shape. B. a definite volume. C. the same number of atoms. D. the same amount of energy. Answer:
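The ConcepTest on adiabatic expansion quoted in the conceptual-questions passage above can be checked numerically from the reversible-adiabatic relation T * V**(gamma - 1) = constant; the gamma and volume values in this sketch are illustrative assumptions, not figures from the passage:

```python
# Reversible adiabatic expansion of an ideal gas obeys
# T * V**(gamma - 1) = constant, so a larger volume means a lower
# temperature. gamma and the volumes are illustrative assumptions.
gamma = 1.4            # diatomic ideal gas, e.g. air
T1, V1 = 300.0, 1.0    # initial temperature (K) and volume (arbitrary units)
V2 = 2.0               # the gas expands to twice its volume

T2 = T1 * (V1 / V2) ** (gamma - 1)
print(round(T2, 1))    # ~227.4 K, i.e. the temperature decreases
```

For a free (unresisted) expansion the temperature would instead stay the same, which is why the question also offers "impossible to tell/need more information" as a choice.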
scienceQA-3161
multiple_choice
What do these two changes have in common? cutting an apple breaking a stick in half
[ "Both are caused by cooling.", "Both are chemical changes.", "Both are caused by heating.", "Both are only physical changes." ]
D
Step 1: Think about each change. Cutting an apple is a physical change. The apple gets a different shape. But it is still made of the same type of matter as the uncut apple. Breaking a stick in half is a physical change. The stick gets broken into two pieces. But the pieces are still made of the same type of matter as the original stick. Step 2: Look at each answer choice. Both are only physical changes. Both changes are physical changes. No new matter is created. Both are chemical changes. Both changes are physical changes. They are not chemical changes. Both are caused by heating. Neither change is caused by heating. Both are caused by cooling. Neither change is caused by cooling.
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. 
Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation. Magnetism Ferro-magnetic materials can become magnetic. The process is reve Document 2::: Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria. Introduction Traditional exam script marking began in Cambridge 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.) Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental. 
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel Document 3::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. Document 4::: Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory. In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. 
When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results. Purpose Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible. Equating in item response theory In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do these two changes have in common? cutting an apple breaking a stick in half A. Both are caused by cooling. B. Both are chemical changes. C. Both are caused by heating. D. Both are only physical changes. Answer:
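The equating passage above likens the procedure to converting between temperature scales. A minimal sketch of one classical approach, mean-sigma linear equating (the passage states the goal but does not name this particular formula), with hypothetical score distributions for Dick's form A and Jane's form B:

```python
from statistics import mean, pstdev

# Mean-sigma linear equating: form B scores are mapped onto form A's
# scale so both share a common mean and spread. This is one classical
# method; the passage describes the goal, not this exact formula.
def linear_equate(scores_a, scores_b):
    mu_a, sd_a = mean(scores_a), pstdev(scores_a)
    mu_b, sd_b = mean(scores_b), pstdev(scores_b)
    return lambda x: mu_a + (sd_a / sd_b) * (x - mu_b)

# Hypothetical distributions: form B ran 10 points easier than form A.
form_a = [52, 55, 60, 63, 70]
form_b = [62, 65, 70, 73, 80]
to_a_scale = linear_equate(form_a, form_b)

# Jane's 70 on the easier form B equates to 60 on form A's scale,
# matching Dick's raw 60 rather than beating it.
print(to_a_scale(70))  # -> 60.0
```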
sciq-7634
multiple_choice
What is the term for the process in which water vapor changes to tiny droplets of liquid water?
[ "dispersion", "vaporization", "condensation", "diffusion" ]
C
Relevant Documents: Document 0::: The Stefan flow, occasionally called Stefan's flow, is a transport phenomenon concerning the movement of a chemical species by a flowing fluid (typically in the gas phase) that is induced to flow by the production or removal of the species at an interface. Any process that adds the species of interest to or removes it from the flowing fluid may cause the Stefan flow, but the most common processes include evaporation, condensation, chemical reaction, sublimation, ablation, adsorption, absorption, and desorption. It was named after the Slovenian physicist, mathematician, and poet Josef Stefan for his early work on calculating evaporation rates. The Stefan flow is distinct from diffusion as described by Fick's law, but diffusion almost always also occurs in multi-species systems that are experiencing the Stefan flow. In systems undergoing one of the species addition or removal processes mentioned previously, the addition or removal generates a mean flow in the flowing fluid as the fluid next to the interface is displaced by the production or removal of additional fluid by the processes occurring at the interface. The transport of the species by this mean flow is the Stefan flow. When concentration gradients of the species are also present, diffusion transports the species relative to the mean flow. The total transport rate of the species is then given by a summation of the Stefan flow and diffusive contributions. An example of the Stefan flow occurs when a droplet of liquid evaporates in air. In this case, the vapor/air mixture surrounding the droplet is the flowing fluid, and the liquid/vapor boundary of the droplet is the interface. As heat is absorbed by the droplet from the environment, some of the liquid evaporates into vapor at the surface of the droplet, and flows away from the droplet as it is displaced by additional vapor evaporating from the droplet.
This process causes the flowing medium to move away from the droplet at some mean speed that is dependent on Document 1::: Acoustic droplet vaporization (ADV) is the process by which superheated liquid droplets are phase-transitioned into gas bubbles by means of ultrasound. Perfluorocarbons and halocarbons are often used for the dispersed medium, which forms the core of the droplet. The surfactant, which forms a stabilizing shell around the dispersive medium, is usually composed of albumin or lipids. There exist two main hypotheses that explain the mechanism by which ultrasound induces vaporization. One poses that the ultrasonic field interacts with the dispersed medium so as to cause vaporization in the bubble core. The other suggests that shockwaves from inertial cavitation, occurring near or within the droplet, cause the dispersed medium to vaporize. See also Acoustic droplet ejection Document 2::: Condensation is the change of the state of matter from the gas phase into the liquid phase, and is the reverse of vaporization. The word most often refers to the water cycle. It can also be defined as the change in the state of water vapor to liquid water when in contact with a liquid or solid surface or cloud condensation nuclei within the atmosphere. When the transition happens from the gaseous phase into the solid phase directly, the change is called deposition. Initiation Condensation is initiated by the formation of atomic/molecular clusters of that species within its gaseous volume—like rain drop or snow flake formation within clouds—or at the contact between such gaseous phase and a liquid or solid surface. In clouds, this can be catalyzed by water-nucleating proteins, produced by atmospheric microbes, which are capable of binding gaseous or liquid water molecules. Reversibility scenarios A few distinct reversibility scenarios emerge here with respect to the nature of the surface.
absorption into the surface of a liquid (either of the same substance or one of its solvents)—is reversible as evaporation. adsorption (as dew droplets) onto solid surface at pressures and temperatures higher than the species' triple point—also reversible as evaporation. adsorption onto solid surface (as supplemental layers of solid) at pressures and temperatures lower than the species' triple point—is reversible as sublimation. Most common scenarios Condensation commonly occurs when a vapor is cooled and/or compressed to its saturation limit when the molecular density in the gas phase reaches its maximal threshold. Vapor cooling and compressing equipment that collects condensed liquids is called a "condenser". Measurement Psychrometry measures the rates of condensation through evaporation into the air moisture at various atmospheric pressures and temperatures. Water is the product of its vapor condensation—condensation is the process of such phase conversion. Applicatio Document 3::: Vaporization (or vaporisation) of an element or compound is a phase transition from the liquid phase to vapor. There are two types of vaporization: evaporation and boiling. Evaporation is a surface phenomenon, whereas boiling is a bulk phenomenon. Evaporation is a phase transition from the liquid phase to vapor (a state of substance below critical temperature) that occurs at temperatures below the boiling temperature at a given pressure. Evaporation occurs on the surface. Evaporation only occurs when the partial pressure of vapor of a substance is less than the equilibrium vapor pressure. For example, due to constantly decreasing pressures, vapor pumped out of a solution will eventually leave behind a cryogenic liquid. Boiling is also a phase transition from the liquid phase to gas phase, but boiling is the formation of vapor as bubbles of vapor below the surface of the liquid.
Boiling occurs when the equilibrium vapor pressure of the substance is greater than or equal to the atmospheric pressure. The temperature at which boiling occurs is the boiling temperature, or boiling point. The boiling point varies with the pressure of the environment. Sublimation is a direct phase transition from the solid phase to the gas phase, skipping the intermediate liquid phase. Because it does not involve the liquid phase, it is not a form of vaporization. The term vaporization has also been used in a colloquial or hyperbolic way to refer to the physical destruction of an object that is exposed to intense heat or explosive force, where the object is actually blasted into small pieces rather than literally converted to gaseous form. Examples of this usage include the "vaporization" of the uninhabited Marshall Island of Elugelab in the 1952 Ivy Mike thermonuclear test. Many other examples can be found throughout the various MythBusters episodes that have involved explosives, chief among them being Cement Mix-Up, where they "vaporized" a cement truck with ANFO. At the moment o Document 4::: Moisture expansion is the tendency of matter to change in volume in response to a change in moisture content. The macroscopic effect is similar to that of thermal expansion but the microscopic causes are very different. Moisture expansion is caused by hygroscopy. Matter The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the term for the process in which water vapor changes to tiny droplets of liquid water? A. dispersion B. vaporization C. condensation D. diffusion Answer:
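The condensation passage above notes that condensation begins once a cooled vapor reaches its saturation limit. A small sketch of that threshold using the Magnus approximation for saturation vapor pressure; this empirical fit and its coefficients are assumptions for illustration, not part of the passage:

```python
import math

# Where condensation begins: the dew point is the temperature at which
# moist air reaches its saturation limit. Saturation vapor pressure is
# modeled with the Magnus approximation, an empirical fit assumed here
# (coefficients in hPa and degrees Celsius).
A, B, C = 6.112, 17.62, 243.12

def saturation_vapor_pressure(t_c):
    return A * math.exp(B * t_c / (C + t_c))   # hPa

def dew_point(t_c, rh):
    """Cooling air at t_c degrees C and relative humidity rh (0..1]
    below this temperature makes water vapor condense."""
    gamma = math.log(rh) + B * t_c / (C + t_c)
    return C * gamma / (B - gamma)

# Air at 25 C and 60% relative humidity condenses below about 16.7 C.
print(round(dew_point(25.0, 0.60), 1))  # -> 16.7
```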
sciq-3195
multiple_choice
Where does the energy from an atomic bomb come from?
[ "nucleus of atom", "neutron", "isotope of atom", "electron shell" ]
A
Relevant Documents: Document 0::: The Los Alamos Primer is a printed version of the first five lectures on the principles of nuclear weapons given to new arrivals at the top-secret Los Alamos laboratory during the Manhattan Project. The five lectures were given by physicist Robert Serber in April 1943. The notes from the lectures which became the Primer were written by Edward Condon. History The Los Alamos Primer was composed from five lectures given by the physicist Robert Serber to the newcomers at the Los Alamos Laboratory in April 1943, at the start of the Manhattan Project. The aim of the project was to build the first nuclear bomb, and these lectures were a very concise introduction to the principles of nuclear weapon design. Serber was a postdoctoral student of J. Robert Oppenheimer, the leader of the Los Alamos Laboratory, and worked with him on the project from the very start. The five lectures were conducted on April 5, 7, 9, 12, and 14, 1943; according to Serber, between 30 and 50 people attended them. Notes were taken by Edward Condon; the Primer is just 24 pages long. Only 36 copies were printed at the time. Serber later described the lectures: Previously the people working at the separate universities had no idea of the whole story. They only knew what part they were working on. So somebody had to give them the picture of what it was all about and what the bomb was like, what was known about the theory, and some idea why they needed the various experimental numbers. In July 1942, Oppenheimer held a "conference" at his office at Berkeley. No records were preserved, but the Primer arose from all the aspects of bomb design discussed there.
Content The Primer, though only 24 pages long, consists of 22 sections, divided into chapters: Preliminaries Neutrons and the fission process Critical mass and efficiency Detonation, pre-detonation, and fizzles Conclusion The first paragraph states the intention of the Los Alamos Laboratory during World War II: The object of the project Document 1::: Taylor–von Neumann–Sedov blast wave (or sometimes referred to as Sedov–von Neumann–Taylor blast wave) refers to a blast wave induced by a strong explosion. The blast wave was described by a self-similar solution independently by G. I. Taylor, John von Neumann and Leonid Sedov during World War II. History G. I. Taylor was told by the British Ministry of Home Security that it might be possible to produce a bomb in which a very large amount of energy would be released by nuclear fission and was asked to report the effect of such weapons. Taylor presented his results on June 27, 1941. Exactly at the same time, in the United States, John von Neumann was working on the same problem and he presented his results on June 30, 1941. It was said that Leonid Sedov was also working on the problem around the same time in the USSR, although Sedov never confirmed any exact dates. The complete solution was published first by Sedov in 1946. von Neumann published his results in August 1947 in the Los Alamos scientific laboratory report on , although that report was distributed only in 1958. Taylor got clearance to publish his results in 1949 and he published his works in two papers in 1950. In the second paper, Taylor calculated the energy of the atomic bomb used in the Trinity nuclear test using the similarity, just by looking at the series of blast wave photographs that had a length scale and time stamps, published by Julian E Mack in 1947.
This calculation of energy caused, in Taylor's own words, 'much embarrassment' (according to Grigory Barenblatt) in US government circles since the number was then still classified although the photographs published by Mack were not. Taylor's biographer George Batchelor writes This estimate of the yield of the first atom bomb explosion caused quite a stir... G.I. was mildly admonished by the US Army for publishing his deductions from their (unclassified) photographs. Mathematical description Consider a strong explosion (such as nuclear bombs) tha Document 2::: Atomic energy or energy of atoms is energy carried by atoms. The term originated in 1903 when Ernest Rutherford began to speak of the possibility of atomic energy. H. G. Wells popularized the phrase "splitting the atom", before discovery of the atomic nucleus. Atomic energy includes: Nuclear binding energy, the energy required to split a nucleus of an atom. Nuclear potential energy, the potential energy of the particles inside an atomic nucleus. Nuclear reaction, a process in which nuclei or nuclear particles interact, resulting in products different from the initial ones; see also nuclear fission and nuclear fusion. Radioactive decay, the set of various processes by which unstable atomic nuclei (nuclides) emit subatomic particles. The energy of inter-atomic or chemical bonds, which holds atoms together in compounds. Atomic energy is the source of nuclear power, which uses sustained nuclear fission to generate heat and electricity. It is also the source of the explosive force of an atomic bomb. Document 3::: Nuclear knowledge management (NKM) is knowledge management as applied in the nuclear technology field. It supports the gathering and sharing of new knowledge and the updating of the existing knowledge base. Knowledge management is of particular importance in the nuclear sector, owing to the rapid development and complexity of nuclear technologies and their hazards and security implications. 
The International Atomic Energy Agency (IAEA) launched a nuclear knowledge management programme in 2002. Definition of nuclear knowledge management Nuclear knowledge management is defined as knowledge management in the nuclear domain. This simple definition is consistent with the working definition used in the IAEA document "Knowledge Management for Nuclear Industry Operating Organizations" (2006). Knowledge management (KM) itself is defined as an integrated, systematic approach to identifying, acquiring, transforming, developing, disseminating, using, sharing, and preserving knowledge, relevant to achieving specified objectives. Description Knowledge management systems support nuclear organizations in strengthening and aligning their knowledge. Knowledge is the nuclear energy industry’s most valuable asset and resource, without which the industry cannot operate safely and economically. Nuclear knowledge is also very complex, expensive to acquire and maintain, and easily lost. States, suppliers, and operating organizations that deploy nuclear technology are responsible for ensuring that the associated nuclear knowledge is maintained and accessible. In the organizational context, nuclear knowledge management supports the organization's business processes, and involves applying knowledge management practices. These may be applied at any stage of a nuclear facility's life cycle: research and development, design and engineering, construction, commissioning, operations, maintenance, refurbishment and life time extension, waste management, and decommissioning. Nuclear knowledge man Document 4::: Atomic Spy: The Dark Lives of Klaus Fuchs is a 2020 biography of Klaus Fuchs, a so-called atomic spy, by Nancy Thorndike Greenspan. The book was published by Viking Press and received several reviews. Fuchs was a physicist who is best known for passing secrets from the Manhattan Project to the Soviet Union during World War II. 
The book paints a sympathetic picture of Fuchs, ultimately arguing that his crime was done with good intentions, for "the betterment of mankind". Though several reviews noted their opposition to this conclusion and the sympathy the book shows to Fuchs, it has received mostly positive reviews. Background Greenspan previously authored the book The End of the Certain World, a biography of the physicist Max Born, in 2005. Fuchs was a German physicist who is best known as an atomic spy, who passed secrets to the Soviet Union while working on the Manhattan Project during World War II. Fuchs moved to Great Britain from Germany in 1937 to escape the Nazi party, where he began working for Max Born at the University of Edinburgh. Despite having obtained citizenship in Britain, in May 1940, during the Second World War, Fuchs was interned as an alien in Canada along with other German Jews and prisoners of war. He was released later that same year and returned to Britain to work on the British atomic bomb project in Birmingham, during which time he became a Soviet agent. Fuchs was sent to the US to work on the Manhattan Project in 1943 before returning to Britain in 1946 for a senior post at the Atomic Energy Research Establishment. Fuchs pleaded guilty to violating the Official Secrets Act of Great Britain on 2 February 1950 and subsequently served a nine-year prison sentence. After his incarceration, he was stripped of his citizenship and was forced to move back to East Germany. Reception The book was reviewed in Nature by Sharon Weinberger, in The Wall Street Journal by Henry Hemming, in The New York Times by Ronald Radosh, and in the Indian newsp The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Where does the energy from an atomic bomb come from? A. nucleus of atom B. neutron C. isotope of atom D. electron shell Answer:
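Taylor's yield estimate from timed fireball photographs, described in the Taylor–von Neumann–Sedov passage above, follows from the similarity scaling E ~ rho * R**5 / t**2. A sketch with illustrative Trinity-like numbers; the radius, time, and the dimensionless constant (taken as 1) are assumptions, not values quoted in the passage:

```python
# Taylor-style yield estimate from a single timed fireball photograph,
# using the similarity scaling E ~ rho * R**5 / t**2 with the
# dimensionless constant taken as 1. Radius and time are illustrative.
rho_air = 1.25     # kg/m^3, ambient air density
R = 140.0          # m, fireball radius read off the photograph
t = 0.025          # s, timestamp of that frame

E = rho_air * R**5 / t**2      # joules
kilotons = E / 4.184e12        # 1 kiloton of TNT = 4.184e12 J

print(f"E ~ {E:.2e} J ~ {kilotons:.0f} kt")
```

With these inputs the estimate lands in the tens of kilotons, the same order as the published Trinity yield, which is what caused the "much embarrassment" the passage mentions.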
sciq-9577
multiple_choice
What compounds contain only carbon and hydrogen?
[ "carbonates", "molecules", "particles", "hydrocarbons" ]
D
Relevant Documents: Document 0::: This is an index of lists of molecules (i.e. by year, number of atoms, etc.). Millions of molecules have existed in the universe since before the formation of Earth. Three of them, carbon dioxide, water and oxygen, were necessary for the growth of life. Although humanity had always been surrounded by these substances, it has not always known what they were composed of. By century The following is an index of lists of molecules organized by time of discovery of their molecular formula or their specific molecule in case of isomers: List of compounds By number of carbon atoms in the molecule List of compounds with carbon number 1 List of compounds with carbon number 2 List of compounds with carbon number 3 List of compounds with carbon number 4 List of compounds with carbon number 5 List of compounds with carbon number 6 List of compounds with carbon number 7 List of compounds with carbon number 8 List of compounds with carbon number 9 List of compounds with carbon number 10 List of compounds with carbon number 11 List of compounds with carbon number 12 List of compounds with carbon number 13 List of compounds with carbon number 14 List of compounds with carbon number 15 List of compounds with carbon number 16 List of compounds with carbon number 17 List of compounds with carbon number 18 List of compounds with carbon number 19 List of compounds with carbon number 20 List of compounds with carbon number 21 List of compounds with carbon number 22 List of compounds with carbon number 23 List of compounds with carbon number 24 List of compounds with carbon numbers 25-29 List of compounds with carbon numbers 30-39 List of compounds with carbon numbers 40-49 List of compounds with carbon numbers 50+ Other lists List of interstellar and circumstellar molecules List of gases List of molecules with unusual names See also Molecule Empirical formula Chemical formula Chemical structure Chemical compound Chemical bond Coordination complex L
Document 1::: In chemistry, the carbon-hydrogen bond ( bond) is a chemical bond between carbon and hydrogen atoms that can be found in many organic compounds. This bond is a covalent, single bond, meaning that carbon shares its outer valence electrons with up to four hydrogens. This completes both of their outer shells, making them stable. Carbon–hydrogen bonds have a bond length of about 1.09 Å (1.09 × 10−10 m) and a bond energy of about 413 kJ/mol (see table below). Using Pauling's scale—C (2.55) and H (2.2)—the electronegativity difference between these two atoms is 0.35. Because of this small difference in electronegativities, the bond is generally regarded as being non-polar. In structural formulas of molecules, the hydrogen atoms are often omitted. Compound classes consisting solely of bonds and bonds are alkanes, alkenes, alkynes, and aromatic hydrocarbons. Collectively they are known as hydrocarbons. In October 2016, astronomers reported that the very basic chemical ingredients of life—the carbon-hydrogen molecule (CH, or methylidyne radical), the carbon-hydrogen positive ion () and the carbon ion ()—are the result, in large part, of ultraviolet light from stars, rather than in other ways, such as the result of turbulent events related to supernovae and young stars, as thought earlier. Bond length The length of the carbon-hydrogen bond varies slightly with the hybridisation of the carbon atom. A bond between a hydrogen atom and an sp2 hybridised carbon atom is about 0.6% shorter than between hydrogen and sp3 hybridised carbon. A bond between hydrogen and sp hybridised carbon is shorter still, about 3% shorter than sp3 C-H. This trend is illustrated by the molecular geometry of ethane, ethylene and acetylene. Reactions The C−H bond in general is very strong, so it is relatively unreactive. In several compound classes, collectively called carbon acids, the C−H bond can be sufficiently acidic for proton removal. 
Unactivated C−H bonds are found in alkanes and are no Document 2::: Carbon is a primary component of all known life on Earth, representing approximately 45–50% of all dry biomass. Carbon compounds occur naturally in great abundance on Earth. Complex biological molecules consist of carbon atoms bonded with other elements, especially oxygen and hydrogen and frequently also nitrogen, phosphorus, and sulfur (collectively known as CHNOPS). Because it is lightweight and relatively small in size, carbon molecules are easy for enzymes to manipulate. It is frequently assumed in astrobiology that if life exists elsewhere in the Universe, it will also be carbon-based. Critics refer to this assumption as carbon chauvinism. Characteristics Carbon is capable of forming a vast number of compounds, more than any other element, with almost ten million compounds described to date, and yet that number is but a fraction of the number of theoretically possible compounds under standard conditions. The enormous diversity of carbon-containing compounds, known as organic compounds, has led to a distinction between them and compounds that do not contain carbon, known as inorganic compounds. The branch of chemistry that studies organic compounds is known as organic chemistry. Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass, after hydrogen, helium, and oxygen. Carbon's widespread abundance, its ability to form stable bonds with numerous other elements, and its unusual ability to form polymers at the temperatures commonly encountered on Earth enables it to serve as a common element of all known living organisms. In a 2018 study, carbon was found to compose approximately 550 billion tons of all life on Earth. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen. 
The most important characteristics of carbon as a basis for the chemistry of life are that each carbon atom is capable of forming up to four valence bonds with other atoms simultaneously Document 3::: A carbon–carbon bond is a covalent bond between two carbon atoms. The most common form is the single bond: a bond composed of two electrons, one from each of the two atoms. The carbon–carbon single bond is a sigma bond and is formed between one hybridized orbital from each of the carbon atoms. In ethane, the orbitals are sp3-hybridized orbitals, but single bonds formed between carbon atoms with other hybridizations do occur (e.g. sp2 to sp2). In fact, the carbon atoms in the single bond need not be of the same hybridization. Carbon atoms can also form double bonds in compounds called alkenes or triple bonds in compounds called alkynes. A double bond is formed with an sp2-hybridized orbital and a p-orbital that is not involved in the hybridization. A triple bond is formed with an sp-hybridized orbital and two p-orbitals from each atom. The use of the p-orbitals forms a pi bond. Chains and branching Carbon is one of the few elements that can form long chains of its own atoms, a property called catenation. This coupled with the strength of the carbon–carbon bond gives rise to an enormous number of molecular forms, many of which are important structural elements of life, so carbon compounds have their own field of study: organic chemistry. Branching is also common in C−C skeletons. Carbon atoms in a molecule are categorized by the number of carbon neighbors they have: A primary carbon has one carbon neighbor. A secondary carbon has two carbon neighbors. A tertiary carbon has three carbon neighbors. A quaternary carbon has four carbon neighbors. In "structurally complex organic molecules", it is the three-dimensional orientation of the carbon–carbon bonds at quaternary loci which dictates the shape of the molecule. 
Further, quaternary loci are found in many biologically active small molecules, such as cortisone and morphine. Synthesis Carbon–carbon bond-forming reactions are organic reactions in which a new carbon–carbon bond is formed. They are important in th Document 4::: Bisnorhopanes (BNH) are a group of demethylated hopanes found in oil shales across the globe and can be used for understanding depositional conditions of the source rock. The most common member, 28,30-bisnorhopane, can be found in high concentrations in petroleum source rocks, most notably the Monterey Shale, as well as in oil and tar samples. 28,30-Bisnorhopane was first identified in samples from the Monterey Shale Formation in 1985. It occurs in abundance throughout the formation and appears in stratigraphically analogous locations along the California coast. Since its identification and analysis, 28,30-bisnorhopane has been discovered in oil shales around the globe, including lacustrine and offshore deposits of Brazil, silicified shales of the Eocene in Gabon, the Kimmeridge Clay Formation in the North Sea, and in Western Australian oil shales. Chemistry 28,30-bisnorhopane exists in three epimers: 17α,18α21β(H), 17β,18α,21α(H), and 17β,18α,21β(H). During GC-MS, the three epimers coelute at the same time and are nearly indistinguishable. However, mass spectral fragmentation of the 28,30-bisnorhopane is predominantly characterized by m/z 191, 177, and 163. The ratios of 163/191 fragments can be used to distinguish the epimers, where the βαβ orientation has the highest, m/z 163/191 ratio. Further, the D/E ring ratios can be used to create a hierarchy of epimer maturity. From this, it is believed that the ααβ epimer is the first-formed, diagenetically, supported also by its percent dominance in younger shales. 28,30-bisnorhopane is created independently from kerogen, instead derived from bitumen, unbound as free oil-hydrocarbons. 
As such, as oil generation increases with source maturation, the concentration of 28,30-bisnorhopane decreases. Bisnorhopane may not be a reliable diagnostic for oil maturity due to microbial biodegradation. Nomenclature Norhopanes are a family of demethylated hopanes, identical to the methylated hopane structure, minus indicated desmet The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What compounds contain only carbon and hydrogen? A. carbonates B. molecules C. particles D. hydrocarbons Answer:
scienceQA-12022
multiple_choice
Which organ works in groups to move the body's bones?
[ "heart", "stomach", "brain", "muscles" ]
D
Relevant Documents: Document 0::: This article contains a list of organs of the human body. A general consensus is widely believed to be 79 organs (this number goes up if you count each bone and muscle as an organ on their own, which is becoming more common practice to do); however, there is no universal standard definition of what constitutes an organ, and some tissue groups' status as one is debated. Since there is no single standard definition of what an organ is, the number of organs varies depending on how one defines an organ. For example, this list contains more than 79 organs (about ~103). It is still not clear which definition of an organ is used for all the organs in this list, it seemed that it may have been compiled based on what wikipedia articles were available on organs. Musculoskeletal system Skeleton Joints Ligaments Muscular system Tendons Digestive system Mouth Teeth Tongue Lips Salivary glands Parotid glands Submandibular glands Sublingual glands Pharynx Esophagus Stomach Small intestine Duodenum Jejunum Ileum Large intestine Cecum Ascending colon Transverse colon Descending colon Sigmoid colon Rectum Liver Gallbladder Mesentery Pancreas Anal canal Appendix Respiratory system Nasal cavity Pharynx Larynx Trachea Bronchi Bronchioles and smaller air passages Lungs Muscles of breathing Urinary system Kidneys Ureter Bladder Urethra Reproductive systems Female reproductive system Internal reproductive organs Ovaries Fallopian tubes Uterus Cervix Vagina External reproductive organs Vulva Clitoris Male reproductive system Internal reproductive organs Testicles Epididymis Vas deferens Prostate External reproductive organs Penis Scrotum Endocrine system Pituitary gland Pineal gland Thyroid gland Parathyroid glands Adrenal glands Pancreas Circulatory system Circulatory system Heart Arteries Veins Capillaries Lymphatic system Lymphatic vessel Lymph node Bone marrow Thymus Spleen Gut-associated lymphoid tissue Tonsils Interstitium Nervous system Central
nervous system Document 1::: Instruments used in Anatomy dissections are as follows: Instrument list Image gallery Document 2::: Work He is an associate professor of anatomy, Department of Anatomy, Howard University College of Medicine (US). He was among the most cited/influential anatomists in 2019. Books Single author or co-author books DIOGO, R. (2021). Meaning of Life, Human Nature and Delusions - How Tales about Love, Sex, Races, Gods and Progress Affect Us and Earth's Splendor. Springer (New York, US). MONTERO, R., ADESOMO, A. & R. DIOGO (2021). On viruses, pandemics, and us: a developing story [De virus, pandemias y nosotros: una historia en desarollo]. Independently published, Tucuman, Argentina. 495 pages. DIOGO, R., J. ZIERMANN, J. MOLNAR, N. SIOMAVA & V. ABDALA (2018). Muscles of Chordates: development, homologies and evolution. Taylor & Francis (Oxford, UK). 650 pages. DIOGO, R., B. SHEARER, J. M. POTAU, J. F. PASTOR, F. J. DE PAZ, J. ARIAS-MARTORELL, C. TURCOTTE, A. HAMMOND, E. VEREECKE, M. VANHOOF, S. NAUWELAERTS & B. WOOD (2017). Photographic and descriptive musculoskeletal atlas of bonobos - with notes on the weight, attachments, variations, and innervation of the muscles and comparisons with common chimpanzees and humans. Springer (New York, US). 259 pages. DIOGO, R. (2017). Evolution driven by organismal behavior: a unifying view of life, function, form, mismatches and trends. Springer Document 3::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. 
It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 4::: Proprioception ( ), also called kinaesthesia (or kinesthesia), is the sense of self-movement, force, and body position. Proprioception is mediated by proprioceptors, mechanosensory neurons located within muscles, tendons, and joints. Most animals possess multiple subtypes of proprioceptors, which detect distinct kinematic parameters, such as joint position, movement, and load. 
Although all mobile animals possess proprioceptors, the structure of the sensory organs can vary across species. Proprioceptive signals are transmitted to the central nervous system, where they are integrated with information from other sensory systems, such as the visual system and the vestibular system, to create an overall representation of body position, movement, and acceleration. In many animals, sensory feedback from proprioceptors is essential for stabilizing body posture and coordinating body movement. System overview In vertebrates, limb movement and velocity (muscle length and the rate of change) are encoded by one group of sensory neurons (type Ia sensory fiber) and another type encode static muscle length (group II neurons). These two types of sensory neurons compose muscle spindles. There is a similar division of encoding in invertebrates; different subgroups of neurons of the Chordotonal organ encode limb position and velocity. To determine the load on a limb, vertebrates use sensory neurons in the Golgi tendon organs: type Ib afferents. These proprioceptors are activated at given muscle forces, which indicate the resistance that muscle is experiencing. Similarly, invertebrates have a mechanism to determine limb load: the Campaniform sensilla. These proprioceptors are active when a limb experiences resistance. A third role for proprioceptors is to determine when a joint is at a specific position. In vertebrates, this is accomplished by Ruffini endings and Pacinian corpuscles. These proprioceptors are activated when the joint is at a threshold position, usually at the extre The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which organ works in groups to move the body's bones? A. heart B. stomach C. brain D. muscles Answer:
sciq-8222
multiple_choice
Like a runner passing a baton, the transmission of nerve impulses between neurons depends on what?
[ "enzymes", "neurotransmitters", "axons", "receptors" ]
B
Relevant Documents: Document 0::: In neuroscience, nerve conduction velocity (CV) is the speed at which an electrochemical impulse propagates down a neural pathway. Conduction velocities are affected by a wide array of factors, which include age, sex, and various medical conditions. Studies allow for better diagnoses of various neuropathies, especially demyelinating diseases as these conditions result in reduced or non-existent conduction velocities. CV is an important aspect of nerve conduction studies. Normal conduction velocities Ultimately, conduction velocities are specific to each individual and depend largely on an axon's diameter and the degree to which that axon is myelinated, but the majority of 'normal' individuals fall within defined ranges. Nerve impulses are extremely slow compared to the speed of electricity, where the electric field can propagate with a speed on the order of 50–99% of the speed of light; however, it is very fast compared to the speed of blood flow, with some myelinated neurons conducting at speeds up to 120 m/s (432 km/h or 275 mph). Different sensory receptors are innervated by different types of nerve fibers. Proprioceptors are innervated by type Ia, Ib and II sensory fibers, mechanoreceptors by type II and III sensory fibers, and nociceptors and thermoreceptors by type III and IV sensory fibers. Normal impulses in peripheral nerves of the legs travel at 40–45 m/s, and those in peripheral nerves of the arms at 50–65 m/s. Largely generalized, normal conduction velocities for any given nerve will be in the range of 50–60 m/s. Testing methods Nerve conduction studies Nerve conduction velocity is just one of many measurements commonly made during a nerve conduction study (NCS). The purpose of these studies is to determine whether nerve damage is present and how severe that damage may be. Nerve conduction studies are performed as follows: Two electrodes are attached to the subject's skin over the nerve being tested.
Electrical impulses are sent through one elec Document 1::: Non-spiking neurons are neurons that are located in the central and peripheral nervous systems and function as intermediary relays for sensory-motor neurons. They do not exhibit the characteristic spiking behavior of action potential generating neurons. Non-spiking neural networks are integrated with spiking neural networks to have a synergistic effect in being able to stimulate some sensory or motor response while also being able to modulate the response. Discovery Animal models There are an abundance of neurons that propagate signals via action potentials and the mechanics of this particular kind of transmission is well understood. Spiking neurons exhibit action potentials as a result of a neuron characteristic known as membrane potential. Through studying these complex spiking networks in animals, a neuron that did not exhibit characteristic spiking behavior was discovered. These neurons use a graded potential to transmit data as they lack the membrane potential that spiking neurons possess. This method of transmission has a huge effect on the fidelity, strength, and lifetime of the signal. Non-spiking neurons were identified as a special kind of interneuron and function as an intermediary point of process for sensory-motor systems. Animals have become substantial models for understanding more about non-spiking neural networks and the role they play in an animal’s ability to process information and its overall function. Animal models indicate that the interneurons modulate directional and posture coordinating behaviors. Crustaceans and arthropods such as the crawfish have created many opportunities to learn about the modulatory role that these neurons have in addition to their potential to be modulated regardless of their lack of exhibiting spiking behavior. Most of the known information about nonspiking neurons is derived from animal models. 
Studies focus on neuromuscular junctions and modulation of abdominal motor cells. Modulatory interneurons are neurons Document 2::: Spike directivity is a vector that quantifies changes in transient charge density during action potential propagation. The digital-like uniformity of action potentials is contradicted by experimental data. Electrophysiologists have observed that the shape of recorded action potentials changes in time. Recent experimental evidence has shown that action potentials in neurons are subject to waveform modulation while they travel down axons or dendrites. The action potential waveform can be modulated by neuron geometry, local alterations in the ion conductance, and other biophysical properties including neurotransmitter release. See also Cellular neuroscience Neuron NeuroElectroDynamics Document 3::: In neuroethology and the study of learning, anti-Hebbian learning describes a particular class of learning rule by which synaptic plasticity can be controlled. These rules are based on a reversal of Hebb's postulate, and therefore can be simplistically understood as dictating reduction of the strength of synaptic connectivity between neurons following a scenario in which a neuron directly contributes to production of an action potential in another neuron. Evidence from neuroethology Neuroethological study has provided strong evidence for the existence of a system which adheres to an anti-Hebbian learning rule. Research on the mormyrid electric fish has demonstrated that the electrosensory lateral-line lobe (ELL) receives sensory input from knollenorgans (electroreceptive sensory organs) which utilize a self-generated electrical discharge (called an EOD; electric organ discharge) to extract information from the environment about objects in close proximity to the fish. 
In addition to information from sensory receptors, the ELL receives a signal from the area of the brain responsible for initiating the electrical discharges, known as the EOD command nucleus. This efference copy diverges, transmitted through two separate pathways, before the signals converge along with electrosensory input on Purkinje-like Medium Ganglion cells in the ELL. These cells receive information through extensive apical dendritic projections from parallel fibers that signal the transmission of an order to release an EOD. These cells also receive information from neurons conveying electrosensory information. Important to anti-Hebbian learning, the synapses between the parallel fibers and the apical dendrites of Medium Ganglion cells show a specific pattern of synaptic plasticity. Should activation of the dendrites by parallel fibers occur in a short time period preceding the initiation of a dendritic broad spike (an action potential which travels through the dendrites), the strength of the co Document 4::: Neurotransmitters are released into a synapse in packaged vesicles called quanta. One quantum generates a miniature end plate potential (MEPP) which is the smallest amount of stimulation that one neuron can send to another neuron. Quantal release is the mechanism by which most traditional endogenous neurotransmitters are transmitted throughout the body. The aggregate sum of many MEPPs is an end plate potential (EPP). A normal end plate potential usually causes the postsynaptic neuron to reach its threshold of excitation and elicit an action potential. Electrical synapses do not use quantal neurotransmitter release and instead use gap junctions between neurons to send current flows between neurons. The goal of any synapse is to produce either an excitatory postsynaptic potential (EPSP) or an inhibitory postsynaptic potential (IPSP), which generate or repress the expression, respectively, of an action potential in the postsynaptic neuron. 
It is estimated that an action potential will trigger the release of approximately 20% of an axon terminal's neurotransmitter load. Quantal neurotransmitter release mechanism Neurotransmitters are synthesized in the axon terminal where they are stored in vesicles. These neurotransmitter-filled vesicles are the quanta that will be released into the synapse. Quantal vesicles release their contents into the synapse by binding to the presynaptic membrane and combining their phospholipid bilayers. Individual quanta may randomly diffuse into the synapse and cause a subsequent MEPP. These spontaneous occurrences are completely random and are not the result of any kind of signaling pathway. Calcium ion signaling to the axon terminal is the usual signal for presynaptic release of neurotransmitters. Calcium ion diffusion into the presynaptic membrane signals the axon terminal to release quanta to generate either an IPSP or EPSP in the postsynaptic membrane. Release of different neurotransmitters will lead to different postsynaptic potential The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Like a runner passing a baton, the transmission of nerve impulses between neurons depends on what? A. enzymes B. neurotransmitters C. axons D. receptors Answer:
sciq-3924
multiple_choice
What is an interaction between organisms or species for the same resources in an environment called?
[ "rivalry", "competition", "contention", "opposition" ]
B
Relevant Documents: Document 0::: Interspecific competition, in ecology, is a form of competition in which individuals of different species compete for the same resources in an ecosystem (e.g. food or living space). This can be contrasted with mutualism, a type of symbiosis. Competition between members of the same species is called intraspecific competition. If a tree species in a dense forest grows taller than surrounding tree species, it is able to absorb more of the incoming sunlight. However, less sunlight is then available for the trees that are shaded by the taller tree, thus interspecific competition. Leopards and lions can also be in interspecific competition, since both species feed on the same prey, and can be negatively impacted by the presence of the other because they will have less food. Competition is only one of many interacting biotic and abiotic factors that affect community structure. Moreover, competition is not always a straightforward, direct, interaction. Interspecific competition may occur when individuals of two separate species share a limiting resource in the same area. If the resource cannot support both populations, then lowered fecundity, growth, or survival may result in at least one species. Interspecific competition has the potential to alter populations, communities and the evolution of interacting species. On an individual organism level, competition can occur as interference or exploitative competition. Types All of the types described here can also apply to intraspecific competition, that is, competition among individuals within a species. Also, any specific example of interspecific competition can be described in terms of both a mechanism (e.g., resource or interference) and an outcome (symmetric or asymmetric).
Based on mechanism Exploitative competition, also referred to as resource competition, is a form of competition in which one species consumes and either reduces or more efficiently uses a shared limiting resource and therefore depletes the availab Document 1::: Competition is an interaction between organisms or species in which both require a resource that is in limited supply (such as food, water, or territory). Competition lowers the fitness of both organisms involved since the presence of one of the organisms always reduces the amount of the resource available to the other. In the study of community ecology, competition within and between members of a species is an important biological interaction. Competition is one of many interacting biotic and abiotic factors that affect community structure, species diversity, and population dynamics (shifts in a population over time). There are three major mechanisms of competition: interference, exploitation, and apparent competition (in order from most direct to least direct). Interference and exploitation competition can be classed as "real" forms of competition, while apparent competition is not, as organisms do not share a resource, but instead share a predator. Competition among members of the same species is known as intraspecific competition, while competition between individuals of different species is known as interspecific competition. According to the competitive exclusion principle, species less suited to compete for resources must either adapt or die out, although competitive exclusion is rarely found in natural ecosystems. According to evolutionary theory, competition within and between species for resources is important in natural selection. More recently, however, researchers have suggested that evolutionary biodiversity for vertebrates has been driven not by competition between organisms, but by these animals adapting to colonize empty livable space; this is termed the 'Room to Roam' hypothesis. 
Interference competition During interference competition, also called contest competition, organisms interact directly by fighting for scarce resources. For example, large aphids defend feeding sites on cottonwood leaves by ejecting smaller aphids from better sites. Document 2::: Any action or influence that species have on each other is considered a biological interaction. These interactions between species can be considered in several ways. One such way is to depict interactions in the form of a network, which identifies the members and the patterns that connect them. Species interactions are considered primarily in terms of trophic interactions, which depict which species feed on others. Currently, ecological networks that integrate non-trophic interactions are being built. The type of interactions they can contain can be classified into six categories: mutualism, commensalism, neutralism, amensalism, antagonism, and competition. Observing and estimating the fitness costs and benefits of species interactions can be very problematic. The way interactions are interpreted can profoundly affect the ensuing conclusions. Interaction characteristics Characterization of interactions can be made according to various measures, or any combination of them. Prevalence Prevalence identifies the proportion of the population affected by a given interaction, and thus quantifies whether it is relatively rare or common. Generally, only common interactions are considered. Negative/ Positive Whether the interaction is beneficial or harmful to the species involved determines the sign of the interaction, and what type of interaction it is classified as. To establish whether they are harmful or beneficial, careful observational and/or experimental studies can be conducted, in an attempt to establish the cost/benefit balance experienced by the members. Strength The sign of an interaction does not capture the impact on fitness of that interaction. 
One example of this is of antagonism, in which predators may have a much stronger impact on their prey species (death), than parasites (reduction in fitness). Similarly, positive interactions can produce anything from a negligible change in fitness to a life or death impact. Relationship in space and time The rel Document 3::: Microbial population biology is the application of the principles of population biology to microorganisms. Distinguishing from other biological disciplines Microbial population biology, in practice, is the application of population ecology and population genetics toward understanding the ecology and evolution of bacteria, archaebacteria, microscopic fungi (such as yeasts), additional microscopic eukaryotes (e.g., "protozoa" and algae), and viruses. Microbial population biology also encompasses the evolution and ecology of community interactions (community ecology) between microorganisms, including microbial coevolution and predator-prey interactions. In addition, microbial population biology considers microbial interactions with more macroscopic organisms (e.g., host-parasite interactions), though strictly this should be more from the perspective of the microscopic rather than the macroscopic organism. A good deal of microbial population biology may be described also as microbial evolutionary ecology. On the other hand, typically microbial population biologists (unlike microbial ecologists) are less concerned with questions of the role of microorganisms in ecosystem ecology, which is the study of nutrient cycling and energy movement between biotic as well as abiotic components of ecosystems. Microbial population biology can include aspects of molecular evolution or phylogenetics. Strictly, however, these emphases should be employed toward understanding issues of microbial evolution and ecology rather than as a means of understanding more universal truths applicable to both microscopic and macroscopic organisms. 
The microorganisms in such endeavors consequently should be recognized as organisms rather than simply as molecular or evolutionary reductionist model systems. Thus, the study of RNA in vitro evolution is not microbial population biology and nor is the in silico generation of phylogenies of otherwise non-microbial sequences, even if aspects of either may Document 4::: In ecology, a biological interaction is the effect that a pair of organisms living together in a community have on each other. They can be either of the same species (intraspecific interactions), or of different species (interspecific interactions). These effects may be short-term, or long-term, both often strongly influence the adaptation and evolution of the species involved. Biological interactions range from mutualism, beneficial to both partners, to competition, harmful to both partners. Interactions can be direct when physical contact is established or indirect, through intermediaries such as shared resources, territories, ecological services, metabolic waste, toxins or growth inhibitors. This type of relationship can be shown by net effect based on individual effects on both organisms arising out of relationship. Several recent studies have suggested non-trophic species interactions such as habitat modification and mutualisms can be important determinants of food web structures. However, it remains unclear whether these findings generalize across ecosystems, and whether non-trophic interactions affect food webs randomly, or affect specific trophic levels or functional groups. History Although biological interactions, more or less individually, were studied earlier, Edward Haskell (1949) gave an integrative approach to the thematic, proposing a classification of "co-actions", later adopted by biologists as "interactions". Close and long-term interactions are described as symbiosis; symbioses that are mutually beneficial are called mutualistic. 
The term symbiosis was subject to a century-long debate about whether it should specifically denote mutualism, as in lichens or in parasites that benefit themselves. This debate created two different classifications for biotic interactions, one based on time (long-term and short-term interactions), and the other based on the magnitude of interaction force (competition/mutualism) or effect on individual fitness, accordi The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is an interaction between organisms or species for the same resources in an environment called? A. rivalry B. competition C. contention D. opposition Answer:
scienceQA-2804
multiple_choice
What do these two changes have in common? an old sandwich rotting in a trashcan; milk going sour
[ "Both are caused by heating.", "Both are only physical changes.", "Both are chemical changes.", "Both are caused by cooling." ]
C
Step 1: Think about each change. A sandwich rotting is a chemical change. The matter in the sandwich breaks down and slowly turns into a different type of matter. Milk going sour is a chemical change. The type of matter in the milk slowly changes. The new matter that is formed gives the milk its sour taste. Step 2: Look at each answer choice. Both are only physical changes. Both changes are chemical changes. They are not physical changes. Both are chemical changes. Both changes are chemical changes. The type of matter before and after each change is different. Both are caused by heating. Neither change is caused by heating. Both are caused by cooling. Neither change is caused by cooling.
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. 
Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation. Magnetism Ferro-magnetic materials can become magnetic. The process is reve Document 2::: Rancidification is the process of complete or incomplete autoxidation or hydrolysis of fats and oils when exposed to air, light, moisture, or bacterial action, producing short-chain aldehydes, ketones and free fatty acids. When these processes occur in food, undesirable odors and flavors can result. In processed meats, these flavors are collectively known as warmed-over flavor. In certain cases, however, the flavors can be desirable (as in aged cheeses). Rancidification can also detract from the nutritional value of food, as some vitamins are sensitive to oxidation. Similar to rancidification, oxidative degradation also occurs in other hydrocarbons, such as lubricating oils, fuels, and mechanical cutting fluids. Pathways Five pathways for rancidification are recognized: Hydrolytic Hydrolytic rancidity refers to the odor that develops when triglycerides are hydrolyzed and free fatty acids are released. This reaction of lipid with water may require a catalyst (such as a lipase, or acidic or alkaline conditions) leading to the formation of free fatty acids and glycerol. In particular, short-chain fatty acids, such as butyric acid, are malodorous. When short-chain fatty acids are produced, they serve as catalysts themselves, further accelerating the reaction, a form of autocatalysis. Oxidative Oxidative rancidity is associated with the degradation by oxygen in the air. Free-radical oxidation The double bonds of an unsaturated fatty acid can be cleaved by free-radical reactions involving molecular oxygen. This reaction causes the release of malodorous and highly volatile aldehydes and ketones. Because of the nature of free-radical reactions, the reaction is catalyzed by sunlight. Oxidation primarily occurs with unsaturated fats. 
For example, even though meat is held under refrigeration or in a frozen state, the poly-unsaturated fat will continue to oxidize and slowly become rancid. The fat oxidation process, potentially resulting in rancidity, begins immediately Document 3::: Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria. Introduction Traditional exam script marking began in Cambridge 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.) Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. 
But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental. The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel Document 4::: In cooking, proofing (also called proving) is a step in the preparation of yeast bread and other baked goods in which the dough is allowed to rest and rise a final time before baking. During this rest period, yeast ferments the dough and produces gases, thereby leavening the dough. In contrast, proofing or blooming yeast (as opposed to proofing the dough) may refer to the process of first suspending yeast in warm water, a necessary hydration step when baking with active dry yeast. Proofing can also refer to the process of testing the viability of dry yeast by suspending it in warm water with carbohydrates (sugars). If the yeast is still alive, it will feed on the sugar and produce a visible layer of foam on the surface of the water mixture. Fermentation rest periods are not always explicitly named, and can appear in recipes as "Allow dough to rise." When they are named, terms include "bulk fermentation", "first rise", "second rise", "final proof" and "shaped proof". Dough processes The process of making yeast-leavened bread involves a series of alternating work and rest periods. Work periods occur when the dough is manipulated by the baker. Some work periods are called mixing, kneading, and folding, as well as division, shaping, and panning. Work periods are typically followed by rest periods, which occur when dough is allowed to sit undisturbed. 
Particular rest periods include, but are not limited to, autolyse, bulk fermentation and proofing. Proofing, also sometimes called final fermentation, is the specific term for allowing dough to rise after it has been shaped and before it is baked. Some breads begin mixing with an autolyse. This refers to a period of rest after the initial mixing of flour and water, a rest period that occurs sequentially before the addition of yeast, salt and other ingredients. This rest period allows for better absorption of water and helps the gluten and starches to align. The autolyse is credited to Raymond Calvel, who recommende The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do these two changes have in common? an old sandwich rotting in a trashcan milk going sour A. Both are caused by heating. B. Both are only physical changes. C. Both are chemical changes. D. Both are caused by cooling. Answer:
sciq-10999
multiple_choice
Ocean water appears cyan because microbes in the water preferentially absorb what color of light?
[ "yellow", "blue", "red", "green" ]
C
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. 
A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 2::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. 
The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions on ecological concepts (such as population studies and general ecology) on the E test, and on molecular concepts (such as DNA structure, translation, and biochemistry) on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 4::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. 
Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Ocean water appears cyan because microbes in the water preferentially absorb what color of light? A. yellow B. blue C. red D. green Answer:
ai2_arc-36
multiple_choice
Rain forests contain more species of trees than any other biome. However, scientists have found that the soil of the forest floor is relatively nutrient poor. What could most likely account for this?
[ "The lack of weathering reduces the availability of the minerals.", "The nutrients are being utilized by the plant life.", "The forest floor does not get enough sunlight.", "The animals eat the nutrients." ]
B
Relevant Documents: Document 0::: Plant ecology is a subdiscipline of ecology that studies the distribution and abundance of plants, the effects of environmental factors upon the abundance of plants, and the interactions among plants and between plants and other organisms. Examples of these are the distribution of temperate deciduous forests in North America, the effects of drought or flooding upon plant survival, and competition among desert plants for water, or effects of herds of grazing animals upon the composition of grasslands. A global overview of the Earth's major vegetation types is provided by O.W. Archibold. He recognizes 11 major vegetation types: tropical forests, tropical savannas, arid regions (deserts), Mediterranean ecosystems, temperate forest ecosystems, temperate grasslands, coniferous forests, tundra (both polar and high mountain), terrestrial wetlands, freshwater ecosystems and coastal/marine systems. This breadth of topics shows the complexity of plant ecology, since it includes plants from floating single-celled algae up to large canopy forming trees. One feature that defines plants is photosynthesis. Photosynthesis is the process of chemical reactions that create glucose and oxygen, which is vital for plant life. One of the most important aspects of plant ecology is the role plants have played in creating the oxygenated atmosphere of earth, an event that occurred some 2 billion years ago. It can be dated by the deposition of banded iron formations, distinctive sedimentary rocks with large amounts of iron oxide. At the same time, plants began removing carbon dioxide from the atmosphere, thereby initiating the process of controlling Earth's climate. A long term trend of the Earth has been toward increasing oxygen and decreasing carbon dioxide, and many other events in the Earth's history, like the first movement of life onto land, are likely tied to this sequence of events. One of the early classic books on plant ecology was written by J.E.
Weaver and F.E. Clements. It Document 1::: The forest floor, also called detritus or duff, is the part of a forest ecosystem that mediates between the living, aboveground portion of the forest and the mineral soil, principally composed of dead and decaying plant matter such as rotting wood and shed leaves. In some countries, like Canada, forest floor refers to L, F and H organic horizons. It hosts a wide variety of decomposers and predators, including invertebrates, fungi, algae, bacteria, and archaea. The forest floor serves as a bridge between the above ground living vegetation and the soil, and thus is a crucial component in nutrient transfer through the biogeochemical cycle. Leaf litter and other plant litter transmits nutrients from plants to the soil. The plant litter of the forest floor (or L horizon) prevents erosion, conserves moisture, and provides nutrients to the entire ecosystem. The F horizon consists of plant material in which decomposition is apparent, but the origins of plant residues are still distinguishable. The H horizon consists of well-decomposed plant material so that plant residues are not recognizable, with the exception of some roots or wood. The nature of the distinction between organisms "in" the soil and components "of" the soil is disputed, with some questioning whether such a distinction exists at all. The majority of carbon storage and biomass production in forests occurs below ground. Despite this, conservation policy and scientific study tends to neglect the below-ground portion of the forest ecosystem. As a crucial part of soil and the below-ground ecosystem, the forest floor profoundly impacts the entire forest. Much of the energy and carbon fixed by forests is periodically added to the forest floor through litterfall, and a substantial portion of the nutrient requirements of forest ecosystems is supplied by decomposition of organic matter in the forest floor and soil surface. 
Decomposers, such as arthropods and fungi, are necessary for the transformation of dead orga Document 2::: Soil biodiversity refers to the relationship of soil to biodiversity and to aspects of the soil that can be managed in relative to biodiversity. Soil biodiversity relates to some catchment management considerations. Biodiversity According to the Australian Department of the Environment and Water Resources, biodiversity is "the variety of life: the different plants, animals and micro-organisms, their genes and the ecosystems of which they are a part." Biodiversity and soil are strongly linked, because soil is the medium for a large variety of organisms, and interacts closely with the wider biosphere. Conversely, biological activity is a primary factor in the physical and chemical formation of soils. Soil provides a vital habitat, primarily for microbes (including bacteria and fungi), but also for microfauna (such as protozoa and nematodes), mesofauna (such as microarthropods and enchytraeids), and macrofauna (such as earthworms, termites, and millipedes). The primary role of soil biota is to recycle organic matter that is derived from the "above-ground plant-based food web". Soil is in close cooperation with the wider biosphere. The maintenance of fertile soil is "one of the most vital ecological services the living world performs", and the "mineral and organic contents of soil must be replenished constantly as plants consume soil elements and pass them up the food chain". The correlation of soil and biodiversity can be observed spatially. For example, both natural and agricultural vegetation boundaries correspond closely to soil boundaries, even at continental and global scales. A "subtle synchrony" is how Baskin (1997) describes the relationship that exists between the soil and the diversity of life, above and below the ground. It is not surprising that soil management has a direct effect on biodiversity. 
This includes practices that influence soil volume, structure, biological, and chemical characteristics, and whether soil exhibits adverse effects such as re Document 3::: Agroecosystems are the ecosystems supporting the food production systems in farms and gardens. As the name implies, at the core of an agroecosystem lies the human activity of agriculture. As such they are the basic unit of study in Agroecology, and Regenerative Agriculture using ecological approaches. Like other ecosystems, agroecosystems form partially closed systems in which animals, plants, microbes, and other living organisms and their environment are interdependent and regularly interact. They are somewhat arbitrarily defined as a spatially and functionally coherent unit of agricultural activity. An agroecosystem can be seen as not restricted to the immediate site of agricultural activity (e.g. the farm). That is, it includes the region that is impacted by this activity, usually by changes to the complexity of species assemblages and energy flows, as well as to the net nutrient balance. Agroecosystems, particularly those managed intensively, are characterized as having simpler species composition, energy and nutrient flows than "natural" ecosystems. Likewise, agroecosystems are often associated with elevated nutrient input, much of which exits the farm leading to eutrophication of connected ecosystems not directly engaged in agriculture. Utilization Forest gardens are probably the world's oldest and most resilient agroecosystem. Forest gardens originated in prehistoric times along jungle-clad river banks and in the wet foothills of monsoon regions. In the gradual process of a family improving their immediate environment, useful tree and vine species were identified, protected and improved whilst undesirable species were eliminated. Eventually superior foreign species were selected and incorporated into the family's garden. 
Some major organizations are hailing farming within agroecosystems as the way forward for mainstream agriculture. Current farming methods have resulted in over-stretched water resources, high levels of erosion and reduced soil fertility. Document 4::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell / need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines.
Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Rain forests contain more species of trees than any other biome. However, scientists have found that the soil of the forest floor is relatively nutrient poor. What could most likely account for this? A. The lack of weathering reduces the availability of the minerals. B. The nutrients are being utilized by the plant life. C. The forest floor does not get enough sunlight. D. The animals eat the nutrients. Answer:
sciq-8426
multiple_choice
How do the vast majority of fish reproduce?
[ "by budding", "sexually", "asexually", "cloning" ]
B
Relevant Documents: Document 0::: The Bachelor of Fisheries Science (B.F.Sc) is a bachelor's degree for studies in fisheries science in India. "Fisheries science" is the academic discipline of managing and understanding fisheries. It is a multidisciplinary science, which draws on the disciplines of aquaculture including breeding, genetics, biotechnology, nutrition, farming, diagnosis of diseases in fishes, other aquatic resources, medical treatment of aquatic animals; fish processing including curing, canning, freezing, value addition, byproducts and waste utilization, quality assurance and certification, fisheries microbiology, fisheries biochemistry; fisheries resource management including biology, anatomy, taxonomy, physiology, population dynamics; fisheries environment including oceanography, limnology, ecology, biodiversity, aquatic pollution; fishing technology including gear and craft engineering, navigation and seamanship, marine engines; fisheries economics and management and fisheries extension. Fisheries science is generally a 4-year course typically taught in a university setting, and can be the focus of an undergraduate, postgraduate or Ph.D. program. Bachelor level fisheries courses (B.F.Sc) were started by the state agricultural universities to make available the much needed technically competent personnel for teaching, research and development and transfer of technology in the field of fisheries science. History Fisheries education in India, started with the establishment of the Central Institute of Fisheries Education, Mumbai in 1961 for in service training and later the establishment of the first Fisheries College at Mangalore under the State Agricultural University (SAU) system in 1969, has grown manifold and evolved in the last four decades as a professional discipline consisting of Bachelors, Masters and Doctoral programmes in various branches of Fisheries Science. 
At present, 25 Fisheries Colleges offer four-year degree programme in Bachelor of Fisheries Science (B.F.Sc), whi Document 1::: Fisheries science is the academic discipline of managing and understanding fisheries. It is a multidisciplinary science, which draws on the disciplines of limnology, oceanography, freshwater biology, marine biology, meteorology, conservation, ecology, population dynamics, economics, statistics, decision analysis, management, and many others in an attempt to provide an integrated picture of fisheries. In some cases new disciplines have emerged, as in the case of bioeconomics and fisheries law. Because fisheries science is such an all-encompassing field, fisheries scientists often use methods from a broad array of academic disciplines. Over the most recent several decades, there have been declines in fish stocks (populations) in many regions along with increasing concern about the impact of intensive fishing on marine and freshwater biodiversity. Fisheries science is typically taught in a university setting, and can be the focus of an undergraduate, master's or Ph.D. program. Some universities offer fully integrated programs in fisheries science. Graduates of university fisheries programs typically find employment as scientists, fisheries managers of both recreational and commercial fisheries, researchers, aquaculturists, educators, environmental consultants and planners, conservation officers, and many others. Fisheries research Because fisheries take place in a diverse set of aquatic environments (i.e., high seas, coastal areas, large and small rivers, and lakes of all sizes), research requires different sampling equipment, tools, and techniques. For example, studying trout populations inhabiting mountain lakes requires a very different set of sampling tools than, say, studying salmon in the high seas. 
Ocean fisheries research vessels (FRVs) often require platforms which are capable of towing different types of fishing nets, collecting plankton or water samples from a range of depths, and carrying acoustic fish-finding equipment. Fisheries research vessels a Document 2::: Asexual reproduction in starfish takes place by fission or through autotomy of arms. In fission, the central disc breaks into two pieces and each portion then regenerates the missing parts. In autotomy, an arm is shed with part of the central disc attached, which continues to live independently as a "comet", eventually growing a new set of arms. Although almost all sea stars can regenerate their limbs, only a select few sea star species are able to reproduce in these ways. Fission Fissiparity in the starfish family Asteriidae is confined to the genera Coscinasterias, Stephanasterias and Sclerasterias. Another family in which asexual reproduction by fission has independently arisen is the Asterinidae. The life span is at least four years. A dense population of Stephanasterias albula was studied at North Lubec, Maine. All the individuals were fairly small, with arm lengths not exceeding , but no juveniles were found, suggesting that there had been no recent larval recruitment and that this species may be obligately fissiparous. Fission seemed to take place only in the spring and summer and for any individual, occurred once a year or once every two years. Another species, Coscinasterias tenuispina, has a variable number of arms but is often found with 7 arms divided into dis-similar sized groups of 3 and 4. It is unclear why fission starts in any particular part of the disc rather than any other, but the origin seemed to bear some relation to the position of the madreporites and the longest arm. This species typically reproduces sexually in the winter and by fission at other times of year. The undivided individual has 1 to 5 madreporites and at least one is found in each offspring. 
New arms usually appear in groups of 4 and are normally accompanied by the appearance of additional madreporites. The presence of multiple madreporites seems to be a prerequisite of fission. In Brazil, only male individuals have been found and fission takes place all the year round, thou Document 3::: A fish (: fish or fishes) is an aquatic, craniate, gill-bearing animal that lacks limbs with digits. Included in this definition are the living hagfish, lampreys, and cartilaginous and bony fish as well as various extinct related groups. Approximately 95% of living fish species are ray-finned fish, belonging to the class Actinopterygii, with around 99% of those being teleosts. The earliest organisms that can be classified as fish were soft-bodied chordates that first appeared during the Cambrian period. Although they lacked a true spine, they possessed notochords which allowed them to be more agile than their invertebrate counterparts. Fish would continue to evolve through the Paleozoic era, diversifying into a wide variety of forms. Many fish of the Paleozoic developed external armor that protected them from predators. The first fish with jaws appeared in the Silurian period, after which many (such as sharks) became formidable marine predators rather than just the prey of arthropods. Most fish are ectothermic ("cold-blooded"), allowing their body temperatures to vary as ambient temperatures change, though some of the large active swimmers like white shark and tuna can hold a higher core temperature. Fish can acoustically communicate with each other, most often in the context of feeding, aggression or courtship. Fish are abundant in most bodies of water. They can be found in nearly all aquatic environments, from high mountain streams (e.g., char and gudgeon) to the abyssal and even hadal depths of the deepest oceans (e.g., cusk-eels and snailfish), although no species has yet been documented in the deepest 25% of the ocean. 
With 34,300 described species, fish exhibit greater species diversity than any other group of vertebrates. Fish are an important resource for humans worldwide, especially as food. Commercial and subsistence fishers hunt fish in wild fisheries or farm them in ponds or in cages in the ocean (in aquaculture). They are also caught by recreational Document 4::: External fertilization is a mode of reproduction in which a male organism's sperm fertilizes a female organism's egg outside of the female's body. It is contrasted with internal fertilization, in which sperm are introduced via insemination and then combine with an egg inside the body of a female organism. External fertilization typically occurs in water or a moist area to facilitate the movement of sperm to the egg. The release of eggs and sperm into the water is known as spawning. In motile species, spawning females often travel to a suitable location to release their eggs. However, sessile species are less able to move to spawning locations and must release gametes locally. Among vertebrates, external fertilization is most common in amphibians and fish. Invertebrates utilizing external fertilization are mostly benthic, sessile, or both, including animals such as coral, sea anemones, and tube-dwelling polychaetes. Benthic marine plants also use external fertilization to reproduce. Environmental factors and timing are key challenges to the success of external fertilization. While in the water, the male and female must both release gametes at similar times in order to fertilize the egg. Gametes spawned into the water may also be washed away, eaten, or damaged by external factors. Sexual selection Sexual selection may not seem to occur during external fertilization, but there are ways it actually can. The two types of external fertilizers are nest builders and broadcast spawners. For female nest builders, the main choice is the location of where to lay her eggs. 
A female can choose a nest close to the male she wants to fertilize her eggs, but there is no guarantee that the preferred male will fertilize any of the eggs. Broadcast spawners have a very weak selection, due to the randomness of releasing gametes. To look into the effect of female choice on external fertilization, an in vitro sperm competition experiment was performed. The results concluded that ther The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How do the vast majority of fish reproduce? A. by budding B. sexually C. asexually D. cloning Answer:
sciq-8339
multiple_choice
Gain or loss of what causes an atom to become a negatively or positively charged ion?
[ "protons", "nucleus", "electrons", "neutrons" ]
C
Relevant Documents: Document 0::: An ion () is an atom or molecule with a net electrical charge. The charge of an electron is considered to be negative by convention and this charge is equal and opposite to the charge of a proton, which is considered to be positive by convention. The net charge of an ion is not zero because its total number of electrons is unequal to its total number of protons. A cation is a positively charged ion with fewer electrons than protons while an anion is a negatively charged ion with more electrons than protons. Opposite electric charges are pulled towards one another by electrostatic force, so cations and anions attract each other and readily form ionic compounds. Ions consisting of only a single atom are termed atomic or monatomic ions, while two or more atoms form molecular ions or polyatomic ions. In the case of physical ionization in a fluid (gas or liquid), "ion pairs" are created by spontaneous molecule collisions, where each generated pair consists of a free electron and a positive ion. Ions are also created by chemical interactions, such as the dissolution of a salt in liquids, or by other means, such as passing a direct current through a conducting solution, dissolving an anode via ionization. History of discovery The word ion was coined from Greek neuter present participle of ienai (), meaning "to go". A cation is something that moves down ( pronounced kato, meaning "down") and an anion is something that moves up (, meaning "up"). They are so called because ions move toward the electrode of opposite charge. This term was introduced (after a suggestion by the English polymath William Whewell) by English physicist and chemist Michael Faraday in 1834 for the then-unknown species that goes from one electrode to the other through an aqueous medium. 
Faraday did not know the nature of these species, but he knew that since metals dissolved into and entered a solution at one electrode and new metal came forth from a solution at the other electrode; that some kind of Document 1::: An atom is a particle that consists of a nucleus of protons and neutrons surrounded by an electromagnetically-bound cloud of electrons. The atom is the basic particle of the chemical elements, and the chemical elements are distinguished from each other by the number of protons that are in their atoms. For example, any atom that contains 11 protons is sodium, and any atom that contains 29 protons is copper. The number of neutrons defines the isotope of the element. Atoms are extremely small, typically around 100 picometers across. A human hair is about a million carbon atoms wide. This is smaller than the shortest wavelength of visible light, which means humans cannot see atoms with conventional microscopes. Atoms are so small that accurately predicting their behavior using classical physics is not possible due to quantum effects. More than 99.94% of an atom's mass is in the nucleus. Each proton has a positive electric charge, while each electron has a negative charge, and the neutrons, if any are present, have no electric charge. If the numbers of protons and electrons are equal, as they normally are, then the atom is electrically neutral. If an atom has more electrons than protons, then it has an overall negative charge, and is called a negative ion (or anion). Conversely, if it has more protons than electrons, it has a positive charge, and is called a positive ion (or cation). The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. 
Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay. Atoms can attach to one or more other atoms by chemical bonds to Document 2::: In physics, a charged particle is a particle with an electric charge. It may be an ion, such as a molecule or atom with a surplus or deficit of electrons relative to protons. It can also be an electron or a proton, or another elementary particle, which are all believed to have the same charge (except antimatter). Another charged particle may be an atomic nucleus devoid of electrons, such as an alpha particle. A plasma is a collection of charged particles, atomic nuclei and separated electrons, but can also be a gas containing a significant proportion of charged particles. Charged particles are labeled as either positive (+) or negative (-). Only the existence of two "types" of charges are known, and the designations themselves are arbitrarily named. Nothing is inherent to a positively charged particle that makes it "positive", and the same goes for negatively charged particles. Examples Positively charged particles protons and atomic nuclei positrons (antielectrons) alpha particles positive charged pions cations Negatively charged particles electrons antiprotons muons tauons negative charged pions anions Particles without an electric charge neutrons photons neutrinos neutral pions z boson higgs boson atoms Document 3::: In physics and chemistry, ionization energy (IE) (American English spelling), ionisation energy (British English spelling) is the minimum energy required to remove the most loosely bound electron of an isolated gaseous atom, positive ion, or molecule. 
The first ionization energy is quantitatively expressed as X(g) + energy ⟶ X+(g) + e− where X is any atom or molecule, X+ is the resultant ion when the original atom was stripped of a single electron, and e− is the removed electron. Ionization energy is positive for neutral atoms, meaning that the ionization is an endothermic process. Roughly speaking, the closer the outermost electrons are to the nucleus of the atom, the higher the atom's ionization energy. In physics, ionization energy is usually expressed in electronvolts (eV) or joules (J). In chemistry, it is expressed as the energy to ionize a mole of atoms or molecules, usually as kilojoules per mole (kJ/mol) or kilocalories per mole (kcal/mol). Comparison of ionization energies of atoms in the periodic table reveals two periodic trends which follow the rules of Coulombic attraction: Ionization energy generally increases from left to right within a given period (that is, row). Ionization energy generally decreases from top to bottom in a given group (that is, column). The latter trend results from the outer electron shell being progressively farther from the nucleus, with the addition of one inner shell per row as one moves down the column. The nth ionization energy refers to the amount of energy required to remove the most loosely bound electron from the species having a positive charge of (n − 1). For example, the first three ionization energies are defined as follows: 1st ionization energy is the energy that enables the reaction X ⟶ X+ + e− 2nd ionization energy is the energy that enables the reaction X+ ⟶ X2+ + e− 3rd ionization energy is the energy that enables the reaction X2+ ⟶ X3+ + e− The most notable influences that determine ionization ener Document 4::: In physics, a charge carrier is a particle or quasiparticle that is free to move, carrying an electric charge, especially the particles that carry electric charges in electrical conductors. Examples are electrons, ions and holes. 
The term is used most commonly in solid state physics. In a conducting medium, an electric field can exert force on these free particles, causing a net motion of the particles through the medium; this is what constitutes an electric current. The electron and the proton are the elementary charge carriers, each carrying one elementary charge (e), of the same magnitude and opposite sign. In conductors In conducting media, particles serve to carry charge: In many metals, the charge carriers are electrons. One or two of the valence electrons from each atom are able to move about freely within the crystal structure of the metal. The free electrons are referred to as conduction electrons, and the cloud of free electrons is called a Fermi gas. Many metals have electron and hole bands. In some, the majority carriers are holes. In electrolytes, such as salt water, the charge carriers are ions, which are atoms or molecules that have gained or lost electrons so they are electrically charged. Atoms that have gained electrons so they are negatively charged are called anions, atoms that have lost electrons so they are positively charged are called cations. Cations and anions of the dissociated liquid also serve as charge carriers in melted ionic solids (see e.g. the Hall–Héroult process for an example of electrolysis of a melted ionic solid). Proton conductors are electrolytic conductors employing positive hydrogen ions as carriers. In a plasma, an electrically charged gas which is found in electric arcs through air, neon signs, and the sun and stars, the electrons and cations of ionized gas act as charge carriers. In a vacuum, free electrons can act as charge carriers. In the electronic component known as the vacuum tube (also called valve), the mobil The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Gain or loss of what causes an atom to become a negatively or positively charged ion? A. protons B. 
nucleus C. electrons D. neutrons Answer:
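A side note on the concept tested in the record above: the sign of an ion's charge follows purely from comparing proton and electron counts. A minimal Python sketch (the helper name `classify_charge` is hypothetical, not drawn from any record or excerpt) makes that bookkeeping explicit:

```python
# Hypothetical helper, for illustration only:
# an ion's net charge is its proton count minus its electron count.
def classify_charge(protons: int, electrons: int) -> str:
    """Return 'cation', 'anion', or 'neutral' for the given particle counts."""
    net = protons - electrons          # each proton is +1, each electron is -1
    if net > 0:
        return "cation"                # fewer electrons than protons: positive ion
    if net < 0:
        return "anion"                 # more electrons than protons: negative ion
    return "neutral"

# Sodium (11 protons) that has lost one electron is the Na+ cation:
print(classify_charge(11, 10))   # prints: cation
# Chlorine (17 protons) that has gained one electron is the Cl- anion:
print(classify_charge(17, 18))   # prints: anion
```

This matches the definitions in Document 0 and Document 1 above, and shows why gaining or losing electrons (choice C) is what turns a neutral atom into an ion.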
sciq-4931
multiple_choice
The last two stages of aerobic respiration require what?
[ "carbon", "oxygen", "water", "sulfur" ]
B
Relevant Documents: Document 0::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. 
A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 1::: Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices". This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions. Topic outline The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area: The course is based on and tests six skills, called scientific practices which include: In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions. Exam Students are allowed to use a four-function, scientific, or graphing calculator. The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. 
Both sections are worth 50% of the score. Score distribution Commonly used textbooks Biology, AP Edition by Sylvia Mader (2012, hardcover ) Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, ) Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson ) See also Glossary of biology A.P Bio (TV Show) Document 2::: Cellular respiration is the process by which biological fuels are oxidized in the presence of an inorganic electron acceptor, such as oxygen, to drive the bulk production of adenosine triphosphate (ATP), which contains energy. Cellular respiration may be described as a set of metabolic reactions and processes that take place in the cells of organisms to convert chemical energy from nutrients into ATP, and then release waste products. Cellular respiration is a vital process that happens in the cells of living organisms, including humans, plants, and animals. It's how cells produce energy to power all the activities necessary for life. The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, producing large amounts of energy (ATP). Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it is an unusual one because of the slow, controlled release of energy from the series of reactions. Nutrients that are commonly used by animal and plant cells in respiration include sugar, amino acids and fatty acids, and the most common oxidizing agent is molecular oxygen (O2). 
The chemical energy stored in ATP (the bond of its third phosphate group to the rest of the molecule can be broken allowing more stable products to form, thereby releasing energy for use by the cell) can then be used to drive processes requiring energy, including biosynthesis, locomotion or transportation of molecules across cell membranes. Aerobic respiration Aerobic respiration requires oxygen (O2) in order to create ATP. Although carbohydrates, fats and proteins are consumed as reactants, aerobic respiration is the preferred method of pyruvate production in glycolysis, and requires that pyruvate be transported into the mitochondria in order to be fully oxidized by the c
Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 4::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, or (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The last two stages of aerobic respiration require what? A. carbon B. oxygen C. water D. sulfur Answer:
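The scoring rule described for the old SAT Subject Tests (+1 per correct answer, -1/4 per incorrect answer, 0 per blank) can be sketched as a small function; the function name and the example answer counts are illustrative, not from the College Board materials.

```python
def raw_score(correct: int, incorrect: int, blank: int) -> float:
    """Raw score under the former SAT Subject Test rule:
    +1 per correct answer, -1/4 per incorrect answer, 0 per blank.
    (Hypothetical helper for illustration; blanks are listed only
    to show that they contribute nothing.)"""
    return correct * 1.0 - incorrect * 0.25 + blank * 0.0

# e.g. 60 correct, 12 incorrect, 8 blank out of 80 questions:
# raw_score(60, 12, 8) -> 57.0
```

The raw score was then converted to the 200-800 reported scale, a mapping that varied by test form and is not reproduced here.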
sciq-1621
multiple_choice
When a warm air mass becomes trapped between two cold air masses, what type of front occurs?
[ "storm front", "occluded front", "obscured front", "stationary front" ]
B
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, or (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: In physics and fluid mechanics, a boundary layer is the thin layer of fluid in the immediate vicinity of a bounding surface formed by the fluid flowing along the surface. The fluid's interaction with the wall induces a no-slip boundary condition (zero velocity at the wall). The flow velocity then monotonically increases above the surface until it returns to the bulk flow velocity. The thin layer consisting of fluid whose velocity has not yet returned to the bulk flow velocity is called the velocity boundary layer. The air next to a human is heated resulting in gravity-induced convective airflow, airflow which results in both a velocity and thermal boundary layer. A breeze disrupts the boundary layer, and hair and clothing protect it, making the human feel cooler or warmer. On an aircraft wing, the velocity boundary layer is the part of the flow close to the wing, where viscous forces distort the surrounding non-viscous flow. In the Earth's atmosphere, the atmospheric boundary layer is the air layer (~ 1 km) near the ground. It is affected by the surface; day-night heat flows caused by the sun heating the ground, moisture, or momentum transfer to or from the surface. Types of boundary layer Laminar boundary layers can be loosely classified according to their structure and the circumstances under which they are created. The thin shear layer which develops on an oscillating body is an example of a Stokes boundary layer, while the Blasius boundary layer refers to the well-known similarity solution near an attached flat plate held in an oncoming unidirectional flow and Falkner–Skan boundary layer, a generalization of Blasius profile. When a fluid rotates and viscous forces are balanced by the Coriolis effect (rather than convective inertia), an Ekman layer forms. 
In the theory of heat transfer, a thermal boundary layer occurs. A surface can have multiple types of boundary layer simultaneously. The viscous nature of airflow reduces the local velocities on a surfac Document 2::: Stable stratification of fluids occurs when each layer is less dense than the one below it. Unstable stratification is when each layer is denser than the one below it. Buoyancy forces tend to preserve stable stratification; the higher layers float on the lower ones. In unstable stratification, on the other hand, buoyancy forces cause convection. The less-dense layers rise though the denser layers above, and the denser layers sink though the less-dense layers below. Stratifications can become more or less stable if layers change density. The processes involved are important in many science and engineering fields. Destablization and mixing Stable stratifications can become unstable if layers change density. This can happen due to outside influences (for instance, if water evaporates from a freshwater lens, making it saltier and denser, or if a pot or layered beverage is heated from below, making the bottom layer less dense). However, it can also happen due to internal diffusion of heat (the warmer layer slowly heats the adjacent cooler one) or other physical properties. This often causes mixing at the interface, creating new diffusive layers (see photo of coffee and milk). Sometimes, two physical properties diffuse between layers simultaneously; salt and temperature, for instance. This may form diffusive layers or even salt fingering, when the surfaces of the diffusive layers become so wavy that there are "fingers" of layers reaching up and down. Not all mixing is driven by density changes. Other physical forces may also mix stably-stratified layers. Sea spray and whitecaps (foaming whitewater on waves) are examples of water mixed into air, and air into water, respectively. In a fierce storm the air/water boundary may grow indistinct. 
Some of these wind waves are Kelvin-Helmholtz waves. Depending on the size of the velocity difference and the size of the density contrast between the layers, Kelvin-Helmholtz waves can look different. For instance, between two l Document 3::: The Rankine–Hugoniot conditions, also referred to as Rankine–Hugoniot jump conditions or Rankine–Hugoniot relations, describe the relationship between the states on both sides of a shock wave or a combustion wave (deflagration or detonation) in a one-dimensional flow in fluids or a one-dimensional deformation in solids. They are named in recognition of the work carried out by Scottish engineer and physicist William John Macquorn Rankine and French engineer Pierre Henri Hugoniot. The basic idea of the jump conditions is to consider what happens to a fluid when it undergoes a rapid change. Consider, for example, driving a piston into a tube filled with non-reacting gas. A disturbance is propagated through the fluid somewhat faster than the speed of sound. Because the disturbance propagates supersonically, it is a shock wave, and the fluid downstream of the shock has no advance information of it. In a frame of reference moving with the wave, atoms or molecules in front of the wave slam into the wave supersonically. On a microscopic level, they undergo collisions on the scale of the mean free path length until they come to rest in the post-shock flow (but moving in the frame of reference of the wave or of the tube). The bulk transfer of kinetic energy heats the post-shock flow. Because the mean free path length is assumed to be negligible in comparison to all other length scales in a hydrodynamic treatment, the shock front is essentially a hydrodynamic discontinuity. The jump conditions then establish the transition between the pre- and post-shock flow, based solely upon the conservation of mass, momentum, and energy. The conditions are correct even though the shock actually has a positive thickness. 
This non-reacting example of a shock wave also generalizes to reacting flows, where a combustion front (either a detonation or a deflagration) can be modeled as a discontinuity in a first approximation. Governing Equations In a coordinate system that is moving with t Document 4::: In meteorology, convective available potential energy (commonly abbreviated as CAPE), is the integrated amount of work that the upward (positive) buoyancy force would perform on a given mass of air (called an air parcel) if it rose vertically through the entire atmosphere. Positive CAPE will cause the air parcel to rise, while negative CAPE will cause the air parcel to sink. Nonzero CAPE is an indicator of atmospheric instability in any given atmospheric sounding, a necessary condition for the development of cumulus and cumulonimbus clouds with attendant severe weather hazards. Mechanics CAPE exists within the conditionally unstable layer of the troposphere, the free convective layer (FCL), where an ascending air parcel is warmer than the ambient air. CAPE is measured in joules per kilogram of air (J/kg). Any value greater than 0 J/kg indicates instability and an increasing possibility of thunderstorms and hail. Generic CAPE is calculated by integrating vertically the local buoyancy of a parcel from the level of free convection (LFC) to the equilibrium level (EL): CAPE = \int_{z_f}^{z_n} g \frac{T_{v,\mathrm{parcel}} - T_{v,\mathrm{env}}}{T_{v,\mathrm{env}}} \, dz, where z_f is the height of the level of free convection and z_n is the height of the equilibrium level (neutral buoyancy), where T_{v,parcel} is the virtual temperature of the specific parcel, where T_{v,env} is the virtual temperature of the environment (note that temperatures must be in the Kelvin scale), and where g is the acceleration due to gravity. This integral is the work done by the buoyant force minus the work done against gravity, hence it's the excess energy that can become kinetic energy. 
CAPE for a given region is most often calculated from a thermodynamic or sounding diagram (e.g., a Skew-T log-P diagram) using air temperature and dew point data usually measured by a weather balloon. CAPE is effectively positive buoyancy, expressed B+ or simply B; the opposite of convective inhibition (CIN), which is expressed as B-, and can be thought of as "negative CAPE". As with CIN, CAPE is usually expressed in J/kg bu The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. When a warm air mass becomes trapped between two cold air masses, what type of front occurs? A. storm front B. occluded front C. obscured front D. stationary front Answer:
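The CAPE integral described above can be approximated numerically from a sounding. The sketch below assumes a height-ordered profile of virtual temperatures spanning the LFC-to-EL layer and uses simple trapezoidal integration; the function and variable names are illustrative, not from any meteorological library.

```python
def cape(z, tv_parcel, tv_env, g=9.81):
    """Approximate CAPE (J/kg) by trapezoidal integration of the
    buoyancy g * (Tv_parcel - Tv_env) / Tv_env over heights z (m).
    Virtual temperatures are in kelvin; only positively buoyant
    segments are accumulated (negative area corresponds to CIN)."""
    total = 0.0
    for i in range(len(z) - 1):
        b0 = g * (tv_parcel[i] - tv_env[i]) / tv_env[i]
        b1 = g * (tv_parcel[i + 1] - tv_env[i + 1]) / tv_env[i + 1]
        seg = 0.5 * (b0 + b1) * (z[i + 1] - z[i])
        if seg > 0:
            total += seg
    return total
```

Operational calculations additionally lift the parcel (dry then moist adiabatically) to generate the tv_parcel profile from surface temperature and dew point; here the lifted-parcel profile is taken as given.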
sciq-2802
multiple_choice
What do you call the substance in a cooling system that has a low boiling point and changes between liquid and gaseous states?
[ "byproduct", "refrigerant", "emission", "coolant" ]
B
Relevant Documents: Document 0::: Boiling is the rapid phase transition from liquid to gas or vapor; the reverse of boiling is condensation. Boiling occurs when a liquid is heated to its boiling point, so that the vapour pressure of the liquid is equal to the pressure exerted on the liquid by the surrounding atmosphere. Boiling and evaporation are the two main forms of liquid vapourization. There are two main types of boiling: nucleate boiling where small bubbles of vapour form at discrete points, and critical heat flux boiling where the boiling surface is heated above a certain critical temperature and a film of vapour forms on the surface. Transition boiling is an intermediate, unstable form of boiling with elements of both types. The boiling point of water is 100 °C or 212 °F but is lower with the decreased atmospheric pressure found at higher altitudes. Boiling water is used as a method of making it potable by killing microbes and viruses that may be present. The sensitivity of different micro-organisms to heat varies, but if water is held at for one minute, most micro-organisms and viruses are inactivated. Ten minutes at a temperature of 70 °C (158 °F) is also sufficient to inactivate most bacteria. Boiling water is also used in several cooking methods including boiling, steaming, and poaching. Types Free convection The lowest heat flux seen in boiling is only sufficient to cause natural convection, where the warmer fluid rises due to its slightly lower density. This condition occurs only when the superheat is very low, meaning that the hot surface near the fluid is nearly the same temperature as the boiling point. Nucleate Nucleate boiling is characterised by the growth of bubbles or pops on a heated surface (heterogeneous nucleation), which rises from discrete points on a surface, whose temperature is only slightly above the temperature of the liquid. In general, the number of nucleation sites is increased by an increasing surface temperature. 
An irregular surface of the boiling Document 1::: Cold is the presence of low temperature, especially in the atmosphere. In common usage, cold is often a subjective perception. A lower bound to temperature is absolute zero, defined as 0.00K on the Kelvin scale, an absolute thermodynamic temperature scale. This corresponds to on the Celsius scale, on the Fahrenheit scale, and on the Rankine scale. Since temperature relates to the thermal energy held by an object or a sample of matter, which is the kinetic energy of the random motion of the particle constituents of matter, an object will have less thermal energy when it is colder and more when it is hotter. If it were possible to cool a system to absolute zero, all motion of the particles in a sample of matter would cease and they would be at complete rest in the classical sense. The object could be described as having zero thermal energy. Microscopically in the description of quantum mechanics, however, matter still has zero-point energy even at absolute zero, because of the uncertainty principle. Cooling Cooling refers to the process of becoming cold, or lowering in temperature. This could be accomplished by removing heat from a system, or exposing the system to an environment with a lower temperature. Coolants are fluids used to cool objects, prevent freezing and prevent erosion in machines. Air cooling is the process of cooling an object by exposing it to air. This will only work if the air is at a lower temperature than the object, and the process can be enhanced by increasing the surface area, increasing the coolant flow rate, or decreasing the mass of the object. Another common method of cooling is exposing an object to ice, dry ice, or liquid nitrogen. This works by conduction; the heat is transferred from the relatively warm object to the relatively cold coolant. Laser cooling and magnetic evaporative cooling are techniques used to reach very low temperatures. 
History Early history In ancient times, ice was not adopted for food preservation but u Document 2::: This is a list of gases at standard conditions, which means substances that boil or sublime at or below and 1 atm pressure and are reasonably stable. List This list is sorted by boiling point of gases in ascending order, but can be sorted on different values. "sub" and "triple" refer to the sublimation point and the triple point, which are given in the case of a substance that sublimes at 1 atm; "dec" refers to decomposition. "~" means approximately. Known as gas The following list has substances known to be gases, but with an unknown boiling point. Fluoroamine Trifluoromethyl trifluoroethyl trioxide CF3OOOCF2CF3 boils between 10 and 20° Bis-trifluoromethyl carbonate boils between −10 and +10° possibly +12, freezing −60° Difluorodioxirane boils between −80 and −90°. Difluoroaminosulfinyl fluoride F2NS(O)F is a gas but decomposes over several hours Trifluoromethylsulfinyl chloride CF3S(O)Cl Nitrosyl cyanide ?−20° blue-green gas 4343-68-4 Thiazyl chloride NSCl greenish yellow gas; trimerises. Document 3::: A cascade refrigeration cycle is a multi-stage thermodynamic cycle. An example two-stage process is shown at right. (Bottom on mobile) The cascade cycle is often employed for devices such as ULT freezers. In a cascade refrigeration system, two or more vapor-compression cycles with different refrigerants are used. The evaporation-condensation temperatures of each cycle are sequentially lower with some overlap to cover the total temperature drop desired, with refrigerants selected to work efficiently in the temperature range they cover. The low temperature system removes heat from the space to be cooled using an evaporator, and transfers it to a heat exchanger that is cooled by the evaporation of the refrigerant of the high temperature system. Alternatively, a liquid to liquid or similar heat exchanger may be used instead. 
The high temperature system transfers heat to a conventional condenser that carries the entire heat output of the system and may be passively, fan, or water-cooled. Cascade cycles may be separated by either being sealed in separated loops, or in what is referred to as an "auto-cascade" where the gases are compressed as a mixture but separated as one refrigerant condenses into a liquid while the other continues as a gas through the rest of the cycle. Although an auto-cascade introduces several constraints on the design and operating conditions of the system that may reduce the efficiency it is often used in small systems due to only requiring a single compressor, or in cryogenic systems as it reduces the need for high efficiency heat exchangers to prevent the compressors leaking heat into the cryogenic cycles. Both types can be used in the same system, generally with the separate cycles being the first stage(s) and the auto-cascade being the last stage. Peltier coolers may also be cascaded into a multi-stage system to achieve lower temperatures. Here the hot side of the first Peltier cooler is cooled by the cold side of the second Peltier cooler, Document 4::: Chilled water is a commodity often used to cool a building's air and equipment, especially in situations where many individual rooms must be controlled separately, such as a hotel. The chilled water can be supplied by a vendor, such as a public utility, or created at the location of the building that will use it, which has been the norm. Use Chilled water cooling is not very different from typical residential air conditioning where water is pumped from the chiller to the air handler unit to cool the air. Regardless of who provides it, the chilled water (between 4 and 7 °C (39-45 °F)) is pumped through an air handler, which captures the heat from the air, then disperses the air throughout the area to be cooled. 
Site generated As part of a chilled water system, the condenser water absorbs heat from the refrigerant in the condenser barrel of the water chiller and is then sent via return lines to a cooling tower, which is a heat exchange device used to transfer waste heat to the atmosphere. The extent to which the cooling tower decreases the temperature depends upon the outside temperature, the relative humidity and the atmospheric pressure. The water in the chilled water circuit will be lowered to the Wet-bulb temperature or dry-bulb temperature before proceeding to the water chiller, where it is cooled to between 4 and 7 °C and pumped to the air handler, where the cycle is repeated. The equipment required includes chillers, cooling towers, pumps and electrical control equipment. The initial capital outlay for these is substantial and maintenance costs can fluctuate. Adequate space must be included in building design for the physical plant and access to equipment. Utility generated The chilled water, having absorbed heat from the air, is sent via return lines back to the utility facility, where the process described in the previous section occurs. Utility generated chilled water eliminates the need for chillers and cooling towers at the property, reduces capital The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do you call the substance in a cooling system that has a low boiling point and changes between liquid and gaseous states? A. byproduct B. refrigerant C. emission D. coolant Answer:
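As a rough companion to the refrigeration-cycle excerpts above (an illustration, not taken from the source documents), the Carnot limit gives the best possible coefficient of performance for moving heat between the evaporator and condenser temperatures:

```python
def carnot_cop_cooling(t_cold_k: float, t_hot_k: float) -> float:
    """Upper (Carnot) bound on refrigeration COP between two
    reservoir temperatures in kelvin; real vapor-compression
    cycles fall well below this ideal limit."""
    if t_hot_k <= t_cold_k:
        raise ValueError("t_hot_k must exceed t_cold_k")
    return t_cold_k / (t_hot_k - t_cold_k)

# Chilled-water range from the text: ~5 °C evaporator, ~35 °C condenser
# carnot_cop_cooling(278.15, 308.15) ≈ 9.27
```

Because the bound shrinks as the temperature lift grows, cascade systems split a large drop into stages, with each refrigerant working over a lift where it remains efficient.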
sciq-5472
multiple_choice
Blood from what organs enters the left atrium of the heart?
[ "Lymph nodes", "intestines", "lungs", "Brain" ]
C
Relevant Documents: Document 0::: The pulmonary circulation is a division of the circulatory system in all vertebrates. The circuit begins with deoxygenated blood returned from the body to the right atrium of the heart where it is pumped out from the right ventricle to the lungs. In the lungs the blood is oxygenated and returned to the left atrium to complete the circuit. The other division of the circulatory system is the systemic circulation that begins with receiving the oxygenated blood from the pulmonary circulation into the left atrium. From the atrium the oxygenated blood enters the left ventricle where it is pumped out to the rest of the body, returning as deoxygenated blood back to the pulmonary circulation. The blood vessels of the pulmonary circulation are the pulmonary arteries and the pulmonary veins. A separate circulatory circuit known as the bronchial circulation supplies oxygenated blood to the tissue of the larger airways of the lung. Structure De-oxygenated blood leaves the heart, goes to the lungs, and then enters back into the heart. De-oxygenated blood leaves through the right ventricle through the pulmonary artery. From the right atrium, the blood is pumped through the tricuspid valve (or right atrioventricular valve) into the right ventricle. Blood is then pumped from the right ventricle through the pulmonary valve and into the pulmonary artery. Lungs The pulmonary arteries carry deoxygenated blood to the lungs, where carbon dioxide is released and oxygen is picked up during respiration. Arteries are further divided into very fine capillaries which are extremely thin-walled. The pulmonary veins return oxygenated blood to the left atrium of the heart. Veins Oxygenated blood leaves the lungs through pulmonary veins, which return it to the left part of the heart, completing the pulmonary cycle. This blood then enters the left atrium, which pumps it through the mitral valve into the left ventricle. 
From the left ventricle, the blood passes through the aortic valve to the Document 1::: The tubular heart or primitive heart tube is the earliest stage of heart development. From the inflow to the outflow, it consists of sinus venosus, primitive atrium, the primitive ventricle, the bulbus cordis, and truncus arteriosus. It forms primarily from splanchnic mesoderm. More specifically, they form from endocardial tubes, starting at day 21. Document 2::: The blood circulatory system is a system of organs that includes the heart, blood vessels, and blood which is circulated throughout the entire body of a human or other vertebrate. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek kardia meaning heart, and from Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with the circulatory system. The network of blood vessels are the great vessels of the heart including large elastic arteries, and large veins; other arteries, smaller arterioles, capillaries that join with venules (small veins), and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Some invertebrates such as arthropods have an open circulatory system. Diploblasts such as sponges, and comb jellies lack a circulatory system. Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. 
These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH. In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the bloo Document 3::: The thoracic aorta is a part of the aorta located in the thorax. It is a continuation of the aortic arch. It is located within the posterior mediastinal cavity, but frequently bulges into the left pleural cavity. The descending thoracic aorta begins at the lower border of the fourth thoracic vertebra and ends in front of the lower border of the twelfth thoracic vertebra, at the aortic hiatus in the diaphragm where it becomes the abdominal aorta. At its commencement, it is situated on the left of the vertebral column; it approaches the median line as it descends; and, at its termination, lies directly in front of the column. The thoracic aorta has a curved shape that faces forward, and has small branches. It has a radius of approximately 1.16 cm. Structure The thoracic aorta is part of the descending aorta, which has different parts named according to their structure or location. The thoracic aorta is a continuation of the descending aorta and becomes the abdominal aorta when it passes through the diaphragm. The initial part of the aorta, the ascending aorta, rises out of the left ventricle, from which it is separated by the aortic valve. The two coronary arteries of the heart arise from the aortic root, just above the cusps of the aortic valve. The aorta then arches back over the right pulmonary artery. 
Three vessels come out of the aortic arch: the brachiocephalic artery, the left common carotid artery, and the left subclavian artery. These vessels supply blood to the head, neck, thorax and upper limbs. Behind the descending thoracic aorta is the vertebral column and the hemiazygos vein. To the right is the azygos veins and thoracic duct, and to the left is the left pleura and lung. In front of the thoracic aorta lies the root of the left lung, the pericardium, the esophagus, and the diaphragm. The esophagus, which is covered by a nerve plexus lies to the right of the descending thoracic aorta. Lower, the esophagus passes in front of the aorta, and ultimately Document 4::: The right border of the heart (right margin of heart) is a long border on the surface of the heart, and is formed by the right atrium. The atrial portion is rounded and almost vertical; it is situated behind the third, fourth, and fifth right costal cartilages about 1.25 cm. from the margin of the sternum. The ventricular portion, thin and sharp, is named the acute margin; it is nearly horizontal, and extends from the sternal end of the sixth right coastal cartilage to the apex of the heart. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Blood from what organs enters the left atrium of the heart? A. Lymph nodes B. intestines C. lungs D. Brain Answer:
ai2_arc-711
multiple_choice
Which of the following best describes a change in Earth's atmosphere made by early photosynthetic life?
[ "increased level of oxygen", "increased level of carbon dioxide", "decreased ability to support life", "decreased ability to transmit light" ]
A
Relevant Documents: Document 0::: Carbon dioxide is a chemical compound with the chemical formula CO2. It is made up of molecules that each have one carbon atom covalently double bonded to two oxygen atoms. It is found in the gas state at room temperature, and as the source of available carbon in the carbon cycle, atmospheric CO2 is the primary carbon source for life on Earth. In the air, carbon dioxide is transparent to visible light but absorbs infrared radiation, acting as a greenhouse gas. Carbon dioxide is soluble in water and is found in groundwater, lakes, ice caps, and seawater. When carbon dioxide dissolves in water, it forms carbonate and mainly bicarbonate (HCO3−), which causes ocean acidification as atmospheric levels increase. It is a trace gas in Earth's atmosphere at 421 parts per million (ppm), or about 0.04% (as of May 2022), having risen from pre-industrial levels of 280 ppm or about 0.025%. Burning fossil fuels is the primary cause of these increased concentrations and also the primary cause of climate change. Its concentration in Earth's pre-industrial atmosphere since late in the Precambrian was regulated by organisms and geological phenomena. Plants, algae and cyanobacteria use energy from sunlight to synthesize carbohydrates from carbon dioxide and water in a process called photosynthesis, which produces oxygen as a waste product. In turn, oxygen is consumed and CO2 is released as waste by all aerobic organisms when they metabolize organic compounds to produce energy by respiration. CO2 is released from organic materials when they decay or combust, such as in forest fires. Since plants require CO2 for photosynthesis, and humans and animals depend on plants for food, CO2 is necessary for the survival of life on earth. Carbon dioxide is 53% more dense than dry air, but is long lived and thoroughly mixes in the atmosphere. About half of excess CO2 emissions to the atmosphere are absorbed by land and ocean carbon sinks. 
These sinks can become saturated and are volatile, as decay and wildfires result i Document 1::: Plant ecology is a subdiscipline of ecology that studies the distribution and abundance of plants, the effects of environmental factors upon the abundance of plants, and the interactions among plants and between plants and other organisms. Examples of these are the distribution of temperate deciduous forests in North America, the effects of drought or flooding upon plant survival, and competition among desert plants for water, or effects of herds of grazing animals upon the composition of grasslands. A global overview of the Earth's major vegetation types is provided by O.W. Archibold. He recognizes 11 major vegetation types: tropical forests, tropical savannas, arid regions (deserts), Mediterranean ecosystems, temperate forest ecosystems, temperate grasslands, coniferous forests, tundra (both polar and high mountain), terrestrial wetlands, freshwater ecosystems and coastal/marine systems. This breadth of topics shows the complexity of plant ecology, since it includes plants from floating single-celled algae up to large canopy-forming trees. One feature that defines plants is photosynthesis. Photosynthesis is the process of chemical reactions that create glucose and oxygen, which is vital for plant life. One of the most important aspects of plant ecology is the role plants have played in creating the oxygenated atmosphere of earth, an event that occurred some 2 billion years ago. It can be dated by the deposition of banded iron formations, distinctive sedimentary rocks with large amounts of iron oxide. At the same time, plants began removing carbon dioxide from the atmosphere, thereby initiating the process of controlling Earth's climate. A long term trend of the Earth has been toward increasing oxygen and decreasing carbon dioxide, and many other events in the Earth's history, like the first movement of life onto land, are likely tied to this sequence of events. One of the early classic books on plant ecology was written by J.E. Weaver and F.E. Clements. It
One of the early classic books on plant ecology was written by J.E. Weaver and F.E. Clements. It Document 2::: The evolution of photosynthesis refers to the origin and subsequent evolution of photosynthesis, the process by which light energy is used to assemble sugars from carbon dioxide and a hydrogen and electron source such as water. The process of photosynthesis was discovered by Jan Ingenhousz, a Dutch-born British physician and scientist, first publishing about it in 1779. The first photosynthetic organisms probably evolved early in the evolutionary history of life and most likely used reducing agents such as hydrogen rather than water. There are three major metabolic pathways by which photosynthesis is carried out: C3 photosynthesis, C4 photosynthesis, and CAM photosynthesis. C3 photosynthesis is the oldest and most common form. A C3 plant uses the Calvin cycle for the initial steps that incorporate into organic material. A C4 plant prefaces the Calvin cycle with reactions that incorporate into four-carbon compounds. A CAM plant uses crassulacean acid metabolism, an adaptation for photosynthesis in arid conditions. C4 and CAM plants have special adaptations that save water. Origin Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed 3500 Ma. Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old, consistent with recent studies of photosynthesis. Early photosynthetic systems, such as those from green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, using various molecules as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and hydrogen sulfide as electron and hydrogen donors. Green nonsulfur bacteria used various amino and other organic acids. Purple nonsulfur bacteria used a variety of nonspecific organic and inorganic molecules. 
It is suggested that photosynthesis likely originated at low-wavelength geothermal light from acidic hydrothermal vents, Zn-tetrapyrroles w Document 3::: An atmosphere () is a layer of gas or layers of gases that envelop a planet, and is held in place by the gravity of the planetary body. A planet retains an atmosphere when the gravity is great and the temperature of the atmosphere is low. A stellar atmosphere is the outer region of a star, which includes the layers above the opaque photosphere; stars of low temperature might have outer atmospheres containing compound molecules. The atmosphere of Earth is composed of nitrogen (78%), oxygen (21%), argon (0.9%), carbon dioxide (0.04%) and trace gases. Most organisms use oxygen for respiration; lightning and bacteria perform nitrogen fixation to produce ammonia that is used to make nucleotides and amino acids; plants, algae, and cyanobacteria use carbon dioxide for photosynthesis. The layered composition of the atmosphere minimises the harmful effects of sunlight, ultraviolet radiation, solar wind, and cosmic rays to protect organisms from genetic damage. The current composition of the atmosphere of the Earth is the product of billions of years of biochemical modification of the paleoatmosphere by living organisms. Composition The initial gaseous composition of an atmosphere is determined by the chemistry and temperature of the local solar nebula from which a planet is formed, and the subsequent escape of some gases from the interior of the atmosphere proper. The original atmosphere of the planets originated from a rotating disc of gases, which collapsed onto itself and then divided into a series of spaced rings of gas and matter, which later condensed to form the planets of the Solar System. The atmospheres of the planets Venus and Mars are principally composed of carbon dioxide and nitrogen, argon and oxygen.
The composition of Earth's atmosphere is determined by the by-products of the life that it sustains. Dry air (mixture of gases) from Earth's atmosphere contains 78.08% nitrogen, 20.95% oxygen, 0.93% argon, 0.04% carbon dioxide, and traces of hydrogen, Document 4::: Soil respiration refers to the production of carbon dioxide when soil organisms respire. This includes respiration of plant roots, the rhizosphere, microbes and fauna. Soil respiration is a key ecosystem process that releases carbon from the soil in the form of CO2. CO2 is acquired by plants from the atmosphere and converted into organic compounds in the process of photosynthesis. Plants use these organic compounds to build structural components or respire them to release energy. When plant respiration occurs below-ground in the roots, it adds to soil respiration. Over time, plant structural components are consumed by heterotrophs. This heterotrophic consumption releases CO2 and when this CO2 is released by below-ground organisms, it is considered soil respiration. The amount of soil respiration that occurs in an ecosystem is controlled by several factors. The temperature, moisture, nutrient content and level of oxygen in the soil can produce extremely disparate rates of respiration. These rates of respiration can be measured in a variety of methods. Other methods can be used to separate the source components, in this case the type of photosynthetic pathway (C3/C4), of the respired plant structures. Soil respiration rates can be largely affected by human activity. This is because humans have the ability to and have been changing the various controlling factors of soil respiration for numerous years. Global climate change is composed of numerous changing factors including rising atmospheric CO2, increasing temperature and shifting precipitation patterns. All of these factors can affect the rate of global soil respiration. 
Increased nitrogen fertilization by humans also has the potential to affect rates over the entire planet. Soil respiration and its rate across ecosystems are extremely important to understand. This is because soil respiration plays a large role in global carbon cycling as well as other nutrient cycles. The respiration of plant structures releases The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which of the following best describes a change in Earth's atmosphere made by early photosynthetic life? A. increased level of oxygen B. increased level of carbon dioxide C. decreased ability to support life D. decreased ability to transmit light Answer:
sciq-7878
multiple_choice
During the winter, production of what amine involved in the sleep-wake cycle may be affected by fewer hours of sunlight?
[ "melatonin", "serotonin", "dopamine", "folate" ]
A
Relevant Documents: Document 0::: A chronotype is the behavioral manifestation of the myriad physical processes underlying circadian rhythm. A person's chronotype is the propensity for the individual to sleep at a particular time during a 24-hour period. Eveningness (delayed sleep period; most active and alert in the evening) and morningness (advanced sleep period; most active and alert in the morning) are the two extremes, with most individuals having some flexibility in the timing of their sleep period. However, across development there are changes in the propensity of the sleep period, with pre-pubescent children preferring an advanced sleep period, adolescents preferring a delayed sleep period, and many elderly preferring an advanced sleep period. The causes and regulation of chronotypes, including developmental change, individual propensity for a specific chronotype, and flexible versus fixed chronotypes, have yet to be determined. However, research is beginning to shed light on these questions, such as the relationship between age and chronotype. There are candidate genes (called CLOCK genes) that exist in most cells in the body and brain, collectively referred to as the circadian system, which regulate physiological phenomena (hormone levels, metabolic function, body temperature, cognitive faculties, and sleeping). With the exception of the most extreme and rigid chronotypes, regulation is likely due to gene-environment interactions. Important environmental cues (zeitgebers) include light, feeding, social behavior, and work and school schedules. Additional research has proposed an evolutionary link between chronotype and nighttime vigilance in ancestral societies. Humans are normally diurnal creatures, that is to say they are active in the daytime. As with most other diurnal animals, human activity-rest patterns are endogenously controlled by biological clocks with a circadian (~24-hour) period.
Chronotypes have also been investigated in other species, such as fruit flies and mice. Normal variation in chron Document 1::: The morningness–eveningness questionnaire (MEQ) is a self-assessment questionnaire developed by researchers James A. Horne and Olov Östberg in 1976. Its main purpose is to measure whether a person's circadian rhythm (biological clock) produces peak alertness in the morning, in the evening, or in between. The original study showed that the subjective time of peak alertness correlates with the time of peak body temperature; morning types (early birds) have an earlier temperature peak than evening types (night owls), with intermediate types having temperature peaks between the morning and evening chronotype groups. The MEQ is widely used in psychological and medical research and has been professionally cited more than 4,000 times. MEQ questions The standard MEQ consists of 19 multiple-choice questions, each having four or five response options. Some example questions are: Responses to the questions are combined to form a composite score that indicates the degree to which the respondent favors morning versus evening. Subsequent researchers have created shorter versions with four, five, or six questions. Related research According to a 1997 study of identical and fraternal twins, 54% of variance in morningness–eveningness was due to genetic variability, 3% was due to age, and the rest was explained by non-shared environmental influences and errors in measurement. A study in 2000 showed that both "morningness" and "eveningness" participants performed poorly in the morning on the Multidimensional Aptitude Battery (MAB) tests. It thus did not support the hypothesis that there is a reliable relationship between morningness–eveningness, time of day, and cognitive ability. A study in 2008 examined the relationship between morningness and anxiety in adults aged 40–63. 
It found a negative correlation in women, but not in men, suggesting that gender-related variables may be attributed to morningness and eveningness when looking at mood. A study in 2009 examined differences Document 2::: A circadian clock, or circadian oscillator, is a biochemical oscillator that cycles with a stable phase and is synchronized with solar time. Such a clock's in vivo period is necessarily almost exactly 24 hours (the earth's current solar day). In most living things, internally synchronized circadian clocks make it possible for the organism to anticipate daily environmental changes corresponding with the day–night cycle and adjust its biology and behavior accordingly. The term circadian derives from the Latin circa (about) dies (a day), since when taken away from external cues (such as environmental light), they do not run to exactly 24 hours. Clocks in humans in a lab in constant low light, for example, will average about 24.2 hours per day, rather than 24 hours exactly. The normal body clock oscillates with an endogenous period of exactly 24 hours, it entrains, when it receives sufficient daily corrective signals from the environment, primarily daylight and darkness. Circadian clocks are the central mechanisms that drive circadian rhythms. They consist of three major components: a central biochemical oscillator with a period of about 24 hours that keeps time; a series of input pathways to this central oscillator to allow entrainment of the clock; a series of output pathways tied to distinct phases of the oscillator that regulate overt rhythms in biochemistry, physiology, and behavior throughout an organism. The clock is reset as an organism senses environmental time cues of which the primary one is light. Circadian oscillators are ubiquitous in tissues of the body where they are synchronized by both endogenous and external signals to regulate transcriptional activity throughout the day in a tissue-specific manner. 
The circadian clock is intertwined with most cellular metabolic processes and it is affected by organism aging. The basic molecular mechanisms of the biological clock have been defined in vertebrate species, Drosophila melanogaster, plants, fungi, b Document 3::: Elizabeth Maywood is an English researcher who studies circadian rhythms and sleep in mice. Her studies are focused on the suprachiasmatic nucleus (SCN), a small region of the brain that controls circadian rhythms. Biography Elizabeth Susan Maywood was born in Leeds, England. She attained a degree in Pharmacology before going on to obtain her Ph.D. in biochemical endocrinology in London. After receiving her Ph.D., in 1988 she joined Michael Hastings’ group as a postdoc in the Department of Anatomy at the University of Cambridge (now part of the Physiology, Development and Neuroscience (PDN) Department) to study seasonal biology in Syrian hamsters. In 2001 she moved with Hastings to the MRC Laboratory of Molecular Biology in Cambridge, where he had set up a new research group to study the molecular neurobiology of circadian rhythms. Since then, she has moved the focus of her study to circadian rhythms and sleep. Research contributions Early research in the field of chronobiology utilizing lesion experiments has suggested that the suprachiasmatic nucleus (SCN) serves as the master circadian clock of the mammalian brain and is entrained through retinal inputs. More recently, research on the SCN has focused on the function of individual neuropeptides and their complex interactions in the scope of the SCN circuitry. Research into the role of vasoactive intestinal polypeptide (VIP), gastrin-releasing peptide (GRP), arginine vasopressin (AVP), and GABA has started to paint a picture of the hierarchy of neuropeptides in the maintenance of circadian coherence in the SCN. 
Maywood's research investigates the complex interactions of various neuropeptides and the role of events at the membrane in feedback loops in the SCN. Furthermore, Maywood's research also seeks to understand how different parts of the SCN coordinate rhythms and more broadly understand the interaction of the SCN with sleep. Studies of CRY1/CRY2 in the Suprachiasmatic Nucleus In one experiment, Maywood Document 4::: AANAT is a gene that encodes an enzyme aralkylamine N-acetyltransferase. It is the key regulator of day-night cycle (circadian rhythm). It is found in all animals. In humans it is present on chromosome 17, in chimpanzees chromosome 17, in mouse and sheep chromosome 11, in rat chromosome 10, and in chicken chromosome 18. Function The protein encoded by this gene belongs to the acetyltransferase superfamily. It is the penultimate enzyme in melatonin synthesis and controls the night/day rhythm in melatonin production in the vertebrate pineal gland. Melatonin is essential for the function of the circadian clock that influences activity and sleep. This enzyme is regulated by cAMP-dependent phosphorylation that promotes its interaction with 14-3-3 proteins and thus protects the enzyme against proteasomal degradation. Clinical significance This gene may contribute to numerous genetic diseases such as delayed sleep phase syndrome. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. During the winter, production of what amine involved in the sleep-wake cycle may be affected by less sunlight hours? A. melatonin B. serotonin C. dopamine D. folate Answer:
sciq-2415
multiple_choice
What is the period from Earth's origin to the beginning of the Phanerozoic Eon?
[ "precambrian", "Paleolithic", "Cenozoic", "anatolian" ]
A
Relevant Documents: Document 0::: Timeline Paleontology Paleontology timelines Document 1::: The Paleophytic is an era of time preceding the Mesophytic and the Cenophytic and succeeding the Proterophytic. The phytic eras are based on the evolution of plants, and differ from the "-zoic" eras, which are based on animal life. The Paleophytic begins in the late Ordovician Period with the rise of the vascular plants and continues until the Kingurian, when advanced gymnosperms took over the Earth's floral niches. Document 2::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St.
Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 3::: Chronology (from Latin chronologia, from Ancient Greek , chrónos, "time"; and , -logia) is the science of arranging events in their order of occurrence in time. Consider, for example, the use of a timeline or sequence of events. It is also "the determination of the actual temporal sequence of past events". Chronology is a part of periodization. It is also a part of the discipline of history including earth history, the earth sciences, and study of the geologic time scale. Related fields Chronology is the science of locating historical events in time. It relies upon chronometry, which is also known as timekeeping, and historiography, which examines the writing of history and the use of historical methods. Radiocarbon dating estimates the age of formerly living things by measuring the proportion of carbon-14 isotope in their carbon content. Dendrochronology estimates the age of trees by correlation of the various growth rings in their wood to known year-by-year reference sequences in the region to reflect year-to-year climatic variation. Dendrochronology is used in turn as a calibration reference for radiocarbon dating curves. Calendar and era The familiar terms calendar and era (within the meaning of a coherent system of numbered calendar years) concern two complementary fundamental concepts of chronology. 
For example, for eight centuries the calendar belonging to the Christian era, an era brought into use in the 8th century by Bede, was the Julian calendar, but after the year 1582 it was the Gregorian calendar. Dionysius Exiguus (about the year 500) was the founder of that era, which is nowadays the most widespread dating system on Earth. An epoch is the date (year usually) when an era begins. Ab Urbe condita era Ab Urbe condita is Latin for "from the founding of the City (Rome)", traditionally set in 753 BC. It was used to identify the Roman year by a few Roman historians. Modern historians use it much more frequently than the Romans themselves did; the
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the period from Earth's origin to the beginning of the Phanerozoic Eon? A. precambrian B. Paleolithic C. Cenozoic D. anatolian Answer:
sciq-792
multiple_choice
What occurs when an unstable nucleus emits a beta particle and energy?
[ "alpha decay", "methane decay", "nucleus decay", "beta decay" ]
D
Relevant Documents: Document 0::: In nuclear physics, beta decay (β-decay) is a type of radioactive decay in which an atomic nucleus emits a beta particle (fast energetic electron or positron), transforming into an isobar of that nuclide. For example, beta decay of a neutron transforms it into a proton by the emission of an electron accompanied by an antineutrino; or, conversely, a proton is converted into a neutron by the emission of a positron with a neutrino in so-called positron emission. Neither the beta particle nor its associated (anti-)neutrino exist within the nucleus prior to beta decay, but are created in the decay process. By this process, unstable atoms obtain a more stable ratio of protons to neutrons. The probability of a nuclide decaying due to beta and other forms of decay is determined by its nuclear binding energy. The binding energies of all existing nuclides form what is called the nuclear band or valley of stability. For either electron or positron emission to be energetically possible, the energy release (see below) or Q value must be positive. Beta decay is a consequence of the weak force, which is characterized by relatively lengthy decay times. Nucleons are composed of up quarks and down quarks, and the weak force allows a quark to change its flavour by emission of a W boson leading to creation of an electron/antineutrino or positron/neutrino pair. For example, a neutron, composed of two down quarks and an up quark, decays to a proton composed of a down quark and two up quarks. Electron capture is sometimes included as a type of beta decay, because the basic nuclear process, mediated by the weak force, is the same. In electron capture, an inner atomic electron is captured by a proton in the nucleus, transforming it into a neutron, and an electron neutrino is released. Description The two types of beta decay are known as beta minus and beta plus.
In beta minus (β−) decay, a neutron is converted to a proton, and the process creates an electron and an electron antineutrino; Document 1::: In nuclear physics, double beta decay is a type of radioactive decay in which two neutrons are simultaneously transformed into two protons, or vice versa, inside an atomic nucleus. As in single beta decay, this process allows the atom to move closer to the optimal ratio of protons and neutrons. As a result of this transformation, the nucleus emits two detectable beta particles, which are electrons or positrons. The literature distinguishes between two types of double beta decay: ordinary double beta decay and neutrinoless double beta decay. In ordinary double beta decay, which has been observed in several isotopes, two electrons and two electron antineutrinos are emitted from the decaying nucleus. In neutrinoless double beta decay, a hypothesized process that has never been observed, only electrons would be emitted. History The idea of double beta decay was first proposed by Maria Goeppert Mayer in 1935. In 1937, Ettore Majorana demonstrated that all results of beta decay theory remain unchanged if the neutrino were its own antiparticle, now known as a Majorana particle. In 1939, Wendell H. Furry proposed that if neutrinos are Majorana particles, then double beta decay can proceed without the emission of any neutrinos, via the process now called neutrinoless double beta decay. It is not yet known whether the neutrino is a Majorana particle, and, relatedly, whether neutrinoless double beta decay exists in nature. In the 1930s–1940s, parity violation in weak interactions was not known, and consequently calculations showed that neutrinoless double beta decay should be much more likely to occur than ordinary double beta decay, if neutrinos were Majorana particles. The predicted half-lives were on the order of ~ years. Efforts to observe the process in the laboratory date back to at least 1948 when E.L.
Fireman made the first attempt to directly measure the half-life of the isotope with a Geiger counter. Radiometric experiments through about 1960 produced negative results or Document 2::: When embedded in an atomic nucleus, neutrons are (usually) stable particles. Outside the nucleus, free neutrons are unstable and have a mean lifetime of about 880 seconds (a little under 15 minutes). Therefore, the half-life for this process (which differs from the mean lifetime by a factor of ln 2) is about 610 seconds (a little over 10 minutes). (An article published in October 2021 arrives at a slightly shorter value for the mean lifetime.) The beta decay of the neutron described in this article can be notated at four slightly different levels of detail, as shown in four layers of Feynman diagrams in a section below. The hard-to-observe W− quickly decays into an electron and its matching antineutrino. n → p + e− + ν̄e The subatomic reaction shown immediately above depicts the process as it was first understood, in the first half of the 20th century. The W− boson vanished so quickly that it was not detected until much later. Later, beta decay was understood to occur by the emission of a weak boson (W−), sometimes called a charged weak current. Beta decay specifically involves the emission of a W− boson from one of the down quarks hidden within the neutron, thereby converting the down quark into an up quark and consequently the neutron into a proton. The following diagram gives a summary sketch of the beta decay process according to the present level of understanding. For diagrams at several levels of detail, see § Decay process, below. Energy budget For the free neutron, the decay energy for this process (based on the rest masses of the neutron, proton and electron) is 0.782 MeV. That is the difference between the rest mass of the neutron and the sum of the rest masses of the products. That difference has to be carried away as kinetic energy. The maximal energy of the beta decay electron (in the process wherein the neutrino receives a vanishingly small amount of kinetic energy) has been measured at 0.782 MeV.
The latter number is not well-enough measured to determine the comparatively tiny rest mass of the neutrino (which must in theory be subtracted from the maximal electron kinetic ene Document 3::: Reaction products This sequence of reactions can be understood by thinking of the two interacting carbon nuclei as coming together to form an excited state of the 24Mg nucleus, which then decays in one of the five ways listed above. The first two reactions are strongly exothermic, as indicated by the large positive energies released, and ar Document 4::: Delayed nuclear radiation is a form of nuclear decay. When an isotope decays into a very short-lived isotope and then decays again to a relatively long-lived isotope, the products of the second decay are delayed. The short-lived isotope is usually a meta-stable nuclear isomer. For example, gallium-73 decays via beta decay into germanium-73m2, which is short-lived (499 ms). The germanium isotope emits two weak gamma rays and a conversion electron. 73Ga → 73m2Ge + β− + 2γ + ν̄e; 73m2Ge → 73Ge + γ (53.4 keV) + γ (13.3 keV) + e− Because the middle isotope is so short-lived, the gamma rays are considered part of the gallium decay. Therefore, the above equations are combined. 73Ga → 73Ge + 4γ + 2e− + ν̄e However, since there is a short time delay between the beta decay and the high energy gamma emissions and the third and fourth gamma rays, it is said that the lower energy gamma rays are delayed. Delayed gamma emissions are the most common form of delayed radiation, but are not the only form. It is common for the short-lived isotopes to have delayed emissions of various particles. In these cases, it is commonly called a beta-delayed emission. This is because the decay is delayed until a beta decay takes place. For instance, nitrogen-17 emits two beta-delayed neutrons after its primary beta emission. Just as in the above delayed gamma emission, the nitrogen is not the actual source of the neutrons, the source of the neutrons is a short-lived isotope of oxygen.
See also Prompt neutron External links Flash animation of beta-delayed neutron emission Flash animation of beta-delayed proton emission Flash animation of beta-delayed alpha emission Radioactivity The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What occurs when an unstable nucleus emits a beta particle and energy? A. alpha decay B. methane decay C. nucleus decay D. beta decay Answer:
ai2_arc-466
multiple_choice
Four different students take turns pushing a large, heavy ball on the school parking lot. What is the best way to determine which student used the most force to push the ball?
[ "compare the sizes of the students", "compare the ages of the students", "compare the distances that the ball rolled", "compare the number of times the ball was rolled" ]
C
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. Document 2::: Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria. 
Introduction Traditional exam script marking began in Cambridge 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.) Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental. The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel Document 3::: Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. 
Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams. Course content Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are: Kinematics Newton's laws of motion Work, energy and power Systems of particles and linear momentum Circular motion and rotation Oscillations and gravitation. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class. This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals. This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday aftern Document 4::: Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory. In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. 
The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results. Purpose Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible. Equating in item response theory In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Four different students take turns pushing a large, heavy ball on the school parking lot. What is the best way to determine which student used the most force to push the ball? A. compare the sizes of the students B. 
compare the ages of the students C. compare the distances that the ball rolled D. compare the number of times the ball was rolled Answer:
sciq-5227
multiple_choice
What are the two ways animals can manage their internal environment?
[ "adaptation or mutation", "regulating or conforming", "sweating or sleeping", "digestion or excretion" ]
B
Relevant Documents: Document 0::: Thermal ecology is the study of the interactions between temperature and organisms. Such interactions include the effects of temperature on an organism's physiology, behavioral patterns, and relationship with its environment. While being warmer is usually associated with greater fitness, maintaining this level of heat costs a significant amount of energy. Organisms will make various trade-offs so that they can continue to operate at their preferred temperatures and optimize metabolic functions. With the emergence of climate change, scientists are investigating how species will be affected and what changes they will undergo in response. History While it is not known exactly when thermal ecology began being recognized as a new branch of science, in 1969, the Savanna River Ecology Laboratory (SREL) developed a research program on thermal stress due to heated water previously used to cool nuclear reactors being released into various nearby bodies of water. The SREL, alongside the DuPont Company Savanna River Laboratory and the Atomic Energy Commission, sponsored the first scientific symposium on thermal ecology in 1974 to discuss this issue as well as similar instances, and the second symposium was held the next year in 1975. Animals Temperature has a notable effect on animals, contributing to body growth and size, and behavioral and physical adaptations. Ways that animals can control their body temperature include generating heat through daily activity and cooling down through prolonged inactivity at night. Because this cannot be done by marine animals, they have adapted to have traits such as a small surface-area-to-volume ratio to minimize heat transfer with their environment and the creation of antifreeze in the body for survival in extreme cold conditions. Endotherms Endotherms expend a large amount of energy keeping their body temperatures warm and therefore require a large energy intake to make up for it.
There are several ways that they have evolved to solve t Document 1::: Sensory ecology is a relatively new field focusing on the information organisms obtain about their environment. It includes questions of what information is obtained, how it is obtained (the mechanism), and why the information is useful to the organism (the function). Sensory ecology is the study of how organisms acquire, process, and respond to information from their environment. All individual organisms interact with their environment (consisting of both animate and inanimate components), and exchange materials, energy, and sensory information. Ecology has generally focused on the exchanges of matter and energy, while sensory interactions have generally been studied as influences on behavior and functions of certain physiological systems (sense organs). The relatively new area of sensory ecology has emerged as more researchers focus on questions concerning information in the environment. This field covers topics ranging from the neurobiological mechanisms of sensory systems to the behavioral patterns employed in the acquisition of sensory information to the role of sensory ecology in larger evolutionary processes such as speciation and reproductive isolation. While human perception is largely visual, other species may rely more heavily on different senses. In fact, how organisms perceive and filter information from their environment varies widely. Organisms experience different perceptual worlds, also known as “umwelten”, as a result of their sensory filters. These senses range from smell (olfaction), taste (gustation), hearing (mechanoreception), and sight (vision) to pheromone detection, pain detection (nociception), electroreception and magnetoreception. Because different species rely on different senses, sensory ecologists seek to understand which environmental and sensory cues are more important in determining the behavioral patterns of certain species. 
In recent years, this information has been widely applied in conservation and management fields. Reactio Document 2::: Habit, equivalent to habitus in some applications in biology, refers variously to aspects of behaviour or structure, as follows: In zoology (particularly in ethology), habit usually refers to aspects of more or less predictable behaviour, instinctive or otherwise, though it also has broader application. Habitus refers to the characteristic form or morphology of a species. In botany, habit is the characteristic form in which a given species of plant grows (see plant habit). Behavior In zoology, habit (not to be confused with habitus as described below) usually refers to a specific behavior pattern, either adopted, learned, pathological, innate, or directly related to physiology. For example: ...the [cat] was in the habit of springing upon the [door knocker] in order to gain admission... If these sensitive parrots are kept in cages, they quickly take up the habit of feather plucking. The spider monkey has an arboreal habit and rarely ventures onto the forest floor. The brittle star has the habit of breaking off arms as a means of defense. Mode of life (or lifestyle, modus vivendi) is a concept related to habit, and it is sometimes referred to as the habit of an animal. It may refer to the locomotor capabilities, as in "(motile habit", sessile, errant, sedentary), feeding behaviour and mechanisms, nutrition mode (free-living, parasitic, holozoic, saprotrophic, trophic type), type of habitat (terrestrial, arboreal, aquatic, marine, freshwater, seawater, benthic, pelagic, nektonic, planktonic, etc.), period of activity (diurnal, nocturnal), types of ecological interaction, etc. The habits of plants and animals often change responding to changes in their environment. For example: if a species develops a disease or there is a drastic change of habitat or local climate, or it is removed to a different region, then the normal habits may change. 
Such changes may be either pathological, or adaptive. Structure In botany, habit is the general appearance, growth form, Document 3::: Ecological competence is a term that has several different meanings that are dependent on the context it is used. The term "Ecological competence" can be used in a microbial sense, and it can be used in a sociological sense. Microbiology Ecological competence is the ability of an organism, often a pathogen, to survive and compete in new habitats. In the case of plant pathogens, it is also their ability to survive between growing seasons. For example, peanut clump virus can survive in the spores of its fungal vector until a new growing season begins and it can proceed to infect its primary host again. If a pathogen does not have ecological competence it is likely to become extinct. Bacteria and other pathogens can increase their ecological competence by creating a micro-niche, or a highly specialized environment that only they can survive in. This in turn will increase plasmid stability. Increased plasmid stability leads to a higher ecological competence due to added spatial organization and regulated cell protection. Sociology Ecological competence in a sociological sense is based around the relationship that humans have formed with the environment. It is often important in certain careers that will have a drastic impact on the surrounding ecosystem. A specific example is engineers working around and planning mining operations, due to the possible negative effects it can have on the surrounding environment. Ecological competence is especially important at the managerial level so that managers may understand society's risk to nature. These risks are learned through specific ecological knowledge so that the environment can be better protected in the future. 
See also Cultural ecology Environmental education Sustainable development Ecological relationship Document 4::: Metabolic ecology is a field of ecology aiming to understand constraints on metabolic organization as important for understanding almost all life processes. Main focus is on the metabolism of individuals, emerging intra- and inter-specific patterns, and the evolutionary perspective. Two main metabolic theories that have been applied in ecology are Kooijman's Dynamic energy budget (DEB) theory and the West, Brown, and Enquist (WBE) theory of ecology. Both theories have an individual-based metabolic underpinning, but have fundamentally different assumptions. Models of individual's metabolism follow the energy uptake and allocation, and can focus on mechanisms and constraints of energy transport (transport models), or on dynamic use of stored metabolites (energy budget models). The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are the two ways animals can manage their internal environment? A. adaptation or mutation B. regulating or conforming C. sweating or sleeping D. digestion or excretion Answer:
sciq-5131
multiple_choice
What are the effects of light on plant morphology called?
[ "electrogenesis", "photomorphogenesis", "Megasporogenesis", "Microsporogenesis" ]
B
Relevant Documents: Document 0::: In developmental biology, photomorphogenesis is light-mediated development, where plant growth patterns respond to the light spectrum. This is a completely separate process from photosynthesis, where light is used as a source of energy. Phytochromes, cryptochromes, and phototropins are photochromic sensory receptors that restrict the photomorphogenic effect of light to the UV-A, UV-B, blue, and red portions of the electromagnetic spectrum. The photomorphogenesis of plants is often studied by using tightly frequency-controlled light sources to grow the plants. There are at least three stages of plant development where photomorphogenesis occurs: seed germination, seedling development, and the switch from the vegetative to the flowering stage (photoperiodism). Most research on photomorphogenesis is derived from plant studies involving several kingdoms: Fungi, Monera, Protista, and Plantae. History Theophrastus of Eresus (371 to 287 BC) may have been the first to write about photomorphogenesis. He described the different wood qualities of fir trees grown in different levels of light, likely the result of the photomorphogenic "shade-avoidance" effect. In 1686, John Ray wrote "Historia Plantarum", which mentioned the effects of etiolation (growth in the absence of light). Charles Bonnet introduced the term "etiolement" to the scientific literature in 1754 when describing his experiments, commenting that the term was already in use by gardeners. Developmental stages affected Seed germination Light has profound effects on the development of plants. The most striking effects of light are observed when a germinating seedling emerges from the soil and is exposed to light for the first time. Normally the seedling radicle (root) emerges first from the seed, and the shoot appears as the root becomes established. Later, with growth of the shoot (particularly when it emerges into the light) there is increased secondary root formation and branching.
In this coordinated progressi Document 1::: Phytomorphology is the study of the physical form and external structure of plants. This is usually considered distinct from plant anatomy, which is the study of the internal structure of plants, especially at the microscopic level. Plant morphology is useful in the visual identification of plants. Recent studies in molecular biology started to investigate the molecular processes involved in determining the conservation and diversification of plant morphologies. In these studies transcriptome conservation patterns were found to mark crucial ontogenetic transitions during the plant life cycle which may result in evolutionary constraints limiting diversification. Scope Plant morphology "represents a study of the development, form, and structure of plants, and, by implication, an attempt to interpret these on the basis of similarity of plan and origin". There are four major areas of investigation in plant morphology, and each overlaps with another field of the biological sciences. First of all, morphology is comparative, meaning that the morphologist examines structures in many different plants of the same or different species, then draws comparisons and formulates ideas about similarities. When structures in different species are believed to exist and develop as a result of common, inherited genetic pathways, those structures are termed homologous. For example, the leaves of pine, oak, and cabbage all look very different, but share certain basic structures and arrangement of parts. The homology of leaves is an easy conclusion to make. The plant morphologist goes further, and discovers that the spines of cactus also share the same basic structure and development as leaves in other plants, and therefore cactus spines are homologous to leaves as well. This aspect of plant morphology overlaps with the study of plant evolution and paleobotany. 
Secondly, plant morphology observes both the vegetative (somatic) structures of plants, as well as the reproductive str Document 2::: Plants depend on epigenetic processes for proper function. Epigenetics is defined as "the study of changes in gene function that are mitotically and/or meiotically heritable and that do not entail a change in DNA sequence" (Wu et al. 2001). The area of study examines protein interactions with DNA and its associated components, including histones and various other modifications such as methylation, which alter the rate or target of transcription. Epi-alleles and epi-mutants, much like their genetic counterparts, describe changes in phenotypes due to epigenetic mechanisms. Epigenetics in plants has attracted scientific enthusiasm because of its importance in agriculture. Background and history In the past, macroscopic observations on plants led to basic understandings of how plants respond to their environments and grow. While these investigations could somewhat correlate cause and effect as a plant develops, they could not truly explain the mechanisms at work without inspection at the molecular level. Certain studies provided simplistic models with the groundwork for further exploration and eventual explanation through epigenetics. In 1918, Gassner published findings that noted the necessity of a cold phase in order for proper plant growth. Meanwhile, Garner and Allard examined the importance of the duration of light exposure to plant growth in 1920. Gassner's work would shape the conceptualization of vernalization which involves epigenetic changes in plants after a period of cold that leads to development of flowering (Heo and Sung et al. 2011). In a similar manner, Garner and Allard's efforts would gather an awareness of photoperiodism which involves epigenetic modifications following the duration of nighttime which enable flowering (Sun et al. 2014). 
Rudimentary comprehensions set precedent for later molecular evaluation and, eventually, a more complete view of how plants operate. Modern epigenetic work depends heavily on bioinformatics to gather large quant Document 3::: Organography (from Greek , organo, "organ"; and , -graphy) is the scientific description of the structure and function of the organs of living things. History Organography as a scientific study starts with Aristotle, who considered the parts of plants as "organs" and began to consider the relationship between different organs and different functions. In the 17th century Joachim Jung, clearly articulated that plants are composed of different organ types such as root, stem and leaf, and he went on to define these organ types on the basis of form and position. In the following century Caspar Friedrich Wolff was able to follow the development of organs from the "growing points" or apical meristems. He noted the commonality of development between foliage leaves and floral leaves (e.g. petals) and wrote: "In the whole plant, whose parts we wonder at as being, at the first glance, so extraordinarily diverse, I finally perceive and recognize nothing beyond leaves and stem (for the root may be regarded as a stem). Consequently all parts of the plant, except the stem, are modified leaves." Similar views were propounded at by Goethe in his well-known treatise. He wrote: "The underlying relationship between the various external parts of the plant, such as the leaves, the calyx, the corolla, the stamens, which develop one after the other and, as it were, out of one another, has long been generally recognized by investigators, and has in fact been specially studied; and the operation by which one and the same organ presents itself to us in various forms has been termed Metamorphosis of Plants." See also morphology (biology) Document 4::: Phenomics is the systematic study of traits that make up a phenotype. 
It was coined by UC Berkeley and LBNL scientist Steven A. Garan. As such, it is a transdisciplinary area of research that involves biology, data sciences, engineering and other fields. Phenomics is concerned with the measurement of the phenotype, where a phenome is a set of traits (physical and biochemical traits) that can be produced by a given organism over the course of development and in response to genetic mutation and environmental influences. It is also important to remember that an organism's phenotype changes with time. The relationship between phenotype and genotype enables researchers to understand and study pleiotropy. Phenomics concepts are used in functional genomics, pharmaceutical research, metabolic engineering, agricultural research, and increasingly in phylogenetics. Technical challenges involve improving, both qualitatively and quantitatively, the capacity to measure phenomes. Applications Plant sciences In plant sciences, phenomics research occurs in both field and controlled environments. Field phenomics encompasses the measurement of phenotypes that occur in both cultivated and natural conditions, whereas controlled environment phenomics research involves the use of glass houses, growth chambers, and other systems where growth conditions can be manipulated. The University of Arizona's Field Scanner in Maricopa, Arizona is a platform developed to measure field phenotypes. Controlled environment systems include the Enviratron at Iowa State University, the Plant Cultivation Hall under construction at IPK, and platforms at the Donald Danforth Plant Science Center, the University of Nebraska-Lincoln, and elsewhere. Standards, methods, tools, and instrumentation A Minimal Information About a Plant Phenotyping Experiment (MIAPPE) standard is available and in use among many researchers collecting and organizing plant phenomics data.
A diverse set of computer vision methods exist The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are the effects of light on plant morphology called? A. electrogenesis B. photomorphogenesis C. Megasporogenesis D. Microsporogenesis Answer:
sciq-387
multiple_choice
Zinc reacting with hydrochloric acid produces bubbles of which gas?
[ "mustard", "helium", "hydrogen", "carbon" ]
C
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. 
Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 2::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. 
The goal of knowledge space theory is to provide a language by which exams can communicate what the student can do and what the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about that subject is then a subset of that set; the set of
To become an Associate, one needs to complete Part I and Part II of the accreditation process, gain three years of recognized work experience, and complete a professionalism course. To become a Fellow, candidates must complete Parts I, II, and III, and take a professionalism course. Work experience is not required, however, as the Institute deems that those who have successfully completed Part III have demonstrated a sufficient level of professionalism. China Actuarial exams were suspended in 2014 but reintroduced in 2023. Denmark In Denmark it normal
carbon Answer:
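The knowledge space model described in Document 2 above (feasible states of knowledge as subsets of a skill domain) can be sketched in code. This is an illustrative sketch, not from the source: the skill names, the particular state family, and the closure conditions checked (containing the empty set and the full domain, and being closed under union, as in Doignon and Falmagne's formulation) are assumptions for the example.

```python
# Illustrative sketch: a tiny knowledge space over three skills.
# A feasible state is a set of mastered skills; "addition" is assumed to
# require "counting", and "multiplication" to require "addition".
from itertools import combinations

DOMAIN = frozenset({"counting", "addition", "multiplication"})

STATES = [
    frozenset(),
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    DOMAIN,
]

def is_knowledge_space(domain, states):
    """Check the defining closure properties of a knowledge space:
    it contains the empty state and the full domain, and the union of
    any two feasible states is again feasible."""
    family = set(states)
    if frozenset() not in family or frozenset(domain) not in family:
        return False
    return all(a | b in family for a, b in combinations(family, 2))

print(is_knowledge_space(DOMAIN, STATES))  # True
```

Dropping the prerequisite structure breaks the model: a family containing {"counting"} and {"addition"} but not their union is not a knowledge space under these conditions.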
sciq-5668
multiple_choice
Producing sperm and secreting testosterone are the main functions of what system?
[ "endocrine system", "female reproductive system", "male reproductive system", "pollination" ]
C
Relevant Documents: Document 0::: Reproductive biology includes both sexual and asexual reproduction. Reproductive biology includes a wide number of fields: Reproductive systems Endocrinology Sexual development (Puberty) Sexual maturity Reproduction Fertility Human reproductive biology Endocrinology Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands. Reproductive systems Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring. Female reproductive system The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth. These structures include: Ovaries Oviducts Uterus Vagina Mammary Glands Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female. Male reproductive system The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia. Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system. However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract. 
Animal Reproductive Biology Animal reproduction oc Document 1::: The reproductive system of an organism, also known as the genital system, is the biological system made up of all the anatomical organs involved in sexual reproduction. Many non-living substances such as fluids, hormones, and pheromones are also important accessories to the reproductive system. Unlike most organ systems, the sexes of differentiated species often have significant differences. These differences allow for a combination of genetic material between two individuals, which allows for the possibility of greater genetic fitness of the offspring. Animals In mammals, the major organs of the reproductive system include the external genitalia (penis and vulva) as well as a number of internal organs, including the gamete-producing gonads (testicles and ovaries). Diseases of the human reproductive system are very common and widespread, particularly communicable sexually transmitted diseases. Most other vertebrates have similar reproductive systems consisting of gonads, ducts, and openings. However, there is a great diversity of physical adaptations as well as reproductive strategies in every group of vertebrates. Vertebrates Vertebrates share key elements of their reproductive systems. They all have gamete-producing organs known as gonads. In females, these gonads are then connected by oviducts to an opening to the outside of the body, typically the cloaca, but sometimes to a unique pore such as a vagina or intromittent organ. Humans The human reproductive system usually involves internal fertilization by sexual intercourse. During this process, the male inserts their erect penis into the female's vagina and ejaculates semen, which contains sperm. The sperm then travels through the vagina and cervix into the uterus or fallopian tubes for fertilization of the ovum. 
Upon successful fertilization and implantation, gestation of the fetus then occurs within the female's uterus for approximately nine months; this process is known as pregnancy in humans. Gestati
Mice In mice, prenatal testosterone transfer causes higher blood concentrations of testosterone in 2M females when compared to 1M or 0M females. This has a variety of consequences on later female behavior, physiology, and morphology. Below is a table comparing physiological, morphological, and behavioral diffe Document 3::: Systems Biology in Reproductive Medicine is a peer-reviewed medical journal that covers the use of systems approaches including genomic, cellular, proteomic, metabolomic, bioinformatic, molecular, and biochemical, to address fundamental questions in reproductive biology, reproductive medicine, and translational research. The journal publishes research involving human and animal gametes, stem cells, developmental biology, toxicology, and clinical care in reproductive medicine. Editor The editor-in-chief of Systems Biology in Reproductive Medicine is S. A. Krawetz (Wayne State University). Document 4::: Spermatogenesis is the process by which haploid spermatozoa develop from germ cells in the seminiferous tubules of the testis. This process starts with the mitotic division of the stem cells located close to the basement membrane of the tubules. These cells are called spermatogonial stem cells. The mitotic division of these produces two types of cells. Type A cells replenish the stem cells, and type B cells differentiate into primary spermatocytes. The primary spermatocyte divides meiotically (Meiosis I) into two secondary spermatocytes; each secondary spermatocyte divides into two equal haploid spermatids by Meiosis II. The spermatids are transformed into spermatozoa (sperm) by the process of spermiogenesis. These develop into mature spermatozoa, also known as sperm cells. Thus, the primary spermatocyte gives rise to two cells, the secondary spermatocytes, and the two secondary spermatocytes by their subdivision produce four spermatozoa and four haploid cells. Spermatozoa are the mature male gametes in many sexually reproducing organisms. 
Thus, spermatogenesis is the male version of gametogenesis, of which the female equivalent is oogenesis. In mammals it occurs in the seminiferous tubules of the male testes in a stepwise fashion. Spermatogenesis is highly dependent upon optimal conditions for the process to occur correctly, and is essential for sexual reproduction. DNA methylation and histone modification have been implicated in the regulation of this process. It starts during puberty and usually continues uninterrupted until death, although a slight decrease can be discerned in the quantity of produced sperm with increase in age (see Male infertility). Spermatogenesis starts in the bottom part of seminiferous tubes and, progressively, cells go deeper into the tubes, moving along them until mature spermatozoa reach the lumen, where they are deposited. The division happens asynchronically; if the tube is cut transversally one could observe different The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Producing sperm and secreting testosterone are the main functions of what system? A. endocrine system B. female reproductive system C. male reproductive system D. pollination Answer:
sciq-4844
multiple_choice
Series and parallel circuits are two basic types of what?
[ "mechanical circuits", "chemical circuits", "electric circuits", "mutual circuits" ]
C
Relevant Documents: Document 0::: Mathematical methods are integral to the study of electronics. Mathematics in electronics Electronics engineering careers usually include courses in calculus (single and multivariable), complex analysis, differential equations (both ordinary and partial), linear algebra and probability. Fourier analysis and Z-transforms are also subjects which are usually included in electrical engineering programs. Laplace transform can simplify computing RLC circuit behaviour. Basic applications A number of electrical laws apply to all electrical networks. These include Faraday's law of induction: Any change in the magnetic environment of a coil of wire will cause a voltage (emf) to be "induced" in the coil. Gauss's Law: The total of the electric flux out of a closed surface is equal to the charge enclosed divided by the permittivity. Kirchhoff's current law: the sum of all currents entering a node is equal to the sum of all currents leaving the node; equivalently, the sum of total current at a junction is zero. Kirchhoff's voltage law: the directed sum of the electrical potential differences around a circuit must be zero. Ohm's law: the voltage across a resistor is the product of its resistance and the current flowing through it, at constant temperature. Norton's theorem: any two-terminal collection of voltage sources and resistors is electrically equivalent to an ideal current source in parallel with a single resistor. Thévenin's theorem: any two-terminal combination of voltage sources and resistors is electrically equivalent to a single voltage source in series with a single resistor. Millman's theorem: the voltage on the ends of branches in parallel is equal to the sum of the currents flowing in every branch divided by the total equivalent conductance. See also Analysis of resistive circuits. Circuit analysis is the study of methods to solve linear systems for an unknown variable. 
Circuit analysis Components There are many electronic components currently used and they all have thei Document 1::: In electrical engineering, electrical terms are associated into pairs called duals. A dual of a relationship is formed by interchanging voltage and current in an expression. The dual expression thus produced is of the same form, and the reason that the dual is always a valid statement can be traced to the duality of electricity and magnetism. Here is a partial list of electrical dualities: voltage – current parallel – serial (circuits) resistance – conductance voltage division – current division impedance – admittance capacitance – inductance reactance – susceptance short circuit – open circuit Kirchhoff's current law – Kirchhoff's voltage law. Thévenin's theorem – Norton's theorem History The use of duality in circuit theory is due to Alexander Russell who published his ideas in 1904. Examples Constitutive relations Resistor and conductor (Ohm's law) Capacitor and inductor – differential form Capacitor and inductor – integral form Voltage division — current division Impedance and admittance Resistor and conductor Capacitor and inductor See also Duality (electricity and magnetism) Duality (mechanical engineering) Dual impedance Dual graph Mechanical–electrical analogies List of dualities Document 2::: Two-terminal components and electrical networks can be connected in series or parallel. The resulting electrical network will have two terminals, and itself can participate in a series or parallel topology. Whether a two-terminal "object" is an electrical component (e.g. a resistor) or an electrical network (e.g. resistors in series) is a matter of perspective. This article will use "component" to refer to a two-terminal "object" that participates in the series/parallel networks. 
Components connected in series are connected along a single "electrical path", and each component has the same electric current through it, equal to the current through the network. The voltage across the network is equal to the sum of the voltages across each component. Components connected in parallel are connected along multiple paths, and each component has the same voltage across it, equal to the voltage across the network. The current through the network is equal to the sum of the currents through each component. The two preceding statements are equivalent, except for exchanging the role of voltage and current. A circuit composed solely of components connected in series is known as a series circuit; likewise, one connected completely in parallel is known as a parallel circuit. Many circuits can be analyzed as a combination of series and parallel circuits, along with other configurations. In a series circuit, the current that flows through each of the components is the same, and the voltage across the circuit is the sum of the individual voltage drops across each component. In a parallel circuit, the voltage across each of the components is the same, and the total current is the sum of the currents flowing through each component. Consider a very simple circuit consisting of four light bulbs and a 12-volt automotive battery. If a wire joins the battery to one bulb, to the next bulb, to the next bulb, to the next bulb, then back to the battery in one continuous loop, the bulbs are s Document 3::: The Brune test (named after the South African mathematician Otto Brune) is used to check the permissibility of the combination of two or more two-port networks (or quadripoles) in electrical circuit analysis. The test determines whether the network still meets the port condition after the two-ports have been combined. The test is a sufficient, but not necessary, test. 
Series-series connection To check if two two-port networks can be connected in a series-series configuration, first of all just the input ports are connected in series, a voltage is applied to the input and the open-circuit voltage is measured/calculated between the output terminals to be connected. If there is a voltage drop, the two-port networks cannot be combined in series. The same test is repeated from the output side of the two-port networks (series connection of the output ports, application of a voltage to the output, measurement/calculation of the open-circuit voltage between the input terminals to be connected). Only if there is no voltage drop in both cases, a combination of the two-ports networks is permissible. examples The first example fails the series-series test because the through path between the lower terminals of 2-port #1 short-circuit part of the circuitry in 2-port #2. The second example passes the series-series test. The 2-ports are the same as in the first example, but 2-port #2 has been flipped or equivalently the choice of terminals to be placed in series has changed. The result is that the through path between the lower terminals of 2-port #1 simply provide a parallel path to the through path between the upper terminals of 2-port #2. The third example is the same as the first example, except that it passes the Brune test because ideal isolating transformers have been placed at the right side terminals which break the through paths. Parallel-parallel connection To check if two two-port networks can be connected in a parallel-parallel configuration, first of all Document 4::: IEEE Transactions on Circuits and Systems I: Regular Papers (sometimes abbreviated IEEE TCAS-I) is a monthly peer-reviewed scientific journal covering the theory, analysis, design, and practical implementations of electrical and electronic circuits, and the application of circuit techniques to systems and to signal processing. 
It is published by the IEEE Circuits and Systems Society. The journal was established in 1952 and the editor-in-chief is Weisheng Zhao (Beihang University). According to the Journal Citation Reports, the 2020 impact factor of the journal is 3.605. Title history Adapted from IEEE Xplore. 1992–2003: IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 1974–1991: IEEE Transactions on Circuits and Systems 1963–1973: IEEE Transactions on Circuit Theory 1954–1962: IRE Transactions on Circuit Theory 1952–1954: Transactions of the IRE Professional Group on Circuit Theory Editors-in-chief The following people are or have been editor-in-chief: The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Series and parallel circuits are two basic types of what? A. mechanical circuits B. chemical circuits C. electric circuits D. mutual circuits Answer:
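The series and parallel rules quoted in Document 2 above (in series, voltages add, so resistances add; in parallel, currents add, so conductances add) can be sketched in code. This is an illustrative sketch, not from the source: the function names and the 6-ohm value assumed for the four light bulbs in the battery example are assumptions for the example.

```python
# Illustrative sketch: equivalent resistance of series and parallel networks.
def series(*resistances):
    """Series: the same current flows through every component and the
    voltages add, so the resistances add."""
    return sum(resistances)

def parallel(*resistances):
    """Parallel: the same voltage appears across every component and the
    currents add, so the conductances (1/R) add."""
    return 1.0 / sum(1.0 / r for r in resistances)

# Four bulbs on a 12 V battery, assuming 6-ohm bulbs for illustration:
r_series = series(6, 6, 6, 6)      # 24 ohms; one 0.5 A loop current
r_parallel = parallel(6, 6, 6, 6)  # 1.5 ohms; each bulb sees the full 12 V
print(r_series, r_parallel)        # 24 1.5
```

The two functions are duals in exactly the sense listed in Document 1: swapping voltage for current turns the series rule into the parallel rule.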
sciq-6264
multiple_choice
What causes blue litmus to turn red?
[ "oxygen", "base", "acid", "carbon" ]
C
Relevant Documents: Document 0::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular was 630, while for Ecological it was 591. On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions relating respectively to ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. 
Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 1::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam and there were no computer-based versions of it. ETS administered this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required this exam score as a part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test. Scores were scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores were 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. 
A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 2::: Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education. Structure A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior. Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false multiple choice questions. But during the 2000s, educators found that SBAs would be superior. Document 3::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. 
Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, or (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 4::: Mnemonics are used to help memorize the electronic color codes for resistors. Mnemonics describing specific and relatable scenarios are more memorable than abstract phrases. Resistor color code The first letter of the color code is matched by order of increasing magnitude. 
The electronic color codes, in order, are: 0 = Black 1 = Brown 2 = Red 3 = Orange 4 = Yellow 5 = Green 6 = Blue 7 = Violet 8 = Gray 9 = White. Easy to remember A mnemonic which includes color name(s) generally reduces the chances of confusing black and brown. Some mnemonics that are easy to remember: Big Boys Race Our Young Girls But Violet Generally Wins. Better Be Right Or Your Great Big Venture Goes West. Beetle Bailey Runs Over Your General Before Very Good Witnesses. Beach Bums Rarely Offer You Gatorade But Very Good Water. Buster Brown Races Our Young Girls But Violet Generally Wins. Better Be Right Or Your Great Big Vacation Goes Wrong. Better Be Right Or Your Great Big Values Go Wrong. Better Be Right Or Your Great Big Plan Goes Wrong. (with P = Purple for Violet) Back-Breaking Rascals Often Yield Grudgingly But Virtuous Gentlemen Will Give Shelter Nobly. (with tolerance bands Gold, Silver or None) Better Be Right Or Your Great Big Plan Goes Wrong - Go Start Now! Black Beetles Running Over Your Garden Bring Very Grey Weather. Bad Booze Rots Our Young Guts But Vodka Goes Well – get some now. Bad Boys Run Over Yellow Gardenias Behind Victory Garden Walls. Bat Brained Resistor Order You Gotta Be Very Good With. Betty Brown Runs Over Your Garden But Violet Gingerly Walks. Big Beautiful Roses Occupy Your Garden But Violets Grow Wild. Big Brown Rabbits Often Yield Great Big Vocal Groans When Gingerly Slapped Needlessly. Black Bananas Really Offend Your Girlfriend But Violets Get Welcomed. Black Birds Run Over Your Gay Barely Visible Grey Worms. Badly Burnt Resistors On Your Ground Bus Void General Warranty. Billy Brown Ran Out Yelling Get The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What causes blue litmus to turn red? A. oxygen B. base C. acid D. carbon Answer:
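The color-to-digit mapping listed in Document 4 above (black = 0 through white = 9) can be sketched as a small decoder. This is an illustrative sketch, not from the source: the function name and the three-band scheme (two significant digits plus a power-of-ten multiplier band) are assumptions for the example.

```python
# Illustrative sketch: decode a three-band resistor from its colors.
COLORS = ["black", "brown", "red", "orange", "yellow",
          "green", "blue", "violet", "gray", "white"]
DIGIT = {color: i for i, color in enumerate(COLORS)}  # black=0 ... white=9

def resistance_ohms(band1, band2, multiplier_band):
    """First two bands give the significant digits; the third band gives
    the power-of-ten multiplier."""
    return (10 * DIGIT[band1] + DIGIT[band2]) * 10 ** DIGIT[multiplier_band]

print(resistance_ohms("brown", "black", "red"))       # 1000 (1 kΩ)
print(resistance_ohms("yellow", "violet", "orange"))  # 47000 (47 kΩ)
```

The mnemonics above all encode the same ordered list that `COLORS` holds; the first letter of each word maps to one color in order of increasing digit value.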
sciq-5849
multiple_choice
The chromosomal theory of inheritance proposed that what reside on chromosomes?
[ "genes", "molecules", "atoms", "rna" ]
A
Relevant Documents: Document 0::: The Boveri–Sutton chromosome theory (also known as the chromosome theory of inheritance or the Sutton–Boveri theory) is a fundamental unifying theory of genetics which identifies chromosomes as the carriers of genetic material. It correctly explains the mechanism underlying the laws of Mendelian inheritance by identifying chromosomes with the paired factors (particles) required by Mendel's laws. It also states that chromosomes are linear structures with genes located at specific sites called loci along them. It states simply that chromosomes, which are seen in all dividing cells and pass from one generation to the next, are the basis for all genetic inheritance. Over a period of time random mutation creates changes in the DNA sequence of a gene. Genes are located on chromosomes. Background The chromosome theory of inheritance is credited to papers by Walter Sutton in 1902 and 1903, as well as to independent work by Theodor Boveri during roughly the same period. Boveri was studying sea urchins, in which he found that all the chromosomes had to be present for proper embryonic development to take place. Sutton's work with grasshoppers showed that chromosomes occur in matched pairs of maternal and paternal chromosomes which separate during meiosis and "may constitute the physical basis of the Mendelian law of heredity". This groundbreaking work led E.B. Wilson in his classic text to name the chromosome theory of inheritance the "Sutton-Boveri Theory". Wilson was close to both men since the young Sutton was his student and the prominent Boveri was his friend (in fact, Wilson dedicated the aforementioned book to Boveri). Although the naming precedence is now often reversed to "Boveri-Sutton", there are some who argue that Boveri did not actually articulate the theory until 1904. 
Verification The proposal that chromosomes carried the factors of Mendelian inheritance was initially controversial, but in 1905 it gained strong support when Nettie Stevens showed that the Document 1::: The Bateson Lecture is an annual genetics lecture held as a part of the John Innes Symposium since 1972, in honour of the first Director of the John Innes Centre, William Bateson. Past Lecturers Source: John Innes Centre 1951 Sir Ronald Fisher - "Statistical methods in Genetics" 1953 Julian Huxley - "Polymorphic variation: a problem in genetical natural history" 1955 Sidney C. Harland - "Plant breeding: present position and future perspective" 1957 J.B.S. Haldane - "The theory of evolution before and after Bateson" 1959 Kenneth Mather - "Genetics Pure and Applied" 1972 William Hayes - "Molecular genetics in retrospect" 1974 Guido Pontecorvo - "Alternatives to sex: genetics by means of somatic cells" 1976 Max F. Perutz - "Mechanism of respiratory haemoglobin" 1979 J. Heslop-Harrison - "The forgotten generation: some thoughts on the genetics and physiology of Angiosperm Gametophytes " 1982 Sydney Brenner - "Molecular genetics in prospect" 1984 W.W. Franke - "The cytoskeleton - the insoluble architectural framework of the cell" 1986 Arthur Kornberg - "Enzyme systems initiating replication at the origin of the E. coli chromosome" 1988 Gottfried Schatz - "Interaction between mitochondria and the nucleus" 1990 Christiane Nusslein-Volhard - "Axis determination in the Drosophila embryo" 1992 Frank Stahl - "Genetic recombination: thinking about it in phage and fungi" 1994 Ira Herskowitz - "Violins and orchestras: what a unicellular organism can do" 1996 R.J.P. 
Williams - "An Introduction to Protein Machines" 1999 Eugene Nester - "DNA and Protein Transfer from Bacteria to Eukaryotes - the Agrobacterium story" 2001 David Botstein - "Extracting biological information from DNA Microarray Data" 2002 Elliot Meyerowitz 2003 Thomas Steitz - "The Macromolecular machines of gene expression" 2008 Sean Carroll - "Endless flies most beautiful: the role of cis-regulatory sequences in the evolution of animal form" 2009 Sir Paul Nurse - "Genetic transmission through Document 2::: This is a list of topics in molecular biology. See also index of biochemistry articles. Document 3::: Walther Flemming (21 April 1843 – 4 August 1905) was a German biologist and a founder of cytogenetics. He was born in Sachsenberg (now part of Schwerin) as the fifth child and only son of the psychiatrist Carl Friedrich Flemming (1799–1880) and his second wife, Auguste Winter. He graduated from the Gymnasium der Residenzstadt, where one of his colleagues and lifelong friends was writer Heinrich Seidel. Career Flemming trained in medicine at the University of Prague, graduating in 1868. Afterwards, he served in 1870–71 as a military physician in the Franco-Prussian War. From 1873 to 1876 he worked as a teacher at the University of Prague. In 1876 he accepted a post as a professor of anatomy at the University of Kiel. He became the director of the Anatomical Institute and stayed there until his death. With the use of aniline dyes he was able to find a structure which strongly absorbed basophilic dyes, which he named chromatin. He identified that chromatin was correlated to threadlike structures in the cell nucleus – the chromosomes (meaning coloured bodies), which were named thus later by German anatomist Wilhelm von Waldeyer-Hartz (1841–1923). The Belgian scientist Edouard Van Beneden (1846–1910) had also observed them, independently. The centrosome was discovered jointly by Walther Flemming in 1875 and Edouard Van Beneden in 1876. 
Flemming investigated the process of cell division and the distribution of chromosomes to the daughter nuclei, a process he called mitosis from the Greek word for thread. However, he did not see the splitting into identical halves, the daughter chromatids. He studied mitosis both in vivo and in stained preparations, using as the source of biological material the fins and gills of salamanders. These results were published first in 1878 and in 1882 in the seminal book Zellsubstanz, Kern und Zelltheilung (1882; Cell substance, nucleus and cell division). On the basis of his discoveries, Flemming surmised for the first time that all cel Document 4::: The bead theory is a disproved hypothesis that genes are arranged on the chromosome like beads on a necklace. This theory was first proposed by Thomas Hunt Morgan after discovering genes through his work with breeding red- and white-eyed fruit flies. According to this theory, the existence of a gene as a unit of inheritance is recognized through its mutant alleles. A mutant allele affects a single phenotypic character, maps to one chromosome locus, gives a mutant phenotype when paired and shows a Mendelian ratio when intercrossed. Several tenets of the bead theory are worth emphasizing: 1. The gene is viewed as a fundamental unit of structure, indivisible by crossing over. Crossing over takes place between genes (the beads in this model) but never within them. 2. The gene is viewed as the fundamental unit of change or mutation. It changes in toto from one allelic form into another; there are no smaller components within it that can change. 3. The gene is viewed as the fundamental unit of function (although the precise function of a gene is not specified in this model). Parts of a gene, if they exist, cannot function. Guido Pontecorvo continued to work on the basis of this theory until Seymour Benzer showed in the 1950s that the bead theory was not correct. 
He demonstrated that a gene can be defined as a unit of function. A gene can be subdivided into a linear array of sites that are mutable and that can be recombined. The smallest units of mutation and recombination are now known to be correlated with single nucleotide pairs. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The chromosomal theory of inheritance proposed that what reside on chromosomes? A. genes B. molecules C. atoms D. rna Answer:
sciq-11453
multiple_choice
What contains carbon, hydrogen, and oxygen in a ratio of 1:2:1?
[ "helium", "magnesium", "sodium", "carbohydrate" ]
D
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: increases; decreases; stays the same; impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Carbon is a primary component of all known life on Earth, representing approximately 45–50% of all dry biomass. Carbon compounds occur naturally in great abundance on Earth. Complex biological molecules consist of carbon atoms bonded with other elements, especially oxygen and hydrogen and frequently also nitrogen, phosphorus, and sulfur (collectively known as CHNOPS). Because it is lightweight and relatively small in size, carbon molecules are easy for enzymes to manipulate. It is frequently assumed in astrobiology that if life exists elsewhere in the Universe, it will also be carbon-based. Critics refer to this assumption as carbon chauvinism. Characteristics Carbon is capable of forming a vast number of compounds, more than any other element, with almost ten million compounds described to date, and yet that number is but a fraction of the number of theoretically possible compounds under standard conditions. The enormous diversity of carbon-containing compounds, known as organic compounds, has led to a distinction between them and compounds that do not contain carbon, known as inorganic compounds. The branch of chemistry that studies organic compounds is known as organic chemistry. Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass, after hydrogen, helium, and oxygen. Carbon's widespread abundance, its ability to form stable bonds with numerous other elements, and its unusual ability to form polymers at the temperatures commonly encountered on Earth enables it to serve as a common element of all known living organisms. In a 2018 study, carbon was found to compose approximately 550 billion tons of all life on Earth. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen. 
The most important characteristics of carbon as a basis for the chemistry of life are that each carbon atom is capable of forming up to four valence bonds with other atoms simultaneously Document 2::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. 
A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 3::: This is an index of lists of molecules (i.e. by year, number of atoms, etc.). Millions of molecules have existed in the universe since before the formation of Earth. Three of them, carbon dioxide, water and oxygen were necessary for the growth of life. Although humanity had always been surrounded by these substances, it has not always known what they were composed of. By century The following is an index of list of molecules organized by time of discovery of their molecular formula or their specific molecule in case of isomers: List of compounds By number of carbon atoms in the molecule List of compounds with carbon number 1 List of compounds with carbon number 2 List of compounds with carbon number 3 List of compounds with carbon number 4 List of compounds with carbon number 5 List of compounds with carbon number 6 List of compounds with carbon number 7 List of compounds with carbon number 8 List of compounds with carbon number 9 List of compounds with carbon number 10 List of compounds with carbon number 11 List of compounds with carbon number 12 List of compounds with carbon number 13 List of compounds with carbon number 14 List of compounds with carbon number 15 List of compounds with carbon number 16 List of compounds with carbon number 17 List of compounds with carbon number 18 List of compounds with carbon number 19 List of compounds with carbon number 20 List of compounds with carbon number 21 List of compounds with carbon number 22 List of compounds 
with carbon number 23 List of compounds with carbon number 24 List of compounds with carbon numbers 25-29 List of compounds with carbon numbers 30-39 List of compounds with carbon numbers 40-49 List of compounds with carbon numbers 50+ Other lists List of interstellar and circumstellar molecules List of gases List of molecules with unusual names See also Molecule Empirical formula Chemical formula Chemical structure Chemical compound Chemical bond Coordination complex L Document 4::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What contains carbon, hydrogen, and oxygen in a ratio of 1:2:1? A. helium B. magnesium C. sodium D. carbohydrate Answer:
sciq-5820
multiple_choice
What is the name of the cell that results when a sperm nucleus fuses with an egg nucleus?
[ "t cell", "a filament", "a zygote", "a cytoplasm" ]
C
Relevant Documents: Document 0::: Spermatogenesis is the process by which haploid spermatozoa develop from germ cells in the seminiferous tubules of the testis. This process starts with the mitotic division of the stem cells located close to the basement membrane of the tubules. These cells are called spermatogonial stem cells. The mitotic division of these produces two types of cells. Type A cells replenish the stem cells, and type B cells differentiate into primary spermatocytes. The primary spermatocyte divides meiotically (Meiosis I) into two secondary spermatocytes; each secondary spermatocyte divides into two equal haploid spermatids by Meiosis II. The spermatids are transformed into spermatozoa (sperm) by the process of spermiogenesis. These develop into mature spermatozoa, also known as sperm cells. Thus, the primary spermatocyte gives rise to two cells, the secondary spermatocytes, and the two secondary spermatocytes by their subdivision produce four spermatozoa and four haploid cells. Spermatozoa are the mature male gametes in many sexually reproducing organisms. Thus, spermatogenesis is the male version of gametogenesis, of which the female equivalent is oogenesis. In mammals it occurs in the seminiferous tubules of the male testes in a stepwise fashion. Spermatogenesis is highly dependent upon optimal conditions for the process to occur correctly, and is essential for sexual reproduction. DNA methylation and histone modification have been implicated in the regulation of this process. It starts during puberty and usually continues uninterrupted until death, although a slight decrease can be discerned in the quantity of produced sperm with increasing age (see Male infertility). Spermatogenesis starts in the bottom part of the seminiferous tubules and, progressively, cells go deeper into the tubules, moving along them until mature spermatozoa reach the lumen, where they are deposited. 
The division happens asynchronically; if the tube is cut transversally one could observe different Document 1::: The spermatid is the haploid male gametid that results from division of secondary spermatocytes. As a result of meiosis, each spermatid contains only half of the genetic material present in the original primary spermatocyte. Spermatids are connected by cytoplasmic material and have superfluous cytoplasmic material around their nuclei. When formed, early round spermatids must undergo further maturational events to develop into spermatozoa, a process termed spermiogenesis (also termed spermeteliosis). The spermatids begin to grow a living thread, develop a thickened mid-piece where the mitochondria become localised, and form an acrosome. Spermatid DNA also undergoes packaging, becoming highly condensed. The DNA is packaged firstly with specific nuclear basic proteins, which are subsequently replaced with protamines during spermatid elongation. The resultant tightly packed chromatin is transcriptionally inactive. In 2016 scientists at Nanjing Medical University claimed they had produced cells resembling mouse spermatids artificially from stem cells. They injected these spermatids into mouse eggs and produced pups. DNA repair As postmeiotic germ cells develop to mature sperm they progressively lose the ability to repair DNA damage that may then accumulate and be transmitted to the zygote and ultimately the embryo. In particular, the repair of DNA double-strand breaks by the non-homologous end joining pathway, although present in round spermatids, appears to be lost as they develop into elongated spermatids. Additional images See also List of distinct cell types in the adult human body Document 2::: Spermatozoa develop in the seminiferous tubules of the testes. During their development the spermatogonia proceed through meiosis to become spermatozoa. 
Many changes occur during this process: the DNA in nuclei becomes condensed; the acrosome develops as a structure close to the nucleus. The acrosome is derived from the Golgi apparatus and contains hydrolytic enzymes important for fusion of the spermatozoon with an egg cell. During spermiogenesis the nucleus condenses and changes shape. Abnormal shape change is a feature of sperm in male infertility. The acroplaxome is a structure found between the acrosomal membrane and the nuclear membrane. The acroplaxome contains structural proteins including keratin 5, F-actin and profilin IV. Document 3::: In biology, a blastomere is a type of cell produced by cell division (cleavage) of the zygote after fertilization; blastomeres are an essential part of blastula formation, and blastocyst formation in mammals. Human blastomere characteristics In humans, blastomere formation begins immediately following fertilization and continues through the first week of embryonic development. About 90 minutes after fertilization, the zygote divides into two cells. The two-cell blastomere state, present after the zygote first divides, is considered the earliest mitotic product of the fertilized oocyte. These mitotic divisions continue and result in a grouping of cells called blastomeres. During this process, the total size of the embryo does not increase, so each division results in smaller and smaller cells. When the zygote contains 16 to 32 blastomeres it is referred to as a morula. These are the preliminary stages in the embryo beginning to form. Once this begins, microtubules within the morula's cytosolic material in the blastomere cells can develop into important membrane functions, such as sodium pumps. These pumps allow the inside of the embryo to fill with blastocoelic fluid, which supports the further growth of life. The blastomere is considered totipotent; that is, blastomeres are capable of developing from a single cell into a fully fertile adult organism. 
This has been demonstrated through studies and conjectures made with mouse blastomeres, which have been accepted as true for most mammalian blastomeres as well. Studies have analyzed monozygotic twin mouse blastomeres in their two-cell state, and have found that when one of the twin blastomeres is destroyed, a fully fertile adult mouse can still develop. Thus, it can be assumed that since one of the twin cells was totipotent, the destroyed one originally was as well. Relative blastomere size within the embryo is dependent not only on the stage of the cleavage, but also on the regularity of the cleavage amongst t Document 4::: Sperm (: sperm or sperms) is the male reproductive cell, or gamete, in anisogamous forms of sexual reproduction (forms in which there is a larger, female reproductive cell and a smaller, male one). Animals produce motile sperm with a tail known as a flagellum, which are known as spermatozoa, while some red algae and fungi produce non-motile sperm cells, known as spermatia. Flowering plants contain non-motile sperm inside pollen, while some more basal plants like ferns and some gymnosperms have motile sperm. Sperm cells form during the process known as spermatogenesis, which in amniotes (reptiles and mammals) takes place in the seminiferous tubules of the testes. This process involves the production of several successive sperm cell precursors, starting with spermatogonia, which differentiate into spermatocytes. The spermatocytes then undergo meiosis, reducing their chromosome number by half, which produces spermatids. The spermatids then mature and, in animals, construct a tail, or flagellum, which gives rise to the mature, motile sperm cell. This whole process occurs constantly and takes around 3 months from start to finish. Sperm cells cannot divide and have a limited lifespan, but after fusion with egg cells during fertilization, a new organism begins developing, starting as a totipotent zygote. 
The human sperm cell is haploid, so that its 23 chromosomes can join the 23 chromosomes of the female egg to form a diploid cell with 46 paired chromosomes. In mammals, sperm is stored in the epididymis and is released from the penis during ejaculation in a fluid known as semen. The word sperm is derived from the Greek word σπέρμα, sperma, meaning "seed". Evolution It is generally accepted that isogamy is the ancestor to sperm and eggs. However, there are no fossil records for the evolution of sperm and eggs from isogamy, leading there to be a strong emphasis on mathematical models to understand the evolution of sperm. A widespread hypothesis states that sperm evolve The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the name of the cell that results when a sperm nucleus fuses with an egg nucleus? A. t cell B. a filament C. a zygote D. a cytoplasm Answer:
sciq-10134
multiple_choice
What type of rock is a sandstone?
[ "sedimentary rocks", "igneous rocks", "limestone rocks", "landform rocks" ]
A
Relevant Documents: Document 0::: In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects. Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting. Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete. Study Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. 
The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: increases; decreases; stays the same; impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. 
Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: The Singing Stones of Brittany (or Pierres sonnantes du Guildo) are a number of rocks located on the left bank of the river Arguenon opposite the ruined castle of Gilles de Bretagne at Notre-Dame-le-Guildo, near Dinard in the Côtes-d'Armor, France. The stones are round and smooth, shiny black and extremely large, weighing in excess of ten tonnes each and are made of a heavy metallic stone that rings like a bell if struck by another smaller piece of the same stone. There are hundreds of tonnes of them and there is no indication of where they came from or how they got there. They are basalt, but this type of rock is usually associated with volcanic eruptions and there is no evidence or history of volcanic activity there. The stones are unlike any of the surrounding stones either on the ground or on the cliff face overlooking them; legend has it that these stones were spewed up by the giant Gargantua. External links Pierres sonantes, Au Pays de Saint Malo http://carolineld.blogspot.com/2008/08/singing-rocks.html Archaeological sites in Brittany Earth phenomena Tourist attractions in Brittany Individual rocks Document 3::: Mineral tests are several methods which can help identify the mineral type. This is used widely in mineralogy, hydrocarbon exploration and general mapping. There are over 4000 types of minerals known with each one with different sub-classes. Elements make minerals and minerals make rocks so actually testing minerals in the lab and in the field is essential to understand the history of the rock which aids data, zonation, metamorphic history, processes involved and other minerals. The following tests are used on specimen and thin sections through polarizing microscope. 
Color Color of the mineral. This is not mineral specific. For example, quartz can be almost any color, shape and within many rock types. Streak Color of the mineral's powder. This can be found by rubbing the mineral onto concrete. This is more accurate but not always mineral specific. Lustre This is the way light reflects from the mineral's surface. A mineral can be metallic (shiny) or non-metallic (not shiny). Transparency The way light travels through minerals. The mineral can be transparent (clear), translucent (cloudy) or opaque (none). Specific gravity Ratio of the weight of the mineral to the weight of an equal volume of water. Mineral habitat The shape of the crystal and habitat. Magnetism Magnetic or nonmagnetic. Can be tested by using a magnet or a compass. This does not apply to all iron minerals (for example, pyrite). Cleavage Number, behaviour, size and the way cracks fracture in the mineral. UV fluorescence Many minerals glow when put under a UV light. Radioactivity Is the mineral radioactive or non-radioactive? This is measured by a Geiger counter. Taste This is not recommended. Is the mineral salty, bitter or does it have no taste? Bite Test This is not recommended. This involves biting a mineral to see if it's generally soft or hard. This was used in early gold exploration to tell the difference between pyrite (fool's gold, hard) and gold (soft). Hardness The Mohs Hardn
Molybdenite crystallizes in the hexagonal crystal system as the common polytype 2H and also in the trigonal system as the 3R polytype.

Description

Occurrence
Molybdenite occurs in high-temperature hydrothermal ore deposits. Its associated minerals include pyrite, chalcopyrite, quartz, anhydrite, fluorite, and scheelite. Important deposits include the disseminated porphyry molybdenum deposits at Questa, New Mexico and the Henderson and Climax mines in Colorado. Molybdenite also occurs in porphyry copper deposits of Arizona, Utah, and Mexico.
The element rhenium is always present in molybdenite as a substitute for molybdenum, usually in the parts per million (ppm) range, but often up to 1–2%. High rhenium content results in a structural variety detectable by X-ray diffraction techniques. Molybdenite ores are essentially the only source for rhenium. The presence of the radioactive isotope rhenium-187 and its daughter isotope osmium-187 provides a useful geochronologic dating technique.

Features
Molybdenite is extremely soft with a metallic luster, and is superficially almost identical to graphite, to the point where it is not possible to positively distinguish between the two minerals without scientific equipment. It marks paper in much the same way as graphite. Its distinguishing feature from graphite is its higher specific gravity, as well as its tendency to occur in a matrix.

Uses
Molybdenite is an important ore of molybdenum, and is the most common source of the metal. While The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of rock is a sandstone? A. sedimentary rocks B. igneous rocks C. limestone rocks D. landform rocks Answer:
sciq-2685
multiple_choice
What does the human protein cytokine help fight?
[ "toxins", "infections", "parasites", "mutations" ]
B
Relevant Documents:
Document 0::: Cytokines are a broad and loose category of small proteins (~5–25 kDa) important in cell signaling. Due to their size, cytokines cannot cross the lipid bilayer of cells to enter the cytoplasm and therefore typically exert their functions by interacting with specific cytokine receptors on the target cell surface. Cytokines have been shown to be involved in autocrine, paracrine and endocrine signaling as immunomodulating agents.
Cytokines include chemokines, interferons, interleukins, lymphokines, and tumour necrosis factors, but generally not hormones or growth factors (despite some overlap in the terminology). Cytokines are produced by a broad range of cells, including immune cells like macrophages, B lymphocytes, T lymphocytes and mast cells, as well as endothelial cells, fibroblasts, and various stromal cells; a given cytokine may be produced by more than one type of cell. They act through cell surface receptors and are especially important in the immune system; cytokines modulate the balance between humoral and cell-based immune responses, and they regulate the maturation, growth, and responsiveness of particular cell populations. Some cytokines enhance or inhibit the action of other cytokines in complex ways. They are different from hormones, which are also important cell signaling molecules. Hormones circulate in higher concentrations, and tend to be made by specific kinds of cells. Cytokines are important in health and disease, specifically in host immune responses to infection, inflammation, trauma, sepsis, cancer, and reproduction.
The word comes from the ancient Greek language: cyto, from Greek κύτος, kytos, 'cavity, cell' + kines, from Greek κίνησις, kinēsis, 'movement'.

Discovery
Interferon-alpha, an interferon type I, was identified in 1957 as a protein that interfered with viral replication.
The activity of interferon-gamma (the sole member of the interferon type II class) was described in 1965; this was the first identified lymphocyte-derived med 
Document 1::: MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States. Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is suspended with the message: "Please check back with us in 2017".

External links
MicrobeLibrary

Microbiology 
Document 2::: The School of Biological Sciences is a School within the Faculty of Biology, Medicine and Health at The University of Manchester. Biology at the University of Manchester and its precursor institutions has gone through a number of reorganizations (see History below), the latest of which was the change from a Faculty of Life Sciences to the current School.

Academics

Research
The School, though unitary for teaching, is divided into a number of broadly defined sections for research purposes; these sections consist of: Cellular Systems, Disease Systems, Molecular Systems, Neuro Systems and Tissue Systems.
Research in the School is structured into multiple research groups including the following themes: Cell-Matrix Research (part of the Wellcome Trust Centre for Cell-Matrix Research) Cell Organisation and Dynamics Computational and Evolutionary Biology Developmental Biology Environmental Research Eye and Vision Sciences Gene Regulation and Cellular Biotechnology History of Science, Technology and Medicine Immunology and Molecular Microbiology Molecular Cancer Studies Neurosciences (part of the University of Manchester Neurosciences Research Institute) Physiological Systems & Disease Structural and Functional Systems The School hosts a number of research centres, including: the Manchester Centre for Biophysics and Catalysis, the Wellcome Trust Centre for Cell-Matrix Research, the Centre of Excellence in Biopharmaceuticals, the Centre for the History of Science, Technology and Medicine, the Centre for Integrative Mammalian Biology, and the Healing Foundation Centre for Tissue Regeneration. The Manchester Collaborative Centre for Inflammation Research is a joint endeavour with the Faculty of Medical and Human Sciences of Manchester University and industrial partners. Research Assessment Exercise (2008) The faculty entered research into the units of assessment (UOA) for Biological Sciences and Pre-clinical and Human Biological Sciences. In Biological Sciences 20% of outputs Document 3::: Cytokine redundancy is a term in immunology referring to the phenomenon in which, and the ability of, multiple cytokines to exert similar actions. This phenomenon is largely due to multiple cytokines utilizing common receptor subunits and common intracellular cell signalling molecules/pathways. For instance, a pair of redundant cytokines are interleukin 4 and interleukin 13. Cytokine redundancy is associated with the term cytokine pleiotropy, which refers to the ability of cytokines to exert multiple actions. 
Document 4::: This is a list of Immune cells, also known as white blood cells, white cells, leukocytes, or leucocytes. They are cells involved in protecting the body against both infectious disease and foreign invaders. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What does the human protein cytokine help fight? A. toxins B. infections C. parasites D. mutations Answer:
sciq-10600
multiple_choice
Release of spores in a suitable environment will lead to germination and a new generation of what?
[ "gametophytes", "seeds", "filaments", "assemblages" ]
A
Relevant Documents:
Document 0::: Germination is the process by which an organism grows from a seed or spore. The term is applied to the sprouting of a seedling from a seed of an angiosperm or gymnosperm, the growth of a sporeling from a spore, such as the spores of fungi, ferns, bacteria, and the growth of the pollen tube from the pollen grain of a seed plant.

Seed plants
Germination is usually the growth of a plant contained within a seed; it results in the formation of the seedling. It is also the process of reactivation of metabolic machinery of the seed resulting in the emergence of radicle and plumule. The seed of a vascular plant is a small package produced in a fruit or cone after the union of male and female reproductive cells. All fully developed seeds contain an embryo and, in most plant species, some store of food reserves, wrapped in a seed coat. Some plants produce varying numbers of seeds that lack embryos; these are empty seeds which never germinate. Dormant seeds are viable seeds that do not germinate because they require specific internal or environmental stimuli to resume growth. Under proper conditions, the seed begins to germinate and the embryo resumes growth, developing into a seedling.
Disturbance of soil can result in vigorous plant growth by exposing seeds already in the soil to changes in environmental factors where germination may have previously been inhibited by depth of the seeds or soil that was too compact. This is often observed at gravesites after a burial.
Seed germination depends on both internal and external conditions. The most important external factors include right temperature, water, oxygen or air and sometimes light or darkness. Various plants require different variables for successful seed germination. Often this depends on the individual seed variety and is closely linked to the ecological conditions of a plant's natural habitat.
For some seeds, their future germination response is affected by environmental conditions during seed formation; most ofte Document 1::: In biology, a spore is a unit of sexual (in fungi) or asexual reproduction that may be adapted for dispersal and for survival, often for extended periods of time, in unfavourable conditions. Spores form part of the life cycles of many plants, algae, fungi and protozoa. Bacterial spores are not part of a sexual cycle, but are resistant structures used for survival under unfavourable conditions. Myxozoan spores release amoeboid infectious germs ("amoebulae") into their hosts for parasitic infection, but also reproduce within the hosts through the pairing of two nuclei within the plasmodium, which develops from the amoebula. In plants, spores are usually haploid and unicellular and are produced by meiosis in the sporangium of a diploid sporophyte. Under favourable conditions the spore can develop into a new organism using mitotic division, producing a multicellular gametophyte, which eventually goes on to produce gametes. Two gametes fuse to form a zygote, which develops into a new sporophyte. This cycle is known as alternation of generations. The spores of seed plants are produced internally, and the megaspores (formed within the ovules) and the microspores are involved in the formation of more complex structures that form the dispersal units, the seeds and pollen grains. Definition The term spore derives from the ancient Greek word σπορά spora, meaning "seed, sowing", related to σπόρος , "sowing", and σπείρειν , "to sow". In common parlance, the difference between a "spore" and a "gamete" is that a spore will germinate and develop into a sporeling, while a gamete needs to combine with another gamete to form a zygote before developing further. 
The main difference between spores and seeds as dispersal units is that spores are unicellular, the first cell of a gametophyte, while seeds contain within them a developing embryo (the multicellular sporophyte of the next generation), produced by the fusion of the male gamete of the pollen tube with the female gamete for Document 2::: Micropropagation or tissue culture is the practice of rapidly multiplying plant stock material to produce many progeny plants, using modern plant tissue culture methods. Micropropagation is used to multiply a wide variety of plants, such as those that have been genetically modified or bred through conventional plant breeding methods. It is also used to provide a sufficient number of plantlets for planting from seedless plants, plants that do not respond well to vegetative reproduction or where micropropagation is the cheaper means of propagating (e.g. Orchids). Cornell University botanist Frederick Campion Steward discovered and pioneered micropropagation and plant tissue culture in the late 1950s and early 1960s. Steps In short, steps of micropropagation can be divided into four stages: Selection of mother plant Multiplication Rooting and acclimatizing Transfer new plant to soil Selection of mother plant Micropropagation begins with the selection of plant material to be propagated. The plant tissues are removed from an intact plant in a sterile condition. Clean stock materials that are free of viruses and fungi are important in the production of the healthiest plants. Once the plant material is chosen for culture, the collection of explant(s) begins and is dependent on the type of tissue to be used; including stem tips, anthers, petals, pollen and other plant tissues. The explant material is then surface sterilized, usually in multiple courses of bleach and alcohol washes, and finally rinsed in sterilized water. 
This small portion of plant tissue, sometimes only a single cell, is placed on a growth medium, typically containing Macro and micro nutrients, water, sucrose as an energy source and one or more plant growth regulators (plant hormones). Usually the medium is thickened with a gelling agent, such as agar, to create a gel which supports the explant during growth. Some plants are easily grown on simple media, but others require more complicated media f Document 3::: In plant science, the spermosphere is the zone in the soil surrounding a germinating seed. This is a small volume with radius perhaps 1 cm but varying with seed type, the variety of soil microorganisms, the level of soil moisture, and other factors. Within the spermosphere a range of complex interactions take place among the germinating seed, the soil, and the microbiome. Because germination is a brief process, the spermosphere is transient, but the impact of the microbial activity within the spermosphere can have strong and long-lasting effects on the developing plant. Seeds exude various molecules that influence their surrounding microbial communities, either inhibiting or stimulating their growth. The composition of the exudates varies according to the plant type and such properties of the soil as its pH and moisture content. With these biochemical effects, the spermosphere develops both downward—to form the rhizosphere (upon the emergence of the plant's radicle)—and upward to form the laimosphere, which is the soil surrounding the growing plant stem. Document 4::: A sporeling is a young plant or fungus produced by a germinated spore, similar to a seedling derived from a germinated seed. They occur in algae, fungi, lichens, bryophytes and seedless vascular plants. Sporeling development Most spores germinate by first producing a germ-rhizoid or holdfast followed by a germ tube emerging from the opposite end. The germ tube develops into the hypha, protonema or thallus of the gametophyte. 
In seedless vascular plants such as ferns and lycopodiophyta, the term "sporeling" refers to the young sporophyte growing on the gametophyte. These sporelings develop via an embryo stage from a fertilized egg inside an archegonium and depend on the gametophyte for their early stages of growth before becoming independent sporophytes. Young fern sporelings can often be found with the prothallus gametophyte still attached at the base of their fronds. See also Conidium (mitospore) Sporogenesis External links British Pteridological Society: An introduction to ferns (contains a picture of a sporeling fern attached to the prothallus) Plant morphology Plant reproduction Fungal morphology and anatomy The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Release of spores in a suitable environment will lead to germination and a new generation of what? A. gametophytes B. seeds C. filaments D. assemblages Answer:
sciq-3192
multiple_choice
The process of producing mature sperm is called what?
[ "spermatogenesis", "ketoacidosis", "glycogenolysis", "spermatosis" ]
A
Relevant Documents:
Document 0::: Spermatogenesis is the process by which haploid spermatozoa develop from germ cells in the seminiferous tubules of the testis. This process starts with the mitotic division of the stem cells located close to the basement membrane of the tubules. These cells are called spermatogonial stem cells. The mitotic division of these produces two types of cells. Type A cells replenish the stem cells, and type B cells differentiate into primary spermatocytes. The primary spermatocyte divides meiotically (Meiosis I) into two secondary spermatocytes; each secondary spermatocyte divides into two equal haploid spermatids by Meiosis II. The spermatids are transformed into spermatozoa (sperm) by the process of spermiogenesis. These develop into mature spermatozoa, also known as sperm cells. Thus, the primary spermatocyte gives rise to two cells, the secondary spermatocytes, and the two secondary spermatocytes by their subdivision produce four spermatozoa and four haploid cells.
Spermatozoa are the mature male gametes in many sexually reproducing organisms. Thus, spermatogenesis is the male version of gametogenesis, of which the female equivalent is oogenesis. In mammals it occurs in the seminiferous tubules of the male testes in a stepwise fashion. Spermatogenesis is highly dependent upon optimal conditions for the process to occur correctly, and is essential for sexual reproduction. DNA methylation and histone modification have been implicated in the regulation of this process. It starts during puberty and usually continues uninterrupted until death, although a slight decrease can be discerned in the quantity of produced sperm with increase in age (see Male infertility).
Spermatogenesis starts at the base of the seminiferous tubules; cells progressively move deeper into and along the tubules until the mature spermatozoa reach the lumen, where they are deposited.
The division happens asynchronically; if the tube is cut transversally one could observe different Document 1::: Sperm (: sperm or sperms) is the male reproductive cell, or gamete, in anisogamous forms of sexual reproduction (forms in which there is a larger, female reproductive cell and a smaller, male one). Animals produce motile sperm with a tail known as a flagellum, which are known as spermatozoa, while some red algae and fungi produce non-motile sperm cells, known as spermatia. Flowering plants contain non-motile sperm inside pollen, while some more basal plants like ferns and some gymnosperms have motile sperm. Sperm cells form during the process known as spermatogenesis, which in amniotes (reptiles and mammals) takes place in the seminiferous tubules of the testes. This process involves the production of several successive sperm cell precursors, starting with spermatogonia, which differentiate into spermatocytes. The spermatocytes then undergo meiosis, reducing their chromosome number by half, which produces spermatids. The spermatids then mature and, in animals, construct a tail, or flagellum, which gives rise to the mature, motile sperm cell. This whole process occurs constantly and takes around 3 months from start to finish. Sperm cells cannot divide and have a limited lifespan, but after fusion with egg cells during fertilization, a new organism begins developing, starting as a totipotent zygote. The human sperm cell is haploid, so that its 23 chromosomes can join the 23 chromosomes of the female egg to form a diploid cell with 46 paired chromosomes. In mammals, sperm is stored in the epididymis and is released from the penis during ejaculation in a fluid known as semen. The word sperm is derived from the Greek word σπέρμα, sperma, meaning "seed". Evolution It is generally accepted that isogamy is the ancestor to sperm and eggs. 
However, there are no fossil records for the evolution of sperm and eggs from isogamy leading there to be a strong emphasis on mathematical models to understand the evolution of sperm. A widespread hypothesis states that sperm evolve Document 2::: The spermatid is the haploid male gametid that results from division of secondary spermatocytes. As a result of meiosis, each spermatid contains only half of the genetic material present in the original primary spermatocyte. Spermatids are connected by cytoplasmic material and have superfluous cytoplasmic material around their nuclei. When formed, early round spermatids must undergo further maturational events to develop into spermatozoa, a process termed spermiogenesis (also termed spermeteliosis). The spermatids begin to grow a living thread, develop a thickened mid-piece where the mitochondria become localised, and form an acrosome. Spermatid DNA also undergoes packaging, becoming highly condensed. The DNA is packaged firstly with specific nuclear basic proteins, which are subsequently replaced with protamines during spermatid elongation. The resultant tightly packed chromatin is transcriptionally inactive. In 2016 scientists at Nanjing Medical University claimed they had produced cells resembling mouse spermatids artificially from stem cells. They injected these spermatids into mouse eggs and produced pups. DNA repair As postmeiotic germ cells develop to mature sperm they progressively lose the ability to repair DNA damage that may then accumulate and be transmitted to the zygote and ultimately the embryo. In particular, the repair of DNA double-strand breaks by the non-homologous end joining pathway, although present in round spermatids, appears to be lost as they develop into elongated spermatids. Additional images See also List of distinct cell types in the adult human body Document 3::: Reproductive biology includes both sexual and asexual reproduction. 
Reproductive biology includes a wide number of fields: Reproductive systems Endocrinology Sexual development (Puberty) Sexual maturity Reproduction Fertility Human reproductive biology Endocrinology Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands. Reproductive systems Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring. Female reproductive system The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth. These structures include: Ovaries Oviducts Uterus Vagina Mammary Glands Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female. Male reproductive system The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia. Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract. 
Animal Reproductive Biology Animal reproduction oc Document 4::: Spermarche, also known as semenarche, is the time at which a male experiences his first ejaculation. It is considered to be the counterpart of menarche in girls. Depending on upbringing, cultural differences, and prior sexual knowledge, males may have different reactions to spermarche, ranging from fear to excitement. Spermarche is one of the first events in the life of a male leading to sexual maturity. It occurs at the time when the secondary sex characteristics are just beginning to develop. Researchers have had difficulty determining the onset of spermarche because it is reliant on self-reporting. Other methods to determine it have included the examination of urine samples to determine the presence of spermatozoa. The presence of sperm in urine is referred to as spermaturia. Age of occurrence Research on the subject has varied for the reasons stated above, as well as changes in the average age of pubescence, which has been decreasing at an average rate of three months a decade. Research from 2010 indicated that the average age for spermarche in the U.S. was 12–16. In 2015, researchers in China determined that the average age for spermarche in China was 14. Historical data from countries including Nigeria and the United States also suggest 14 as an average age. Context Various studies have examined the circumstances in which first ejaculation occurred. Most commonly this occurred via a nocturnal emission, with a significant number experiencing semenarche via masturbation, which is very common at that stage. Less commonly, the first ejaculation occurred during sexual intercourse with a partner. See also Adrenarche The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The process of producing mature sperm is called what? A. spermatogenesis B. ketoacidosis C. glycogenolysis D. spermatosis Answer:
sciq-10998
multiple_choice
What is the growing root tip protected by?
[ "root hinge", "root flap", "root cap", "tip cap" ]
C
Relevant Documents:
Document 0::: Many pot designs train the roots. One example is a truncated plastic cone in which a seedling is planted. There is a drainage hole at the bottom and the main tap root tends to grow towards this. The pot is designed to air prune the roots, which encourages the plant to grow a denser system of root hairs. The advantage is that, when the plant is planted into its home environment, it has a stronger root base to start with. When polythene bags are used instead, this root tends to go through the bag into the ground and is then broken off when the tree is moved for planting. The other roots are insufficiently developed to cope with the shock caused by this and so the tree's chances of survival are reduced.
The root trainer is mounted in a stand above ground so that, when the tap root emerges, it is dried by the air. This air pruning causes the root inside the pot to thicken with stored carbohydrates that support vigorous root growth when the plant is put in the ground. The other lateral roots of the plant grow to compensate for this, so a stronger root ball forms, which improves the sapling's chances.
When raising multiple seedlings, the root trainers are commonly placed in trays or racks. The size of each trainer depends upon the species but, for broad-leaved trees, the capacity is about a cup. Vertical ribs inside the trainer are positioned to train the roots to grow downwards and so prevent root spiralling.

History
Owing to numerous problems (stability, restricted growth, etc.), the issue of root circling in root pruning containers had to be addressed. Some, even today, promote cutting, slicing, or shaving root systems of plants grown in conventional containers prior to planting to stop circling. However, this is only partially effective and, like mechanical field pruning, it creates open wounds, allowing pathogens an opportunity to attack.
Most understood the root system is extremely important to its overall perfor Document 1::: A root ball is the mass of roots and growing media at the base of a plant such as trees, shrubs, and other perennials and annual plants. The appearance and structure of the root ball will be largely dependent on the method of growing used in the production of the plant. The root ball of a container plant will be different than that of the field-harvested “ball and burlap” tree. The root ball is of particular significance in horticulture when plants are being planted or require repotting as, the quality, size, and preparation of the root ball will heavily determine how well the plant will survive being transplanted and re-establish in its new location. Root ball pruning of container grown plants Most commonly plants are grown in containers where the roots begin to circle and take the shape of their pot. The root balls that have been exposed to this scenario have a very high chance of developing circling or girdling roots that will become problematic and possibly detrimental to the tree or plant's health in the future. To manage this problem, it is best to remove any circling roots where you see them visible. Experts from Clemson University suggest making several slice marks in the root ball from the top to the bottom going 1 to 2 inches deep as this has been found to have positive effects. They have found these cuts cause new regenerative roots to be formed behind the wounds which aid in the plant establishing roots in the new location. The experts from Florida University suggest shaving the entire outside of the root ball when it has taken the shape of the pot (otherwise known as rootbound) before planting it into a larger container or its location. They have several supporting studies and images displaying how shaving the outer layer aids in removing circling roots and allows for better root establishment in the new growing area. 
Root balls of field grown plants For larger caliper trees and shrubs after their root balls are harvested from the ground, they are Document 2::: Crown sprouting is the ability of a plant to regenerate its shoot system after destruction (usually by fire) by activating dormant vegetative structures to produce regrowth from the root crown (the junction between the root and shoot portions of a plant). These dormant structures take the form of lignotubers or basal epicormic buds. Plant species that can accomplish crown sprouting are called crown resprouters (distinguishing them from stem or trunk resprouters) and, like them, are characteristic of fire-prone habitats such as chaparral. In contrast to plant fire survival strategies that decrease the flammability of the plant, or by requiring heat to germinate, crown sprouting allows for the total destruction of the above ground growth. Crown sprouting plants typically have extensive root systems in which they store nutrients allowing them to survive during fires and sprout afterwards. Early researchers suggested that crown sprouting species might lack species genetic diversity; however, research on Gondwanan shrubland suggests that crown sprouting species have similar genetic diversity to seed sprouters. Some genera, such as Arctostaphylos and Ceanothus, have species that are both resprouters and not, both adapted to fire. California Buckeye, Aesculus californica, is an example of a western United States tree which can regenerate from its root crown after a fire event, but can also regenerate by seed. See also Fire ecology Lignotuber Notes Document 3::: The root cap is a type of tissue at the tip of a plant root. It is also called calyptra. Root caps contain statocytes which are involved in gravity perception in plants. If the cap is carefully removed the root will grow randomly. The root cap protects the growing tip in plants. 
It secretes mucilage to ease the movement of the root through soil, and may also be involved in communication with the soil microbiota. The purpose of the root cap is to enable downward growth of the root, with the root cap covering the sensitive tissue in the root. Thanks to the presence of statocytes, the root cap enables geoperception or gravitropism. This allows the plant to grow downwards (with gravity) or upwards (against gravity). The root cap is absent in some parasitic plants and some aquatic plants, in which a sac-like structure called the root pocket may form instead. Document 4::: The quiescent centre is a group of cells, up to 1,000 in number, in the form of a hemisphere, with the flat face toward the root tip of vascular plants. It is a region in the apical meristem of a root where cell division proceeds very slowly or not at all, but the cells are capable of resuming meristematic activity when the tissue surrounding them is damaged. Cells of root apical meristems do not all divide at the same rate. Determinations of relative rates of DNA synthesis show that primary roots of Zea, Vicia and Allium have quiescent centres to the meristems, in which the cells divide rarely or never in the course of normal root growth (Clowes, 1958). Such a quiescent centre includes the cells at the apices of the histogens of both stele and cortex. Its presence can be deduced from the anatomy of the apex in Zea (Clowes, 1958), but not in the other species which lack discrete histogens. History In 1953, during the course of analysing the organization and function of the root apices, Frederick Albert Lionel Clowes (born 10 September 1921), at the School of Botany (now Department of Plant Sciences), University of Oxford, proposed the term ‘cytogenerative centre’ to denote ‘the region of an apical meristem from which all future cells are derived’. This term had been suggested to him by Mr Harold K. 
Pusey, a lecturer in embryology at the Department of Zoology and Comparative Anatomy at the same university. The 1953 paper of Clowes reported results of his experiments on Fagus sylvatica and Vicia faba, in which small oblique and wedge-shaped excisions were made at the tip of the primary root, at the most distal level of the root body, near the boundary with the root cap. The results of these experiments were striking and showed that: the root which grew on following the excision was normal at the undamaged meristem side; the nonexcised meristem portion contributed to the regeneration of the excised portion; the regenerated part of the root had abnormal patterning and The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the growing root tip protected by? A. root hinge B. root flap C. root cap D. tip cap Answer:
sciq-1114
multiple_choice
What's the term for the period during which women's ovaries stop producing eggs?
[ "adolescence", "maturity", "menopause", "puberty" ]
C
Relevant Documents: Document 0::: The follicular phase, also known as the preovulatory phase or proliferative phase, is the phase of the estrous cycle (or, in primates for example, the menstrual cycle) during which follicles in the ovary mature from primary follicle to a fully mature graafian follicle. It ends with ovulation. The main hormones controlling this stage are secretion of gonadotropin-releasing hormones, which are follicle-stimulating hormones and luteinising hormones. They are released by pulsatile secretion. The duration of the follicular phase can differ depending on the length of the menstrual cycle, while the luteal phase is usually stable, does not really change and lasts 14 days. Hormonal events Protein secretion Due to the increase of FSH, the protein inhibin B will be secreted by the granulosa cells. Inhibin B will eventually blunt the secretion of FSH toward the end of the follicular phase. Inhibin B levels will be highest during the LH surge before ovulation and will quickly decrease after. Follicle recruitment Follicle-stimulating hormone (FSH) is secreted by the anterior pituitary gland (Figure 2). FSH secretion begins to rise in the last few days of the previous menstrual cycle, and is the highest and most important during the first week of the follicular phase (Figure 1). The rise in FSH levels recruits five to seven tertiary-stage ovarian follicles (this stage follicle is also known as a Graafian follicle or antral follicle) for entry into the menstrual cycle. These follicles, that have been growing for the better part of a year in a process known as folliculogenesis, compete with each other for dominance. FSH induces the proliferation of granulosa cells in the developing follicles, and the expression of luteinizing hormone (LH) receptors on these granulosa cells (Figure 1). Under the influence of FSH, aromatase and p450 enzymes are activated, causing the granulosa cells to begin to secrete estrogen.
This increased level of estrogen stimulates production of gonadotrop Document 1::: Seed cycling is the rotation of different edible seeds into the diet at different times in the menstrual cycle. Practitioners believe that since some seeds promote estrogen production, and others promote progesterone production, that eating these seeds in the correct parts of the menstrual cycle will balance the hormonal rhythm. There is no scientific evidence to support the belief that cycling the seeds actually regulates the hormonal rhythm, but the practice is probably harmless. Overview Seed cycling advocates note that the menstrual cycle is broken up into four interconnected phases. The first phase is menstruation, followed by the follicular phase, then ovulation, then the luteal phase. Assuming a 28-day cycle, the first 13 days represent the menstrual and follicular phases, in which day 1 is when menstruation begins. During day-13, the seed cycling diet suggests consuming either flax or pumpkin seeds daily to boost estrogen, which helps support these phases and the move towards ovulation. Days 14-28 represent the ovulatory phase and luteal phase, with ovulation around day 14. The seed cycling diet suggests sesame or sunflower seeds to boost progesterone at this time, ground up to increase the surface area for absorption of the essential fatty acids, minerals, and other nutrients. The seed cycling diet relies on the belief that most women have a 28-day cycle. However, only 10-15% of women have 28-30 day cycles; most women's cycles vary, or run longer or shorter. For women with irregular or absent cycle, menopause, or post-menopause, the seed cycling diet suggests starting the seed cycle with any two weeks, and then rotating. However, many women who track their cycles through symptothermal methods (e.g. 
Basal Body Temperature and cervical mucus) are able to adapt the seed cycling protocol to their individual cycle and therefore do not need to rely on the belief that women have 28-day cycles. Research There is currently a lack of solid scientific eviden Document 2::: Although the process is similar in many animals, this article will deal exclusively with human folliculogenesis. In biology, folliculogenesis is the maturation of the ovarian follicle, a densely packed shell of somatic cells that contains an immature oocyte. Folliculogenesis describes the progression of a number of small primordial follicles into large preovulatory follicles that occurs in part during the menstrual cycle. Contrary to male spermatogenesis, which can last indefinitely, folliculogenesis ends when the remaining follicles in the ovaries are incapable of responding to the hormonal cues that previously recruited some follicles to mature. This depletion in follicle supply signals the beginning of menopause. Overview The primary role of the follicle is oocyte support. From the whole pool of follicles a woman is born with, only 0.1% of them will give rise to ovulation, whereas 99.9% will break down (in a process called follicular atresia). From birth, the ovaries of the human female contain a number of immature, primordial follicles. These follicles each contain a similarly immature primary oocyte. At puberty, clutches of follicles begin folliculogenesis, entering a growth pattern that ends in ovulation (the process where the oocyte leaves the follicle) or in atresia (death of the follicle's granulosa cells). During follicular development, primordial follicles undergo a series of critical changes in character, both histologically and hormonally. First they change into primary follicles and later into secondary follicles. The follicles then transition to tertiary, or antral, follicles.
At this stage in development, they become dependent on hormones, particularly FSH which causes a substantial increase in their growth rate. The late tertiary or pre-ovulatory follicle ruptures and discharges the oocyte (that has become a secondary oocyte), ending folliculogenesis. Follicle ‘selection’ is the process by which a single ‘dominant’ follicle is chosen from the recruited Document 3::: The corpus albicans (Latin for "whitening body"; also known as atretic corpus luteum, corpus candicans, or simply as albicans) is the regressed form of the corpus luteum. As the corpus luteum is being broken down by macrophages, fibroblasts lay down type I collagen, forming the corpus albicans. This process is called "luteolysis". The remains of the corpus albicans may persist as a scar on the surface of the ovary. Background During the first few hours after expulsion of the ovum from the follicle, the remaining granulosa and theca interna cells change rapidly into lutein cells. They enlarge in diameter two or more times and become filled with lipid inclusions that give them a yellowish appearance. This process is called luteinization, and the total mass of cells together is called the corpus luteum. A well-developed vascular supply also grows into the corpus luteum. The granulosa cells in the corpus luteum develop extensive intracellular smooth endoplasmic reticula that form large amounts of the female sex hormones progesterone and estrogen (more progesterone than estrogen during the luteal phase). The theca cells form mainly the androgens androstenedione and testosterone. These hormones may then be converted by aromatase in the granulosa cells into estrogens, including estradiol. The corpus luteum normally grows to about 1.5 centimeters in diameter, reaching this stage of development 7 to 8 days after ovulation. 
Then it begins to involute and eventually loses its secretory function and its yellowish, lipid characteristic about 12 days after ovulation, becoming the corpus albicans. In the ensuing weeks, this is replaced by connective tissue and over months is reabsorbed. Document 4::: Menstruation is the shedding of the uterine lining (endometrium). It occurs on a regular basis in uninseminated sexually reproductive-age females of certain mammal species. Although there is some disagreement in definitions between sources, menstruation is generally considered to be limited to primates. Overt menstruation (where there is bleeding from the uterus through the vagina) is found primarily in humans and close relatives such as chimpanzees. It is common in simians (Old World monkeys, New World monkeys, and apes), but completely lacking in strepsirrhine primates and possibly weakly present in tarsiers. Beyond primates, it is known only in bats, the elephant shrew, and the spiny mouse species Acomys cahirinus. Females of other species of placental mammal undergo estrous cycles, in which the endometrium is completely reabsorbed by the animal (covert menstruation) at the end of its reproductive cycle. Many zoologists regard this as different from a "true" menstrual cycle. Female domestic animals used for breeding—for example dogs, pigs, cattle, or horses—are monitored for physical signs of an estrous cycle period, which indicates that the animal is ready for insemination. Estrus and menstruation Females of most mammal species advertise fertility to males with visual behavioral cues, pheromones, or both. This period of advertised fertility is known as oestrus, "estrus" or heat. In species that experience estrus, females are generally only receptive to copulation while they are in heat (dolphins are an exception). In the estrous cycles of most placental mammals, if no fertilization takes place, the uterus reabsorbs the endometrium. 
This breakdown of the endometrium without vaginal discharge is sometimes called covert menstruation. Overt menstruation (where there is blood flow from the vagina) occurs primarily in humans and close evolutionary relatives such as chimpanzees. Some species, such as domestic dogs, experience small amounts of vaginal bleeding The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What's the term for the period during which women's ovaries stop producing eggs? A. adolescence B. maturity C. menopause D. puberty Answer:
sciq-5125
multiple_choice
What's the name for the point reached at a pH of 7?
[ "constriction point", "equivalence point", "acidic point", "analogous point" ]
B
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature increases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory. In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results. Purpose Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible. Equating in item response theory In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. 
It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri Document 2::: Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education. Structure A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior. Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false multiple choice questions. But during the 2000s, educators found that SBAs would be superior. Document 3::: Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices". This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions. Topic outline The exam covers the following 8 units. 
The percentage indicates the portion of the multiple-choice section of the exam focused on each content area: The course is based on and tests six skills, called scientific practices which include: In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions. Exam Students are allowed to use a four-function, scientific, or graphing calculator. The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score. Score distribution Commonly used textbooks Biology, AP Edition by Sylvia Mader (2012, hardcover ) Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, ) Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson ) See also Glossary of biology A.P Bio (TV Show) Document 4::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. 
This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What's the name for the point reached at a pH of 7? A. constriction point B. equivalence point C. acidic point D. analogous point Answer:
sciq-440
multiple_choice
A new species is said to have evolved if separated members of a species evolve genetic differences that prevent what from occurring with the original members?
[ "interbreeding", "re-population", "evolution", "extinction" ]
A
Relevant Documents: Document 0::: Evolution is a collection of short stories that work together to form an episodic science fiction novel by author Stephen Baxter. It follows 565 million years of human evolution, from shrewlike mammals 65 million years in the past to the ultimate fate of humanity (and its descendants, both biological and non-biological) 500 million years in the future. Plot summary The book follows the evolution of mankind as it shapes surviving Purgatorius into tree dwellers, remoulds a group that drifts from Africa to a (then much closer) New World on a raft formed out of debris, and confronting others with a terrible dead end as ice clamps down on Antarctica. The stream of DNA runs on elsewhere, where ape-like creatures in North Africa are forced out of their diminishing forests to come across grasslands where their distant descendants will later run joyously. At one point, hominids become sapient, and go on to develop technology, including an evolving universal constructor machine that goes to Mars and multiplies, and in an act of global ecophagy consumes Mars by converting the planet into a mass of machinery that leaves the Solar system in search of new planets to assimilate. Human extinction (or the extinction of human culture) also occurs in the book, as well as the end of planet Earth and the rebirth of life on another planet. (The extinction-level event that causes the human extinction is, indirectly, an eruption of the Rabaul caldera, coupled with various actions of humans themselves, some of which are only vaguely referred to, but implied to be a form of genetic engineering which removed the ability to reproduce with non-engineered humans.) Also to be found in Evolution are ponderous Romans, sapient dinosaurs, the last of the wild Neanderthals, a primate who witnesses the extinction of the dinosaurs, symbiotic primate-tree relationships, mole people, and primates who live on a Mars-like Earth.
The final chapter witnesses the final fate of the last primate and the des Document 1::: The scientific study of speciation — how species evolve to become new species — began around the time of Charles Darwin in the middle of the 19th century. Many naturalists at the time recognized the relationship between biogeography (the way species are distributed) and the evolution of species. The 20th century saw the growth of the field of speciation, with major contributors such as Ernst Mayr researching and documenting species' geographic patterns and relationships. The field grew in prominence with the modern evolutionary synthesis in the early part of that century. Since then, research on speciation has expanded immensely. The language of speciation has grown more complex. Debate over classification schemes on the mechanisms of speciation and reproductive isolation continue. The 21st century has seen a resurgence in the study of speciation, with new techniques such as molecular phylogenetics and systematics. Speciation has largely been divided into discrete modes that correspond to rates of gene flow between two incipient populations. Current research has driven the development of alternative schemes and the discovery of new processes of speciation. Early history Charles Darwin introduced the idea that species could evolve and split into separate lineages, referring to it as specification in his 1859 book On the Origin of Species. It was not until 1906 that the modern term speciation was coined by the biologist Orator F. Cook. Darwin, in his 1859 publication, focused primarily on the changes that can occur within a species, and less on how species may divide into two. It is almost universally accepted that Darwin's book did not directly address its title. Darwin instead saw speciation as occurring by species entering new ecological niches. 
Darwin's views Controversy exists as to whether Charles Darwin recognized a true geographical-based model of speciation in his publication On the Origin of Species. In chapter 11, "Geographical Distribution", Darwin d Document 2::: Quantum evolution is a component of George Gaylord Simpson's multi-tempoed theory of evolution proposed to explain the rapid emergence of higher taxonomic groups in the fossil record. According to Simpson, evolutionary rates differ from group to group and even among closely related lineages. These different rates of evolutionary change were designated by Simpson as bradytelic (slow tempo), horotelic (medium tempo), and tachytelic (rapid tempo). Quantum evolution differed from these styles of change in that it involved a drastic shift in the adaptive zones of certain classes of animals. The word "quantum" therefore refers to an "all-or-none reaction", where transitional forms are particularly unstable, and thereby perish rapidly and completely. Although quantum evolution may happen at any taxonomic level, it plays a much larger role in "the origin taxonomic units of relatively high rank, such as families, orders, and classes." Quantum evolution in plants Usage of the phrase "quantum evolution" in plants was apparently first articulated by Verne Grant in 1963 (pp. 458-459). He cited an earlier 1958 paper by Harlan Lewis and Peter H. Raven, wherein Grant asserted that Lewis and Raven gave a "parallel" definition of quantum evolution as defined by Simpson. Lewis and Raven postulated that species in the Genus Clarkia had a mode of speciation that resulted ...as a consequence of a rapid reorganization of the chromosomes due to the presence, at some time, of a genotype conducive to extensive chromosome breakage. A similar mode of origin by rapid reorganization of the chromosomes is suggested for the derivation of other species of Clarkia. 
In all of these examples the derivative populations grow adjacent to the parental species, which they resemble closely in morphology, but from which they are reproductively isolated because of multiple structural differences in their chromosomes. The spatial relationship of each parental species and its derivative suggests that di Document 3::: Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology. The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis. Subfields Evolution is the central unifying concept in biology. Biology can be divided into various ways. One way is by the level of biological organization, from molecular to cell, organism to population. 
Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution Document 4::: In biology, evolution is the process of change in all forms of life over generations, and evolutionary biology is the study of how evolution occurs. Biological populations evolve through genetic changes that correspond to changes in the organisms' observable traits. Genetic changes include mutations, which are caused by damage or replication errors in organisms' DNA. As the genetic variation of a population drifts randomly over generations, natural selection gradually leads traits to become more or less common based on the relative reproductive success of organisms with those traits. The age of the Earth is about 4.5 billion years. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago. Evolution does not attempt to explain the origin of life (covered instead by abiogenesis), but it does explain how early lifeforms evolved into the complex ecosystem that we see today. Based on the similarities between all present-day organisms, all life on Earth is assumed to have originated through common descent from a last universal ancestor from which all known species have diverged through the process of evolution. All individuals have hereditary material in the form of genes received from their parents, which they pass on to any offspring. Among offspring there are variations of genes due to the introduction of new genes via random changes called mutations or via reshuffling of existing genes during sexual reproduction. The offspring differs from the parent in minor random ways. If those differences are helpful, the offspring is more likely to survive and reproduce. 
This means that more offspring in the next generation will have that helpful difference and individuals will not have equal chances of reproductive success. In this way, traits that result in organisms being better adapted to their living conditions become more common in descendant populations. These differences accumulate resulting in changes within the population. This proce The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. A new species is said to have evolved if separated members of a species evolve genetic differences that prevent what from occurring with the original members? A. interbreeding B. re-population C. evolution D. extinction Answer:
sciq-8325
multiple_choice
What is the industrial chemical produced in greatest quantity worldwide?
[ "liquid acid", "gaseous acid", "boric acid", "sulfuric acid" ]
D
Relevant Documents: Document 0::: Biochemical engineering, also known as bioprocess engineering, is a field of study with roots stemming from chemical engineering and biological engineering. It mainly deals with the design, construction, and advancement of unit processes that involve biological organisms (such as fermentation) or organic molecules (often enzymes) and has various applications in areas of interest such as biofuels, food, pharmaceuticals, biotechnology, and water treatment processes. The role of a biochemical engineer is to take findings developed by biologists and chemists in a laboratory and translate that to a large-scale manufacturing process. History For hundreds of years, humans have made use of the chemical reactions of biological organisms in order to create goods. In the mid-1800s, Louis Pasteur was one of the first people to look into the role of these organisms when he researched fermentation. His work also contributed to the use of pasteurization, which is still used to this day. By the early 1900s, the use of microorganisms had expanded, and was used to make industrial products. Up to this point, biochemical engineering hadn't developed as a field yet. It wasn't until 1928 when Alexander Fleming discovered penicillin that the field of biochemical engineering was established. After this discovery, samples were gathered from around the world in order to continue research into the characteristics of microbes from places such as soils, gardens, forests, rivers, and streams. Today, biochemical engineers can be found working in a variety of industries, from food to pharmaceuticals. This is due to the increasing need for efficiency and production which requires knowledge of how biological systems and chemical reactions interact with each other and how they can be used to meet these needs.
Education Biochemical engineering is not a major offered by most universities and is instead an area of interest under the chemical engineering major in most cases. The following universiti Document 1::: Bioproducts engineering or bioprocess engineering refers to engineering of bio-products from renewable bioresources. This pertains to the design and development of processes and technologies for the sustainable manufacture of bioproducts (materials, chemicals and energy) from renewable biological resources. Bioproducts engineers harness the molecular building blocks of renewable resources to design, develop and manufacture environmentally friendly industrial and consumer products. From biofuels, renewable energy, and bioplastics to paper products and "green" building materials such as bio-based composites, Bioproducts engineers are developing sustainable solutions to meet the world's growing materials and energy demand. Conventional bioproducts and emerging bioproducts are two broad categories used to categorize bioproducts. Examples of conventional bio-based products include building materials, pulp and paper, and forest products. Examples of emerging bioproducts or biobased products include biofuels, bioenergy, starch-based and cellulose-based ethanol, bio-based adhesives, biochemicals, biodegradable plastics, etc. Bioproducts Engineers play a major role in the design and development of "green" products including biofuels, bioenergy, biodegradable plastics, biocomposites, building materials, paper and chemicals. Bioproducts engineers also develop energy efficient, environmentally friendly manufacturing processes for these products as well as effective end-use applications. Bioproducts engineers play a critical role in a sustainable 21st century bio-economy by using renewable resources to design, develop, and manufacture the products we use every day. 
The career outlook for bioproducts engineers is very bright with employment opportunities in a broad range of industries, including pulp and paper, alternative energy, renewable plastics, and other fiber, forest products, building materials and chemical-based industries. Commonly referred to as bioprocess engineerin Document 2::: Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered in both undergraduate as well postgraduate with specializations. Academic courses Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. 
Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism. Example universities with CSE majors and departments APJ Abdul Kalam Technological University American International University-B Document 3::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. 
These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 4::: The European Algae Biomass Association (EABA), established on 2 June 2009, is the European association representing both research and industry in the field of algae technologies. EABA was founded during its inaugural conference on 1–2 June 2009 at Villa La Pietra in Florence. The association is headquartered in Florence, Italy. History The first EABA's President, Prof. Dr. Mario Tredici, served a 2-year term since his election on 2 June 2009. The EABA Vice-presidents were Mr. Claudio Rochietta, (Oxem, Italy), Prof. Patrick Sorgeloos (University of Ghent, Belgium) and Mr. Marc Van Aken (SBAE Industries, Belgium). The EABA Executive Director was Mr. Raffaello Garofalo. EABA had 58 founding members and the EABA reached 79 members in 2011. The last election occurred on 3 December 2018 in Amsterdam. The EABA's President is Mr. Jean-Paul Cadoret (Algama / France). The EABA Vice-presidents are Prof. Dr. Sammy Boussiba (Ben-Gurion University of the Negev / Israel), Prof. Dr. Gabriel Acien (University of Almeria / Spain) and Dr. Alexandra Mosch (Germany). The EABA General Manager is Dr. Vítor Verdelho (A4F AlgaFuel, S.A. / Portugal) and Prof. Dr. Mario Tredici (University of Florence / Italy) is elected as Honorary President. 
Cooperation with other organisations ART Fuels Forum European Society of Biochemical Engineering Sciences Algae Biomass Organization The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the industrial chemical produced in greatest quantity worldwide? A. liquid acid B. gaseous acid C. boric acid D. sulfuric acid Answer:
sciq-7831
multiple_choice
Light retains its original color under water because what remains the same when light is refracted?
[ "frequency", "sound", "density", "wave length" ]
A
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, or (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
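The adiabatic-expansion example quoted in the passage above can be settled with a one-line first-law argument. This is a sketch assuming an ideal gas with constant heat capacity; the symbols are the usual textbook ones, not notation taken from the passage:

```latex
dU = \delta Q - p\,dV, \qquad \delta Q = 0 \ \text{(adiabatic)}
\;\Rightarrow\; n C_V\,dT = -p\,dV .
```

During expansion $dV > 0$ and $p > 0$, so $dT < 0$: the temperature decreases, which is option (b).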
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: In visual physiology, adaptation is the ability of the retina of the eye to adjust to various levels of light. Natural night vision, or scotopic vision, is the ability to see under low-light conditions. In humans, rod cells are exclusively responsible for night vision as cone cells are only able to function at higher illumination levels. Night vision is of lower quality than day vision because it is limited in resolution and colors cannot be discerned; only shades of gray are seen. In order for humans to transition from day to night vision they must undergo a dark adaptation period of up to two hours in which each eye adjusts from a high to a low luminescence "setting", increasing sensitivity hugely, by many orders of magnitude. This adaptation period is different between rod and cone cells and results from the regeneration of photopigments to increase retinal sensitivity. Light adaptation, in contrast, works very quickly, within seconds. Efficiency The human eye can function from very dark to very bright levels of light; its sensing capabilities reach across nine orders of magnitude. This means that the brightest and the darkest light signal that the eye can sense are a factor of roughly 1,000,000,000 apart. However, in any given moment of time, the eye can only sense a contrast ratio of 1,000. What enables the wider reach is that the eye adapts its definition of what is black. The eye takes approximately 20–30 minutes to fully adapt from bright sunlight to complete darkness and becomes 10,000 to 1,000,000 times more sensitive than at full daylight. In this process, the eye's perception of color changes as well (this is called the Purkinje effect). However, it takes approximately five minutes for the eye to adapt from darkness to bright sunlight. 
This is due to cones obtaining more sensitivity when first entering the dark for the first five minutes but the rods taking over after five or more minutes. Cone cells are able to regain maximum retinal sensitivity in 9 Document 2::: Colored music notation is a technique used to facilitate enhanced learning in young music students by adding visual color to written musical notation. It is based upon the concept that color can affect the observer in various ways, and combines this with standard learning of basic notation. Basis Viewing color has been widely shown to change an individual's emotional state and stimulate neurons. The Lüscher color test observes from experiments that when individuals are required to contemplate pure red for varying lengths of time, [the experiments] have shown that this color decidedly has a stimulating effect on the nervous system; blood pressure increases, and respiration rate and heart rate both increase. Pure blue, on the other hand, has the reverse effect; observers experience a decline in blood pressure, heart rate, and breathing. Given these findings, it has been suggested that the influence of colored musical notation would be similar. Music education In music education, color is typically used in method books to highlight new material. Stimuli received through several senses excite more neurons in several localized areas of the cortex, thereby reinforcing the learning process and improving retention. This information has been proven by other researchers; Chute (1978) reported that "elementary students who viewed a colored version of an instructional film scored significantly higher on both immediate and delayed tests than did students who viewed a monochrome version". Color studies Effect on achievement A researcher in this field, George L. Rogers is the Director of Music Education at Westfield State College. 
He is also the author of 25 articles in publications that include the Music Educators Journal, The Instrumentalist, and the Journal of Research in Music Education. In 1991, George L. Rogers did a study that researched the effect of color-coded notation on music achievement of elementary instrumental students. Rogers states that the color-co Document 3::: On the coloured light of the binary stars and some other stars of the heavens or in the original German is a treatise by Christian Doppler (1842) in which he postulated his principle that the observed frequency changes if either the source or the observer is moving, which later has been coined the Doppler effect. The original German text can be found in wikisource. The following annotated summary serves as a companion to that original. Title The title "" (On the coloured light of the binary stars and some other stars of the heavens - Attempt at a general theory including Bradley's theorem as an integral part) specifies the purpose: describe the hypothesis of the Doppler effect, use it to explain the colours of binary stars, and establish a relation with Bradley's stellar aberration. Content § 1 In which Doppler reminds the readers that light is a wave, and that there is debate as to whether it is a transverse wave, with aether particles oscillating perpendicular to the propagation direction. Proponents claim this is necessary to explain polarised light, whereas opponents object to implications for the aether. Doppler doesn't choose sides, although the issue returns in § 6. § 2 Doppler observes that colour is a manifestation of the frequency of the light wave, in the eye of the beholder. He describes his principle that a frequency shift occurs when the source or the observer moves. A ship meets waves at a faster rate when sailing against the waves than when sailing along with them. The same goes for sound and light. 
§ 3 Doppler derives his equations for the frequency shift, in two cases: § 4 Doppler provides imaginary examples of large and small frequency shifts for sound: § 5 Doppler provides imaginary examples of large and small frequency shifts for light from stars. Velocities are expressed in Meilen/s, and the light speed has a rounded value of 42000 Meilen/s. Doppler assumes that 458 THz (extreme red) and 727 THz (extreme violet) are the borders of the v Document 4::: The optical properties of all liquid and solid materials change as a function of the wavelength of light used to measure them. This change as a function of wavelength is called the dispersion of the optical properties. The graph created by plotting the optical property of interest by the wavelength at which it is measured is called a dispersion curve. The dispersion staining is an analytical technique used in light microscopy that takes advantage of the differences in the dispersion curve of the refractive index of an unknown material relative to a standard material with a known dispersion curve to identify or characterize that unknown material. These differences become manifest as a color when the two dispersion curves intersect for some visible wavelength. This is an optical staining technique and requires no stains or dyes to produce the color. Its primary use today is in the confirmation of the presence of asbestos in construction materials but it has many other applications. Types There are five basic optical configurations of the microscope used for dispersion staining. Each configuration has its advantages and disadvantages. The first two of these, Becke` line dispersion staining and oblique dispersion staining, were first reported in the United States by F. E. Wright in 1911 based on work done by O. Maschke in Germany during the 1870s. 
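The "two cases" of § 3 correspond to the familiar classical Doppler formulas, sketched here in modern notation rather than Doppler's original symbols ($f$ is frequency, $c$ the wave speed, with signs chosen for approach):

```latex
f_{\text{obs}} = f_{\text{src}}\,\frac{c + v_{\text{obs}}}{c}
\quad \text{(moving observer, stationary source)},
\qquad
f_{\text{obs}} = f_{\text{src}}\,\frac{c}{c - v_{\text{src}}}
\quad \text{(moving source, stationary observer)}.
```

For $v \ll c$ both reduce to the same fractional shift $\Delta f / f \approx v/c$, which is the regime relevant to Doppler's stellar examples in § 5.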
The five dispersion staining configurations are: Colored Becke` Line Dispersion Staining (Maschke, 1872; Wright, 1911) Oblique Illumination Dispersion Staining (Wright, 1911) Darkfield Dispersion Staining (Crossmon, 1948) Phase Contrast Dispersion Staining (Crossmon, 1949) Objective Stop Dispersion Staining (Cherkasov, 1958) All of these configurations have the same requirements for the preparation of the sample to be examined. First, the substance of interest must be in intimate contact with the known reference material. In other words, the clean solid must be mounted in a reference liquid, one mineral phas The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Light retains its original color under water because what remains the same when light is refracted? A. frequency B. sound C. density D. wave length Answer:
sciq-4063
multiple_choice
What are the tiny packets of energy the sun gives off called?
[ "neutrons", "electrons", "photons", "ions" ]
C
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, or (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Cosmic dustalso called extraterrestrial dust, space dust, or star dustis dust that occurs in outer space or has fallen onto Earth. Most cosmic dust particles measure between a few molecules and , such as micrometeoroids. Larger particles are called meteoroids. Cosmic dust can be further distinguished by its astronomical location: intergalactic dust, interstellar dust, interplanetary dust (as in the zodiacal cloud), and circumplanetary dust (as in a planetary ring). There are several methods to obtain space dust measurement. In the Solar System, interplanetary dust causes the zodiacal light. Solar System dust includes comet dust, planetary dust (like from Mars), asteroidal dust, dust from the Kuiper belt, and interstellar dust passing through the Solar System. Thousands of tons of cosmic dust are estimated to reach Earth's surface every year, with most grains having a mass between 10−16 kg (0.1 pg) and 10−4 kg (0.1 g). The density of the dust cloud through which the Earth is traveling is approximately 10−6 dust grains/m3. Cosmic dust contains some complex organic compounds (amorphous organic solids with a mixed aromatic–aliphatic structure) that could be created naturally, and rapidly, by stars. A smaller fraction of dust in space is "stardust" consisting of larger refractory minerals that condensed as matter left by stars. Interstellar dust particles were collected by the Stardust spacecraft and samples were returned to Earth in 2006. Study and importance Cosmic dust was once solely an annoyance to astronomers, as it obscures objects they wished to observe. When infrared astronomy began, the dust particles were observed to be significant and vital components of astrophysical processes. Their analysis can reveal information about phenomena like the formation of the Solar System. 
For example, cosmic dust can drive the mass loss when a star is nearing the end of its life, play a part in the early stages of star formation, and form planets. In the Solar System, Document 2::: Cosmic rays or astroparticles are high-energy particles or clusters of particles (primarily represented by protons or atomic nuclei) that move through space at nearly the speed of light. They originate from the Sun, from outside of the Solar System in our own galaxy, and from distant galaxies. Upon impact with Earth's atmosphere, cosmic rays produce showers of secondary particles, some of which reach the surface, although the bulk is deflected off into space by the magnetosphere or the heliosphere. Cosmic rays were discovered by Victor Hess in 1912 in balloon experiments, for which he was awarded the 1936 Nobel Prize in Physics. Direct measurement of cosmic rays, especially at lower energies, has been possible since the launch of the first satellites in the late 1950s. Particle detectors similar to those used in nuclear and high-energy physics are used on satellites and space probes for research into cosmic rays. Data from the Fermi Space Telescope (2013) have been interpreted as evidence that a significant fraction of primary cosmic rays originate from the supernova explosions of stars. Based on observations of neutrinos and gamma rays from blazar TXS 0506+056 in 2018, active galactic nuclei also appear to produce cosmic rays. Etymology The term ray (as in optical ray) seems to have arisen from an initial belief, due to their penetrating power, that cosmic rays were mostly electromagnetic radiation. Nevertheless, following wider recognition of cosmic rays as being various high-energy particles with intrinsic mass, the term "rays" was still consistent with then known particles such as cathode rays, canal rays, alpha rays and beta rays. 
Meanwhile "cosmic" ray photons, which are quanta of electromagnetic radiation (and so have no intrinsic mass) are known by their common names, such as gamma rays or X-rays, depending on their photon energy. Composition Of primary cosmic rays, which originate outside of Earth's atmosphere, about 99% are the bare nuclei of common at Document 3::: The interplanetary medium (IPM) or interplanetary space consists of the mass and energy which fills the Solar System, and through which all the larger Solar System bodies, such as planets, dwarf planets, asteroids, and comets, move. The IPM stops at the heliopause, outside of which the interstellar medium begins. Before 1950, interplanetary space was widely considered to either be an empty vacuum, or consisting of "aether". Composition and physical characteristics The interplanetary medium includes interplanetary dust, cosmic rays, and hot plasma from the solar wind. The density of the interplanetary medium is very low, decreasing in inverse proportion to the square of the distance from the Sun. It is variable, and may be affected by magnetic fields and events such as coronal mass ejections. Typical particle densities in the interplanetary medium are about 5-40 particles/cm, but exhibit substantial variation. In the vicinity of the Earth, it contains about 5 particles/cm, but values as high as 100 particles/cm have been observed. The temperature of the interplanetary medium varies through the solar system. Joseph Fourier estimated that interplanetary medium must have temperatures comparable to those observed at Earth's poles, but on faulty grounds: lacking modern estimates of atmospheric heat transport, he saw no other means to explain the relative consistency of earth's climate. A very hot interplanetary medium remained a minor position among geophysicists as late as 1959, when Chapman proposed a temperature on the order of 10000 K, but observation in Low Earth orbit of the exosphere soon contradicted his position. 
In fact, both Fourier and Chapman's final predictions were correct: because the interplanetary medium is so rarefied, it does not exhibit thermodynamic equilibrium. Instead, different components have different temperatures. The solar wind exhibits temperatures consistent with Chapman's estimate in cislunar space, and dust particles near Earth's Document 4::: Solar radio emission refers to radio waves that are naturally produced by the Sun, primarily from the lower and upper layers of the atmosphere called the chromosphere and corona, respectively. The Sun produces radio emissions through four known mechanisms, each of which operates primarily by converting the energy of moving electrons into electromagnetic radiation. The four emission mechanisms are thermal bremsstrahlung (braking) emission, gyromagnetic emission, plasma emission, and electron-cyclotron maser emission. The first two are incoherent mechanisms, which means that they are the summation of radiation generated independently by many individual particles. These mechanisms are primarily responsible for the persistent "background" emissions that slowly vary as structures in the atmosphere evolve. The latter two processes are coherent mechanisms, which refers to special cases where radiation is efficiently produced at a particular set of frequencies. Coherent mechanisms can produce much larger brightness temperatures (intensities) and are primarily responsible for the intense spikes of radiation called solar radio bursts, which are byproducts of the same processes that lead to other forms of solar activity like solar flares and coronal mass ejections. History and observations Radio emission from the Sun was first reported in the scientific literature by Grote Reber in 1944. Those were observations of 160 MHz frequency (2 meters wavelength) microwave emission emanating from the chromosphere. 
However, the earliest known observation was in 1942 during World War II by British radar operators who detected an intense low-frequency solar radio burst; that information was kept secret as potentially useful in evading enemy radar, but was later described in a scientific journal after the war. One of the most significant discoveries from early solar radio astronomers such as Joseph Pawsey was that the Sun produces much more radio emission than expected from standard blac The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are the tiny packets of energy the sun gives off called? A. neutrons B. electrons C. photons D. ions Answer:
sciq-2410
multiple_choice
Where is food stored?
[ "the stomach", "The esophagus", "The liver", "The gall bladder" ]
A
Relevant Documents: Document 0::: The National Centre for Food Manufacturing (NCFM) is the food science campus of the University of Lincoln, situated on Park Road at Holbeach in the south of the county of Lincolnshire. It offers part-time apprenticeships and distance learning degrees for individuals working in the food industry. Apprenticeships, Degree and Foundation Courses The National Centre for Food Manufacturing offers part-time distance learning options to achieve Foundation and BSc (Honours) Food Manufacture degrees and higher degrees through research, together with all levels of apprenticeships including Higher Apprenticeships (which include a Foundation Degree). The Foundation and Undergraduate degrees cover areas including Food and Drink Operations and Manufacturing Management, Food Science and Technology, and Food Supply Chain Management. The Centre also offers part-time Masters and PhDs, often progressed by food sector employees and focused on specific Food Manufacturing Industry challenges. The Higher and Degree Apprenticeships include the CMDA (Chartered Manager Degree Apprenticeship), Departmental Manager, Laboratory Scientist and Professional Technical degrees. The Centre provides support to apprentices for Functional Skills development in maths and English as required by their relevant apprenticeship standard and offers employers a complete skills development programme for their employees. NCFM has apprenticeship partnerships with 250 UK food businesses including Addo Food Group, Bakkavor, Bidfood, Dalehead Foods, Summers Butchery Services, Greencore Group, Tulip, Dovecote Park, Fresttime, Finlays, JDM Food Group, Kerry, Nestle, Worldwide Fruit, University Academy Holbeach, Produce World Group, J.O. Sims Ltd, Greenvale AP, FreshLinc, Ripe Now and Lincolnshire Field Products. Research NCFM advances food manufacturing and related food supply chain research initiatives via a wide range of industry and academic partnerships.
The areas of core research include Robotics and Automatio Document 1::: Food science is the basic science and applied science of food; its scope starts at overlap with agricultural science and nutritional science and leads through the scientific aspects of food safety and food processing, informing the development of food technology. Food science brings together multiple scientific disciplines. It incorporates concepts from fields such as chemistry, physics, physiology, microbiology, and biochemistry. Food technology incorporates concepts from chemical engineering, for example. Activities of food scientists include the development of new food products, design of processes to produce these foods, choice of packaging materials, shelf-life studies, sensory evaluation of products using survey panels or potential consumers, as well as microbiological and chemical testing. Food scientists may study more fundamental phenomena that are directly linked to the production of food products and its properties. Definition The Institute of Food Technologists defines food science as "the discipline in which the engineering, biological, and physical sciences are used to study the nature of foods, the causes of deterioration, the principles underlying food processing, and the improvement of foods for the consuming public". The textbook Food Science defines food science in simpler terms as "the application of basic sciences and engineering to study the physical, chemical, and biochemical nature of foods and the principles of food processing". Disciplines Some of the subdisciplines of food science are described below. Food chemistry Food chemistry is the study of chemical processes and interactions of all biological and non-biological components of foods. The biological substances include such items as meat, poultry, lettuce, beer, and milk. 
It is similar to biochemistry in its main components such as carbohydrates, lipids, and protein, but it also includes areas such as water, vitamins, minerals, enzymes, food additives, flavors, and colors. This Document 2::: Food Valley is a region in the Netherlands where international food companies, research institutes, and Wageningen University and Research Centre are concentrated. The Food Valley area is the home of a large number of food multinationals and within the Food Valley about 15,000 professionals are active in food related sciences and technological development. Far more are involved in the manufacturing of food products. Food Valley, with the city of Wageningen as its center, is intended to form a dynamic heart of knowledge for the international food industry. Within this region, Foodvalley NL is intended to create conditions so that food manufacturers and knowledge institutes can work together in developing new and innovating food concepts. Current research about the Food Valley The Food Valley as a region has been the subject of study by several human geographers. Even before the Food Valley was established as an organisation in 2004 and as a region in 2011 Frank Kraak and Frits Oevering made a SWOT analysis of the region using an Evolutionary economics framework and compared it with similar regions in Canada, Denmark, Italy and Sweden. A similar study was done by Floris Wieberdink. The study utilised Geomarketing concepts in the WERV, the predecessor of the Regio Food Valley. Geijer and Van der Velden studied the economic development of the Regio Food Valley using statistical data. Discussion The research performed in the Food Valley has generated some discussion about the influence of culture on economic growth. Wieberdink argued that culture and habitat are not spatially bounded, but historically. More recently a study about the Food Valley argued that culture and habitat are in fact spatially bounded. 
Both studies, however, recommend the Regio Food Valley to promote its distinct culture. See also Document 3::: Relatively speaking, the brain consumes an immense amount of energy in comparison to the rest of the body. The mechanisms involved in the transfer of energy from foods to neurons are likely to be fundamental to the control of brain function. Human bodily processes, including the brain, all require both macronutrients, as well as micronutrients. Insufficient intake of selected vitamins, or certain metabolic disorders, may affect cognitive processes by disrupting the nutrient-dependent processes within the body that are associated with the management of energy in neurons, which can subsequently affect synaptic plasticity, or the ability to encode new memories. Macronutrients The human brain requires nutrients obtained from the diet to develop and sustain its physical structure and cognitive functions. Additionally, the brain requires caloric energy predominately derived from the primary macronutrients to operate. The three primary macronutrients include carbohydrates, proteins, and fats. Each macronutrient can impact cognition through multiple mechanisms, including glucose and insulin metabolism, neurotransmitter actions, oxidative stress and inflammation, and the gut-brain axis. Inadequate macronutrient consumption or proportion could impair optimal cognitive functioning and have long-term health implications. Carbohydrates Through digestion, dietary carbohydrates are broken down and converted into glucose, which is the sole energy source for the brain. Optimal brain function relies on adequate carbohydrate consumption, as carbohydrates provide the quickest source of glucose for the brain. Glucose deficiencies such as hypoglycaemia reduce available energy for the brain and impair all cognitive processes and performance. 
Additionally, situations with high cognitive demand, such as learning a new task, increase brain glucose utilization, depleting blood glucose stores and initiating the need for supplementation. Complex carbohydrates, especially those with high d Document 4::: Food and biological process engineering is a discipline concerned with applying principles of engineering to the fields of food production and distribution and biology. It is a broad field, with workers fulfilling a variety of roles ranging from design of food processing equipment to genetic modification of organisms. In some respects it is a combined field, drawing from the disciplines of food science and biological engineering to improve the earth's food supply. Creating, processing, and storing food to support the world's population requires extensive interdisciplinary knowledge. Notably, there are many biological engineering processes within food engineering to manipulate the multitude of organisms involved in our complex food chain. Food safety in particular requires biological study to understand the microorganisms involved and how they affect humans. However, other aspects of food engineering, such as food storage and processing, also require extensive biological knowledge of both the food and the microorganisms that inhabit it. This food microbiology and biology knowledge becomes biological engineering when systems and processes are created to maintain desirable food properties and microorganisms while providing mechanisms for eliminating the unfavorable or dangerous ones. Concepts Many different concepts are involved in the field of food and biological process engineering. Below are listed several major ones. Food science The science behind food and food production involves studying how food behaves and how it can be improved. Researchers analyze longevity and composition (i.e., ingredients, vitamins, minerals, etc.) of foods, as well as how to ensure food safety. 
Genetic engineering Modern food and biological process engineering relies heavily on applications of genetic manipulation. By understanding plants and animals on the molecular level, scientists are able to engineer them with specific goals in mind. Among the most notable applications of The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Where is food stored? A. the stomach B. The esophagus C. The liver D. The gall bladder Answer:
sciq-3414
multiple_choice
The complement system is a series of proteins constitutively found in the what?
[ "organs", "platelets", "nucleus", "blood plasma" ]
D
Relevant Documents: Document 0::: The complement system, also known as complement cascade, is a part of the immune system that enhances (complements) the ability of antibodies and phagocytic cells to clear microbes and damaged cells from an organism, promote inflammation, and attack the pathogen's cell membrane. It is part of the innate immune system, which is not adaptable and does not change during an individual's lifetime. The complement system can, however, be recruited and brought into action by antibodies generated by the adaptive immune system. The complement system consists of a number of small proteins that are synthesized by the liver, and circulate in the blood as inactive precursors. When stimulated by one of several triggers, proteases in the system cleave specific proteins to release cytokines and initiate an amplifying cascade of further cleavages. The end result of this complement activation or complement fixation cascade is stimulation of phagocytes to clear foreign and damaged material, inflammation to attract additional phagocytes, and activation of the cell-killing membrane attack complex. About 50 proteins and protein fragments make up the complement system, including serum proteins, and cell membrane receptors. They account for about 10% of the globulin fraction of blood serum. Three biochemical pathways activate the complement system: the classical complement pathway, the alternative complement pathway, and the lectin pathway. The alternative pathway accounts for the majority of terminal pathway activation and so therapeutic efforts in disease have revolved around its inhibition. History In 1888, George Nuttall found that sheep blood serum had mild killing activity against the bacterium that causes anthrax. The killing activity disappeared when he heated the blood. In 1891, Hans Ernst August Buchner, noting the same property of blood in his experiments, named the killing property "alexin", which means "to ward off" in Greek.
By 1894, several laboratories had demonstrated Document 1::: Complement control protein are proteins that interact with components of the complement system. The complement system is tightly regulated by a network of proteins known as "regulators of complement activation (RCA)" that help distinguish target cells as "self" or "non-self." A subset of this family of proteins, complement control proteins (CCP), are characterized by domains of conserved repeats that direct interaction with components of the complement system. These "Sushi" domains have been used to identify other putative members of the CCP family. There are many other RCA proteins that do not fall into this family. Most CCPs prevent activation of the complement system on the surface of host cells and protect host tissues against damage caused by autoimmunity. Because of this, these proteins play important roles in autoimmune disorders and cancers. Members Most of the well-studied proteins within this family can be categorized in two classes: Membrane-bound complement regulators Membrane Cofactor Protein, MCP (CD46) Decay Accelerating Factor, DAF (CD55) Protectin (CD59) Complement C3b/C4b Receptor 1, CR1 (CD35) Complement Regulator of the Immunoglobulin Superfamily, CRIg Soluble complement regulators Factor H C4-Binding Protein (C4bp) Other proteins with characteristic CCP domains have been identified including members of the sushi domain containing (SUSD) protein family and Human CUB and sushi multiple domains family (CSMD). Mechanisms of protection Every cell in the human body is protected by one or more of the membrane-associated RCA proteins, CR1, DAF or MCP. Factor H and C4BP circulate in the plasma and are recruited to self-surfaces through binding to host-specific polysaccharides such as the glycosaminoglycans. Most CCPs function by preventing convertase activity. 
Convertases, specifically the C3 convertases C3b.Bb and C4b.2a, are the enzymes that drive complement activation by activating C3b, a central component of the complement syst Document 2::: Anaphylatoxins, or complement peptides, are fragments (C3a, C4a and C5a) that are produced as part of the activation of the complement system. Complement components C3, C4 and C5 are large glycoproteins that have important functions in the immune response and host defense. They have a wide variety of biological activities and are proteolytically activated by cleavage at a specific site, forming a- and b-fragments. A-fragments form distinct structural domains of approximately 76 amino acids, coded for by a single exon within the complement protein gene. The C3a, C4a and C5a components are referred to as anaphylatoxins: they cause smooth muscle contraction, vasodilation, histamine release from mast cells, and enhanced vascular permeability. They also mediate chemotaxis, inflammation, and generation of cytotoxic oxygen radicals. The proteins are highly hydrophilic, with a mainly alpha-helical structure held together by 3 disulfide bridges. Function Anaphylatoxins are able to trigger degranulation (release of substances) of endothelial cells, mast cells or phagocytes, which produce a local inflammatory response. If the degranulation is widespread, it can cause a shock-like syndrome similar to that of an allergic reaction. Anaphylatoxins indirectly mediate: smooth muscle cells contraction, for example bronchospasms increase in the permeability of blood capillaries C5a indirectly mediates chemotaxis—receptor-mediated movement of leukocytes in the direction of the increasing concentration of anaphylatoxins Examples Important anaphylatoxins: C5a has the highest specific biological activity and is able to act directly on neutrophils and monocytes to speed up the phagocytosis of pathogens. 
C3a works with C5a to activate mast cells, recruit antibody, complement and phagocytic cells and increase fluid in the tissue, all of which contribute to the initiation of the adaptive immune response. C4a is the least active anaphylatoxin. Terminology Although some drugs (mo Document 3::: iC3b is a protein fragment that is part of the complement system, a component of the vertebrate immune system. iC3b is produced when complement factor I cleaves C3b. Complement receptors on white blood cells are able to bind iC3b, so iC3b functions as an opsonin. Unlike intact C3b, iC3b cannot associate with factor B, thus preventing amplification of the complement cascade through the alternative pathway. Complement factor I can further cleave iC3b into a protein fragment known as C3d. Document 4::: The complement fixation test is an immunological medical test that can be used to detect the presence of either specific antibody or specific antigen in a patient's serum, based on whether complement fixation occurs. It was widely used to diagnose infections, particularly with microbes that are not easily detected by culture methods, and in rheumatic diseases. However, in clinical diagnostics labs it has been largely superseded by other serological methods such as ELISA and by DNA-based methods of pathogen detection, particularly PCR. Process The complement system is a system of serum proteins that react with antigen-antibody complexes. If this reaction occurs on a cell surface, it will result in the formation of trans-membrane pores and therefore destruction of the cell. The basic steps of a complement fixation test are as follows: Serum is separated from the patient. Patients naturally have different levels of complement proteins in their serum. To negate any effects this might have on the test, the complement proteins in the patient's serum must be destroyed and replaced by a known amount of standardized complement proteins. 
The serum is heated in such a way that all of the complement proteins—but none of the antibodies—within it are destroyed. (This is possible because complement proteins are much more susceptible to destruction by heat than antibodies.) A known amount of standard complement proteins are added to the serum. (These proteins are frequently obtained from guinea pig serum.) The antigen of interest is added to the serum. Sheep red blood cells () which have been pre-bound to anti- antibodies are added to the serum. The test is considered negative if the solution turns pink at this point and positive otherwise. If the patient's serum contains antibodies against the antigen of interest, they will bind to the antigen in step 3 to form antigen-antibody complexes. The complement proteins will react with these complexes and be depleted. Thus The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The complement system is a series of proteins constitutively found in the what? A. organs B. platelets C. nucleus D. blood plasma Answer:
sciq-4936
multiple_choice
Plants can lose their leaves, flower, or break dormancy in response to a change in what?
[ "seasons", "events", "periods", "averages" ]
A
Relevant Documents: Document 0::: In plant biology, plant memory describes the ability of a plant to retain information from experienced stimuli and respond at a later time. For example, some plants have been observed to raise their leaves synchronously with the rising of the sun. Other plants produce new leaves in the spring after overwintering. Many experiments have been conducted into a plant's capacity for memory, including sensory, short-term, and long-term. The most basic learning and memory functions in animals have been observed in some plant species, and it has been proposed that the development of these basic memory mechanisms may have developed in an early organismal ancestor. Some plant species appear to have developed conserved ways to use functioning memory, and some species may have developed unique ways to use memory function depending on their environment and life history. The use of the term plant memory still sparks controversy. Some researchers believe the function of memory only applies to organisms with a brain and others believe that comparing plant functions resembling memory to humans and other higher division organisms may be too direct of a comparison. Others argue that the function of the two are essentially the same and this comparison can serve as the basis for further understanding into how memory in plants works. History Experiments involving the curling of pea tendrils were some of the first to explore the concept of plant memory. Mark Jaffe recognized that pea plants coil around objects that act as support to help them grow. Jaffe's experiments included testing different stimuli to induce coiling behavior. One such stimulus was the effect of light on the coiling mechanism. When Jaffe rubbed the tendrils in light, he witnessed the expected coiling response. When subjected to perturbation in darkness, the pea plants did not exhibit coiling behavior.
Tendrils from the dark experiment were brought back into light hours later, exhibiting a coiling response without a Document 1::: Plant ecology is a subdiscipline of ecology that studies the distribution and abundance of plants, the effects of environmental factors upon the abundance of plants, and the interactions among plants and between plants and other organisms. Examples of these are the distribution of temperate deciduous forests in North America, the effects of drought or flooding upon plant survival, and competition among desert plants for water, or effects of herds of grazing animals upon the composition of grasslands. A global overview of the Earth's major vegetation types is provided by O.W. Archibold. He recognizes 11 major vegetation types: tropical forests, tropical savannas, arid regions (deserts), Mediterranean ecosystems, temperate forest ecosystems, temperate grasslands, coniferous forests, tundra (both polar and high mountain), terrestrial wetlands, freshwater ecosystems and coastal/marine systems. This breadth of topics shows the complexity of plant ecology, since it includes plants from floating single-celled algae up to large canopy forming trees. One feature that defines plants is photosynthesis. Photosynthesis is the process of a chemical reactions to create glucose and oxygen, which is vital for plant life. One of the most important aspects of plant ecology is the role plants have played in creating the oxygenated atmosphere of earth, an event that occurred some 2 billion years ago. It can be dated by the deposition of banded iron formations, distinctive sedimentary rocks with large amounts of iron oxide. At the same time, plants began removing carbon dioxide from the atmosphere, thereby initiating the process of controlling Earth's climate. 
A long term trend of the Earth has been toward increasing oxygen and decreasing carbon dioxide, and many other events in the Earth's history, like the first movement of life onto land, are likely tied to this sequence of events. One of the early classic books on plant ecology was written by J.E. Weaver and F.E. Clements. It Document 2::: Vernalization () is the induction of a plant's flowering process by exposure to the prolonged cold of winter, or by an artificial equivalent. After vernalization, plants have acquired the ability to flower, but they may require additional seasonal cues or weeks of growth before they will actually do so. The term is sometimes used to refer to the need of herbal (non-woody) plants for a period of cold dormancy in order to produce new shoots and leaves, but this usage is discouraged. Many plants grown in temperate climates require vernalization and must experience a period of low winter temperature to initiate or accelerate the flowering process. This ensures that reproductive development and seed production occurs in spring and winters, rather than in autumn. The needed cold is often expressed in chill hours. Typical vernalization temperatures are between 1 and 7 degrees Celsius (34 and 45 degrees Fahrenheit). For many perennial plants, such as fruit tree species, a period of cold is needed first to induce dormancy and then later, after the requisite period of time, re-emerge from that dormancy prior to flowering. Many monocarpic winter annuals and biennials, including some ecotypes of Arabidopsis thaliana and winter cereals such as wheat, must go through a prolonged period of cold before flowering occurs. History of vernalization research In the history of agriculture, farmers observed a traditional distinction between "winter cereals", whose seeds require chilling (to trigger their subsequent emergence and growth), and "spring cereals", whose seeds can be sown in spring, and germinate, and then flower soon thereafter. 
Scientists in the early 19th century had discussed how some plants needed cold temperatures to flower. In 1857 an American agriculturist John Hancock Klippart, Secretary of the Ohio Board of Agriculture, reported the importance and effect of winter temperature on the germination of wheat. One of the most significant works was by a German plant physi Document 3::: Plants depend on epigenetic processes for proper function. Epigenetics is defined as "the study of changes in gene function that are mitotically and/or meiotically heritable and that do not entail a change in DNA sequence" (Wu et al. 2001). The area of study examines protein interactions with DNA and its associated components, including histones and various other modifications such as methylation, which alter the rate or target of transcription. Epi-alleles and epi-mutants, much like their genetic counterparts, describe changes in phenotypes due to epigenetic mechanisms. Epigenetics in plants has attracted scientific enthusiasm because of its importance in agriculture. Background and history In the past, macroscopic observations on plants led to basic understandings of how plants respond to their environments and grow. While these investigations could somewhat correlate cause and effect as a plant develops, they could not truly explain the mechanisms at work without inspection at the molecular level. Certain studies provided simplistic models with the groundwork for further exploration and eventual explanation through epigenetics. In 1918, Gassner published findings that noted the necessity of a cold phase in order for proper plant growth. Meanwhile, Garner and Allard examined the importance of the duration of light exposure to plant growth in 1920. Gassner's work would shape the conceptualization of vernalization which involves epigenetic changes in plants after a period of cold that leads to development of flowering (Heo and Sung et al. 2011). 
In a similar manner, Garner and Allard's efforts would gather an awareness of photoperiodism which involves epigenetic modifications following the duration of nighttime which enable flowering (Sun et al. 2014). Rudimentary comprehensions set precedent for later molecular evaluation and, eventually, a more complete view of how plants operate. Modern epigenetic work depends heavily on bioinformatics to gather large quant Document 4::: In botany, available space theory (also known as first available space theory) is a theory used to explain why most plants have an alternating leaf pattern on their stems. The theory states that the location of a new leaf on a stem is determined by the physical space between existing leaves. In other words, the location of a new leaf on a growing stem is directly related to the amount of space between the previous two leaves. Building on ideas first put forth by Hoffmeister in 1868, Snow and Snow hypothesized in 1947 that leaves sprouted in the first available space on the stem. See also Repulsion theory Phyllotaxis The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Plants can lose their leaves, flower, or break dormancy in response to a change in what? A. seasons B. events C. periods D. averages Answer:
sciq-8885
multiple_choice
What are important coenzymes or precursors of coenzymes, and are required for enzymes to function properly?
[ "Drugs", "supplements", "vitamins", "carbohydrates" ]
C
Relevant Documents: Document 0::: Coenzyme A (CoA, SHCoA, CoASH) is a coenzyme, notable for its role in the synthesis and oxidation of fatty acids, and the oxidation of pyruvate in the citric acid cycle. All genomes sequenced to date encode enzymes that use coenzyme A as a substrate, and around 4% of cellular enzymes use it (or a thioester) as a substrate. In humans, CoA biosynthesis requires cysteine, pantothenate (vitamin B5), and adenosine triphosphate (ATP). In its acetyl form, coenzyme A is a highly versatile molecule, serving metabolic functions in both the anabolic and catabolic pathways. Acetyl-CoA is utilised in the post-translational regulation and allosteric regulation of pyruvate dehydrogenase and carboxylase to maintain and support the partition of pyruvate synthesis and degradation. Discovery of structure Coenzyme A was identified by Fritz Lipmann in 1946, who also later gave it its name. Its structure was determined during the early 1950s at the Lister Institute, London, together by Lipmann and other workers at Harvard Medical School and Massachusetts General Hospital. Lipmann initially intended to study acetyl transfer in animals, and from these experiments he noticed a unique factor that was not present in enzyme extracts but was evident in all organs of the animals. He was able to isolate and purify the factor from pig liver and discovered that its function was related to a coenzyme that was active in choline acetylation. Work with Beverly Guirard, Nathan Kaplan, and others determined that pantothenic acid was a central component of coenzyme A. The coenzyme was named coenzyme A to stand for "activation of acetate". In 1953, Fritz Lipmann won the Nobel Prize in Physiology or Medicine "for his discovery of co-enzyme A and its importance for intermediary metabolism". Biosynthesis Coenzyme A is naturally synthesized from pantothenate (vitamin B5), which is found in food such as meat, vegetables, cereal grains, legumes, eggs, and milk.
In humans and most living organisms, pantothena Document 1::: This is a list of articles that describe particular biomolecules or types of biomolecules. A For substances with an A- or α- prefix such as α-amylase, please see the parent page (in this case Amylase). A23187 (Calcimycin, Calcium Ionophore) Abamectine Abietic acid Acetic acid Acetylcholine Actin Actinomycin D Adenine Adenosmeme Adenosine diphosphate (ADP) Adenosine monophosphate (AMP) Adenosine triphosphate (ATP) Adenylate cyclase Adiponectin Adonitol Adrenaline, epinephrine Adrenocorticotropic hormone (ACTH) Aequorin Aflatoxin Agar Alamethicin Alanine Albumins Aldosterone Aleurone Alpha-amanitin Alpha-MSH (Melaninocyte stimulating hormone) Allantoin Allethrin α-Amanatin, see Alpha-amanitin Amino acid Amylase (also see α-amylase) Anabolic steroid Anandamide (ANA) Androgen Anethole Angiotensinogen Anisomycin Antidiuretic hormone (ADH) Anti-Müllerian hormone (AMH) Arabinose Arginine Argonaute Ascomycin Ascorbic acid (vitamin C) Asparagine Aspartic acid Asymmetric dimethylarginine ATP synthase Atrial-natriuretic peptide (ANP) Auxin Avidin Azadirachtin A – C35H44O16 B Bacteriocin Beauvericin beta-Hydroxy beta-methylbutyric acid beta-Hydroxybutyric acid Bicuculline Bilirubin Biopolymer Biotin (Vitamin H) Brefeldin A Brassinolide Brucine Butyric acid C Document 2::: The metabolome refers to the complete set of small-molecule chemicals found within a biological sample. The biological sample can be a cell, a cellular organelle, an organ, a tissue, a tissue extract, a biofluid or an entire organism. The small molecule chemicals found in a given metabolome may include both endogenous metabolites that are naturally produced by an organism (such as amino acids, organic acids, nucleic acids, fatty acids, amines, sugars, vitamins, co-factors, pigments, antibiotics, etc.) 
as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism. In other words, there is both an endogenous metabolome and an exogenous metabolome. The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics. Origins The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the m Document 3::: Acetyl-CoA (acetyl coenzyme A) is a molecule that participates in many biochemical reactions in protein, carbohydrate and lipid metabolism. Its main function is to deliver the acetyl group to the citric acid cycle (Krebs cycle) to be oxidized for energy production. Coenzyme A (CoASH or CoA) consists of a β-mercaptoethylamine group linked to the vitamin pantothenic acid (B5) through an amide linkage and 3'-phosphorylated ADP. 
The acetyl group (indicated in blue in the structural diagram on the right) of acetyl-CoA is linked to the sulfhydryl substituent of the β-mercaptoethylamine group. This thioester linkage is a "high energy" bond, which is particularly reactive. Hydrolysis of the thioester bond is exergonic (−31.5 kJ/mol). CoA is acetylated to acetyl-CoA by the breakdown of carbohydrates through glycolysis and by the breakdown of fatty acids through β-oxidation. Acetyl-CoA then enters the citric acid cycle, where the acetyl group is oxidized to carbon dioxide and water, and the energy released is captured in the form of 11 ATP and one GTP per acetyl group. GTP is the equivalent of ATP and they can be interconverted by Nucleoside-diphosphate kinase. Konrad Bloch and Feodor Lynen were awarded the 1964 Nobel Prize in Physiology and Medicine for their discoveries linking acetyl-CoA and fatty acid metabolism. Fritz Lipmann won the Nobel Prize in 1953 for his discovery of the cofactor coenzyme A. Direct synthesis The acetylation of CoA is determined by the carbon sources. Document 4::: Metabolic intermediates are molecules that are the precursors or metabolites of biologically significant molecules. Although these intermediates are of relatively minor direct importance to cellular function, they can play important roles in the allosteric regulation of enzymes. Clinical significance Some can be useful in measuring rates of metabolic processes (for example, 3,4-dihydroxyphenylacetic acid or 3-aminoisobutyrate). Because they can represent unnatural points of entry into natural metabolic pathways, some (such as AICA ribonucleotide) are of interest to researchers in developing new therapies. See also Metabolism Metabolism The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are important coenzymes or precursors of coenzymes, and are required for enzymes to function properly? A. Drugs B. supplements C. vitamins D. 
carbohydrates Answer:
sciq-615
multiple_choice
What causes polarization in a neutral object?
[ "separation of charges", "signaling of charges", "combining of charges", "meaning of charges" ]
A
Relevant Documents: Document 0::: Polarizability usually refers to the tendency of matter, when subjected to an electric field, to acquire an electric dipole moment in proportion to that applied field. It is a property of all matter, considering that matter is made up of elementary particles which have an electric charge, namely protons and electrons. When subject to an electric field, the negatively charged electrons and positively charged atomic nuclei are subject to opposite forces and undergo charge separation. Polarizability is responsible for a material's dielectric constant and, at high (optical) frequencies, its refractive index. The polarizability of an atom or molecule is defined as the ratio of its induced dipole moment to the local electric field; in a crystalline solid, one considers the dipole moment per unit cell. Note that the local electric field seen by a molecule is generally different from the macroscopic electric field that would be measured externally. This discrepancy is taken into account by the Clausius–Mossotti relation (below) which connects the bulk behaviour (polarization density due to an external electric field according to the electric susceptibility) with the molecular polarizability due to the local field. Magnetic polarizability likewise refers to the tendency for a magnetic dipole moment to appear in proportion to an external magnetic field. Electric and magnetic polarizabilities determine the dynamical response of a bound system (such as a molecule or crystal) to external fields, and provide insight into a molecule's internal structure. "Polarizability" should not be confused with the intrinsic magnetic or electric dipole moment of an atom, molecule, or bulk substance; these do not depend on the presence of an external field.
Electric polarizability Definition Electric polarizability is the relative tendency of a charge distribution, like the electron cloud of an atom or molecule, to be distorted from its normal shape by an external electric field. The p Document 1::: A Polaroid synthetic plastic sheet is a brand name product trademarked and produced by the Polaroid Corporation used as a polarizer or polarizing filter. The term “Polaroid” entered the common vocabulary with the early 1960s introduction of patented film and cameras manufactured by the corporation that produced “instant photos”. Patent The original material, patented in 1929 and further developed in 1932 by Edwin H. Land, consists of many microscopic crystals of iodoquinine sulphate (herapathite) embedded in a transparent nitrocellulose polymer film. The needle-like crystals are aligned during the manufacture of the film by stretching or by applying electric or magnetic fields. With the crystals aligned, the sheet is dichroic: it tends to absorb light which is polarized parallel to the direction of crystal alignment but to transmit light which is polarized perpendicular to it. The resultant electric field of an electromagnetic wave (such as light) determines its polarization. If the wave interacts with a line of crystals as in a sheet of polaroid, any varying electric field in the direction parallel to the line of the crystals will cause a current to flow along this line. The electrons moving in this current will collide with other particles and re-emit the light backwards and forwards. This will cancel the incident wave causing little or no transmission through the sheet. The component of the electric field perpendicular to the line of crystals, however, can cause only small movements in the electrons as they cannot move very much from side to side. 
This means there will be little change in the perpendicular component of the field leading to transmission of the part of the light wave polarized perpendicular to the crystals only, hence allowing the material to be used as a light polarizer. This material, known as J-sheet, was later replaced by the improved H-sheet Polaroid, invented in 1938 by Land. H-sheet is a polyvinyl alcohol (PVA) polymer impregnated with i Document 2::: Polarization is an important phenomenon in astronomy. Stars The polarization of starlight was first observed by the astronomers William Hiltner and John S. Hall in 1949. Subsequently, Jesse Greenstein and Leverett Davis, Jr. developed theories allowing the use of polarization data to trace interstellar magnetic fields. Though the integrated thermal radiation of stars is not usually appreciably polarized at source, scattering by interstellar dust can impose polarization on starlight over long distances. Net polarization at the source can occur if the photosphere itself is asymmetric, due to limb polarization. Plane polarization of starlight generated at the star itself is observed for Ap stars (peculiar A type stars). Sun Both circular and linear polarization of sunlight has been measured. Circular polarization is mainly due to transmission and absorption effects in strongly magnetic regions of the Sun's surface. Another mechanism that gives rise to circular polarization is the so-called "alignment-to-orientation mechanism". Continuum light is linearly polarized at different locations across the face of the Sun (limb polarization) though taken as a whole, this polarization cancels. Linear polarization in spectral lines is usually created by anisotropic scattering of photons on atoms and ions which can themselves be polarized by this interaction. The linearly polarized spectrum of the Sun is often called the second solar spectrum. Atomic polarization can be modified in weak magnetic fields by the Hanle effect. 
As a result, polarization of the scattered photons is also modified providing a diagnostics tool for understanding stellar magnetic fields. Other sources Polarization is also present in radiation from coherent astronomical sources due to the Zeeman effect (e.g. hydroxyl or methanol masers). The large radio lobes in active galaxies and pulsar radio radiation (which may, it is speculated, sometimes be coherent) also show polarization. Apart from providing in Document 3::: The Stokes parameters are a set of values that describe the polarization state of electromagnetic radiation. They were defined by George Gabriel Stokes in 1852,<ref>S. Chandrasekhar 'Radiative Transfer, Dover Publications, New York, 1960, , page 25</ref> as a mathematically convenient alternative to the more common description of incoherent or partially polarized radiation in terms of its total intensity (I), (fractional) degree of polarization (p), and the shape parameters of the polarization ellipse. The effect of an optical system on the polarization of light can be determined by constructing the Stokes vector for the input light and applying Mueller calculus, to obtain the Stokes vector of the light leaving the system. The original Stokes paper was discovered independently by Francis Perrin in 1942 and by Subrahamanyan Chandrasekhar in 1947,Chandrasekhar, S. (1947). The transfer of radiation in stellar atmospheres. Bulletin of the American Mathematical Society, 53(7), 641-711. who named it as the Stokes parameters. Definitions The relationship of the Stokes parameters S0, S1, S2, S3 to intensity and polarization ellipse parameters is shown in the equations below and the figure on the right. Here , and are the spherical coordinates of the three-dimensional vector of cartesian coordinates . is the total intensity of the beam, and is the degree of polarization, constrained by . 
The factor of two before represents the fact that any polarization ellipse is indistinguishable from one rotated by 180°, while the factor of two before indicates that an ellipse is indistinguishable from one with the semi-axis lengths swapped accompanied by a 90° rotation. The phase information of the polarized light is not recorded in the Stokes parameters. The four Stokes parameters are sometimes denoted I, Q, U and V, respectively. Given the Stokes parameters, one can solve for the spherical coordinates with the following equations: Stokes vectors The Stokes parameters are oft Document 4::: A field effect is the polarization of a molecule through space. The effect is a result of an electric field produced by charge localization in a molecule. This field, which is substituent and conformation dependent, can influence structure and reactivity by manipulating the location of electron density in bonds and/or the overall molecule. The polarization of a molecule through its bonds is a separate phenomenon known as induction. Field effects are relatively weak, and diminish rapidly with distance, but have still been found to alter molecular properties such as acidity. Field sources Field effects can arise from the electric dipole field of a bond containing an electronegative atom or electron-withdrawing substituent, as well as from an atom or substituent bearing a formal charge. The directionality of a dipole, and concentration of charge, can both define the shape of a molecule's electric field which will manipulate the localization of electron density toward or away from sites of interest, such as an acidic hydrogen. Field effects are typically associated with the alignment of a dipole field with respect to a reaction center. Since these are through space effects, the 3D structure of a molecule is an important consideration. A field may be interrupted by other bonds or atoms before propagating to a reactive site of interest. 
Atoms of differing electronegativities can move closer together resulting in bond polarization through space that mimics the inductive effect through bonds. Bicycloheptane and bicyclooctane (seen left) are two compounds in which the change in acidity with substitution was attributed to the field effect. The C-X dipole is oriented away from the carboxylic acid group, and can draw electron density away because the molecule center is empty, with a low dielectric constant, so the electric field is able to propagate with minimal resistance. Utility of effect A dipole can align to stabilize or destabilize the formation or loss of a charge, The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What causes polarization in a neutral object? A. separation of charges B. signaling of charges C. combining of charges D. meaning of charges Answer:
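The polarizability passage above (Document 0) defines the induced dipole moment of an atom or molecule as its polarizability times the local field, p = αE, which is the physics behind "separation of charges" in a neutral object. A minimal sketch of that linear relation; the hydrogen polarizability below is an assumed illustrative figure in SI units, not taken from the passage:

```python
# Linear-response relation from Document 0: an applied field induces a
# dipole moment p = alpha * E in a polarizable (but neutral) atom.

def induced_dipole_moment(alpha_si, e_field):
    """Induced dipole moment (C·m) for polarizability alpha (C·m²/V)
    in a uniform local field E (V/m), valid in the linear regime."""
    return alpha_si * e_field

alpha_h = 7.4e-41                          # ~atomic hydrogen, C·m²/V (assumed value)
p = induced_dipole_moment(alpha_h, 1e6)    # 1 MV/m applied field
print(f"{p:.2e}")                          # a tiny moment, on the order of 1e-34 C·m
```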
sciq-3011
multiple_choice
How much electricity is generated by an average car battery?
[ "ten volts", "six volts", "eight volts", "twelve volts" ]
D
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. 
Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with Document 2::: Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women. The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development. Current status of girls and women in STEM education Overall trends in STEM education Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle. 
Learning achievement in STEM education Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and Document 3::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. 
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 4::: Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered in both undergraduate as well postgraduate with specializations. Academic courses Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. 
Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism. Example universities with CSE majors and departments APJ Abdul Kalam Technological University American International University-B The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How much electricity is generated by an average car battery? A. ten volts B. six volts C. eight volts D. twelve volts Answer:
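The answer "twelve volts" follows from how a standard car battery is built: six lead-acid cells wired in series, each contributing a nominal ~2.1 V. The cell count and per-cell voltage are well-known nominal figures, not taken from the passages above; the arithmetic can be sketched as:

```python
# Nominal voltage of a series string of identical cells, applied to the
# conventional six-cell lead-acid car battery.

def series_battery_voltage(cells: int, volts_per_cell: float) -> float:
    """Total voltage of identical cells connected in series."""
    return cells * volts_per_cell

nominal = series_battery_voltage(6, 2.1)
print(round(nominal, 1))  # 12.6 -- marketed (and answered here) as "twelve volts"
```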
sciq-5992
multiple_choice
All elements are most stable when their outermost shell is filled with electrons according to which rule?
[ "quartet rule", "octet rule", "string rule", "coupling rule" ]
B
Relevant Documents: Document 0::: In chemistry and physics, the iron group refers to elements that are in some way related to iron; mostly in period (row) 4 of the periodic table. The term has different meanings in different contexts. In chemistry, the term is largely obsolete, but it often means iron, cobalt, and nickel, also called the iron triad; or, sometimes, other elements that resemble iron in some chemical aspects. In astrophysics and nuclear physics, the term is still quite common, and it typically means those three plus chromium and manganese—five elements that are exceptionally abundant, both on Earth and elsewhere in the universe, compared to their neighbors in the periodic table. Titanium and vanadium are also produced in Type Ia supernovae. General chemistry In chemistry, "iron group" used to refer to iron and the next two elements in the periodic table, namely cobalt and nickel. These three comprised the "iron triad". They are the top elements of groups 8, 9, and 10 of the periodic table; or the top row of "group VIII" in the old (pre-1990) IUPAC system, or of "group VIIIB" in the CAS system. These three metals (and the three of the platinum group, immediately below them) were set aside from the other elements because they have obvious similarities in their chemistry, but are not obviously related to any of the other groups. The iron group and its alloys exhibit ferromagnetism. The similarities in chemistry were noted as one of Döbereiner's triads and by Adolph Strecker in 1859. Indeed, Newlands' "octaves" (1865) were harshly criticized for separating iron from cobalt and nickel. Mendeleev stressed that groups of "chemically analogous elements" could have similar atomic weights as well as atomic weights which increase by equal increments, both in his original 1869 paper and his 1889 Faraday Lecture.
Analytical chemistry In the traditional methods of qualitative inorganic analysis, the iron group consists of those cations which have soluble chlorides; and are not precipitated Document 1::: An extended periodic table theorises about chemical elements beyond those currently known in the periodic table and proven. The element with the highest atomic number known is oganesson (Z = 118), which completes the seventh period (row) in the periodic table. All elements in the eighth period and beyond thus remain purely hypothetical. Elements beyond 118 will be placed in additional periods when discovered, laid out (as with the existing periods) to illustrate periodically recurring trends in the properties of the elements concerned. Any additional periods are expected to contain a larger number of elements than the seventh period, as they are calculated to have an additional so-called g-block, containing at least 18 elements with partially filled g-orbitals in each period. An eight-period table containing this block was suggested by Glenn T. Seaborg in 1969. The first element of the g-block may have atomic number 121, and thus would have the systematic name unbiunium. Despite many searches, no elements in this region have been synthesized or discovered in nature. According to the orbital approximation in quantum mechanical descriptions of atomic structure, the g-block would correspond to elements with partially filled g-orbitals, but spin–orbit coupling effects reduce the validity of the orbital approximation substantially for elements of high atomic number. Seaborg's version of the extended period had the heavier elements following the pattern set by lighter elements, as it did not take into account relativistic effects. Models that take relativistic effects into account predict that the pattern will be broken. 
Pekka Pyykkö and Burkhard Fricke used computer modeling to calculate the positions of elements up to Z = 172, and found that several were displaced from the Madelung rule. As a result of uncertainty and variability in predictions of chemical and physical properties of elements beyond 120, there is currently no consensus on their placement in the extende Document 2::: In quantum chemistry, Slater's rules provide numerical values for the effective nuclear charge in a many-electron atom. Each electron is said to experience less than the actual nuclear charge, because of shielding or screening by the other electrons. For each electron in an atom, Slater's rules provide a value for the screening constant, denoted by s, S, or σ, which relates the effective and actual nuclear charges as The rules were devised semi-empirically by John C. Slater and published in 1930. Revised values of screening constants based on computations of atomic structure by the Hartree–Fock method were obtained by Enrico Clementi et al. in the 1960s. Rules Firstly, the electrons are arranged into a sequence of groups in order of increasing principal quantum number n, and for equal n in order of increasing azimuthal quantum number l, except that s- and p- orbitals are kept together. [1s] [2s,2p] [3s,3p] [3d] [4s,4p] [4d] [4f] [5s, 5p] [5d] etc. Each group is given a different shielding constant which depends upon the number and types of electrons in those groups preceding it. The shielding constant for each group is formed as the sum of the following contributions: An amount of 0.35 from each other electron within the same group except for the [1s] group, where the other electron contributes only 0.30. If the group is of the [ns, np] type, an amount of 0.85 from each electron with principal quantum number (n–1), and an amount of 1.00 for each electron with principal quantum number (n–2) or less. 
If the group is of the [d] or [f], type, an amount of 1.00 for each electron "closer" to the nucleus than the group. This includes both i) electrons with a smaller principal quantum number than n and ii) electrons with principal quantum number n and a smaller azimuthal quantum number l. In tabular form, the rules are summarized as: Example An example provided in Slater's original paper is for the iron atom which has nuclear charge 26 and electronic configuration Document 3::: This page shows the electron configurations of the neutral gaseous atoms in their ground states. For each atom the subshells are given first in concise form, then with all subshells written out, followed by the number of electrons per shell. Electron configurations of elements beyond hassium (element 108) have never been measured; predictions are used below. As an approximate rule, electron configurations are given by the Aufbau principle and the Madelung rule. However there are numerous exceptions; for example the lightest exception is chromium, which would be predicted to have the configuration , written as , but whose actual configuration given in the table below is . Note that these electron configurations are given for neutral atoms in the gas phase, which are not the same as the electron configurations for the same atoms in chemical environments. In many cases, multiple configurations are within a small range of energies and the irregularities shown below do not necessarily have a clear relation to chemical behaviour. For the undiscovered eighth-row elements, mixing of configurations is expected to be very important, and sometimes the result can no longer be well-described by a single configuration. See also Extended periodic table#Electron configurations – Predictions for undiscovered elements 119–173 and 184 Document 4::: The periodic table is an arrangement of the chemical elements, structured by their atomic number, electron configuration and recurring chemical properties. 
In the basic form, elements are presented in order of increasing atomic number, in the reading sequence. Then, rows and columns are created by starting new rows and inserting blank cells, so that rows (periods) and columns (groups) show elements with recurring properties (called periodicity). For example, all elements in group (column) 18 are noble gases that are largely—though not completely—unreactive. The history of the periodic table reflects over two centuries of growth in the understanding of the chemical and physical properties of the elements, with major contributions made by Antoine-Laurent de Lavoisier, Johann Wolfgang Döbereiner, John Newlands, Julius Lothar Meyer, Dmitri Mendeleev, Glenn T. Seaborg, and others. Early history Nine chemical elements – carbon, sulfur, iron, copper, silver, tin, gold, mercury, and lead, have been known since before antiquity, as they are found in their native form and are relatively simple to mine with primitive tools. Around 330 BCE, the Greek philosopher Aristotle proposed that everything is made up of a mixture of one or more roots, an idea originally suggested by the Sicilian philosopher Empedocles. The four roots, which the Athenian philosopher Plato called elements, were earth, water, air and fire. Similar ideas about these four elements existed in other ancient traditions, such as Indian philosophy. A few extra elements were known in the age of alchemy: zinc, arsenic, antimony, and bismuth. Platinum was also known to pre-Columbian South Americans, but knowledge of it did not reach Europe until the 16th century. First categorizations The history of the periodic table is also a history of the discovery of the chemical elements. The first person in recorded history to discover a new element was Hennig Brand, a bankrupt German merchant. Brand tried to discover The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. 
All elements are most stable when their outermost shell is filled with electrons according to which rule? A. quartet rule B. octet rule C. string rule D. coupling rule Answer:
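Document 3 above describes the Aufbau/Madelung filling order that produces filled outer shells such as neon's octet. A rough sketch of that rule as code: subshells fill in order of increasing n + l, with ties broken by lower n. The passage itself notes this is only an approximate rule (chromium, for example, deviates from it), so the configurations below are idealized:

```python
# Idealized ground-state electron configurations via the Madelung rule.

def madelung_order(max_n=4):
    """Subshells (n, l) sorted by increasing n + l, then by n."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def configuration(z, max_n=4):
    """Idealized configuration string for atomic number z (z <= 60 here)."""
    labels = "spdf"
    parts, remaining = [], z
    for n, l in madelung_order(max_n):
        if remaining <= 0:
            break
        take = min(4 * l + 2, remaining)   # each subshell holds 2(2l + 1) electrons
        parts.append(f"{n}{labels[l]}{take}")
        remaining -= take
    return " ".join(parts)

print(configuration(10))  # neon: 1s2 2s2 2p6 -- a filled second shell (an octet)
```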
sciq-4448
multiple_choice
What is a relationship between living things that depend on the same resources?
[ "contention", "parasitic", "symbiotic", "competition" ]
D
Relevant Documents: Document 0::: Mutualism describes the ecological interaction between two or more species where each species has a net benefit. Mutualism is a common type of ecological interaction. Prominent examples include most vascular plants engaged in mutualistic interactions with mycorrhizae, flowering plants being pollinated by animals, vascular plants being dispersed by animals, and corals with zooxanthellae, among many others. Mutualism can be contrasted with interspecific competition, in which each species experiences reduced fitness, and exploitation, or parasitism, in which one species benefits at the expense of the other. The term mutualism was introduced by Pierre-Joseph van Beneden in his 1876 book Animal Parasites and Messmates to mean "mutual aid among species". Mutualism is often conflated with two other types of ecological phenomena: cooperation and symbiosis. Cooperation most commonly refers to increases in fitness through within-species (intraspecific) interactions, although it has been used (especially in the past) to refer to mutualistic interactions, and it is sometimes used to refer to mutualistic interactions that are not obligate. Symbiosis involves two species living in close physical contact over a long period of their existence and may be mutualistic, parasitic, or commensal, so symbiotic relationships are not always mutualistic, and mutualistic interactions are not always symbiotic. Despite a different definition between mutualistic interactions and symbiosis, mutualism and symbiosis have been largely used interchangeably in the past, and confusion on their use has persisted. Mutualism plays a key part in ecology and evolution. For example, mutualistic interactions are vital for terrestrial ecosystem function as about 80% of land plant species rely on mycorrhizal relationships with fungi to provide them with inorganic compounds and trace elements.
As another example, the estimate of tropical rainforest plants with seed dispersal mutualisms with animals ranges Document 1::: In ecology, a biological interaction is the effect that a pair of organisms living together in a community have on each other. They can be either of the same species (intraspecific interactions), or of different species (interspecific interactions). These effects may be short-term, or long-term, both often strongly influence the adaptation and evolution of the species involved. Biological interactions range from mutualism, beneficial to both partners, to competition, harmful to both partners. Interactions can be direct when physical contact is established or indirect, through intermediaries such as shared resources, territories, ecological services, metabolic waste, toxins or growth inhibitors. This type of relationship can be shown by net effect based on individual effects on both organisms arising out of relationship. Several recent studies have suggested non-trophic species interactions such as habitat modification and mutualisms can be important determinants of food web structures. However, it remains unclear whether these findings generalize across ecosystems, and whether non-trophic interactions affect food webs randomly, or affect specific trophic levels or functional groups. History Although biological interactions, more or less individually, were studied earlier, Edward Haskell (1949) gave an integrative approach to the thematic, proposing a classification of "co-actions", later adopted by biologists as "interactions". Close and long-term interactions are described as symbiosis; symbioses that are mutually beneficial are called mutualistic. The term symbiosis was subject to a century-long debate about whether it should specifically denote mutualism, as in lichens or in parasites that benefit themselves. 
This debate created two different classifications for biotic interactions, one based on the time (long-term and short-term interactions), and other based on the magnitud of interaction force (competition/mutualism) or effect of individual fitness, accordi Document 2::: Any action or influence that species have on each other is considered a biological interaction. These interactions between species can be considered in several ways. One such way is to depict interactions in the form of a network, which identifies the members and the patterns that connect them. Species interactions are considered primarily in terms of trophic interactions, which depict which species feed on others. Currently, ecological networks that integrate non-trophic interactions are being built. The type of interactions they can contain can be classified into six categories: mutualism, commensalism, neutralism, amensalism, antagonism, and competition. Observing and estimating the fitness costs and benefits of species interactions can be very problematic. The way interactions are interpreted can profoundly affect the ensuing conclusions. Interaction characteristics Characterization of interactions can be made according to various measures, or any combination of them. Prevalence Prevalence identifies the proportion of the population affected by a given interaction, and thus quantifies whether it is relatively rare or common. Generally, only common interactions are considered. Negative/ Positive Whether the interaction is beneficial or harmful to the species involved determines the sign of the interaction, and what type of interaction it is classified as. To establish whether they are harmful or beneficial, careful observational and/or experimental studies can be conducted, in an attempt to establish the cost/benefit balance experienced by the members. Strength The sign of an interaction does not capture the impact on fitness of that interaction. 
One example of this is of antagonism, in which predators may have a much stronger impact on their prey species (death), than parasites (reduction in fitness). Similarly, positive interactions can produce anything from a negligible change in fitness to a life or death impact. Relationship in space and time The rel Document 3::: Microbial population biology is the application of the principles of population biology to microorganisms. Distinguishing from other biological disciplines Microbial population biology, in practice, is the application of population ecology and population genetics toward understanding the ecology and evolution of bacteria, archaebacteria, microscopic fungi (such as yeasts), additional microscopic eukaryotes (e.g., "protozoa" and algae), and viruses. Microbial population biology also encompasses the evolution and ecology of community interactions (community ecology) between microorganisms, including microbial coevolution and predator-prey interactions. In addition, microbial population biology considers microbial interactions with more macroscopic organisms (e.g., host-parasite interactions), though strictly this should be more from the perspective of the microscopic rather than the macroscopic organism. A good deal of microbial population biology may be described also as microbial evolutionary ecology. On the other hand, typically microbial population biologists (unlike microbial ecologists) are less concerned with questions of the role of microorganisms in ecosystem ecology, which is the study of nutrient cycling and energy movement between biotic as well as abiotic components of ecosystems. Microbial population biology can include aspects of molecular evolution or phylogenetics. Strictly, however, these emphases should be employed toward understanding issues of microbial evolution and ecology rather than as a means of understanding more universal truths applicable to both microscopic and macroscopic organisms. 
The microorganisms in such endeavors consequently should be recognized as organisms rather than simply as molecular or evolutionary reductionist model systems. Thus, the study of RNA in vitro evolution is not microbial population biology and nor is the in silico generation of phylogenies of otherwise non-microbial sequences, even if aspects of either may Document 4::: The stress gradient hypothesis (SGH) is an evolutionary theory in microbial ecology and community ecology that provides a framework to predict when positive or negative interactions should be observed in an habitat. The SGH states that facilitation, cooperation or mutualism should be more common in stressful environments, compared with benign environments (i.e nutrient excess) where competition or parasitism should be more common. The stress gradient hypothesis, in which ecological interactions shift in a positive direction with increasing environmental stress, is controversial among ecologists, in part because of contradictory support, yet a 2021 meta analysis study compared SGH across different organisms with intraspecificity and interspecificity interacrions and conclude that the SGH is indeed a broadly relevant ecological phenomena that is currently held back by cross-disciplinary communication barriers. SGH is well supported by studies that feature bacteria, plants, terrestrial ecosystems, interspecific negative interactions, adults, survival instead of growth or reproduction, and drought, fire, and nutrient stress. Drought and nutrient stress, especially when combined, shift ecological interactions positively The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is a relationship between living things that depend on the same resources? A. contention B. parasitic C. symbiotic D. competition Answer:
sciq-9535
multiple_choice
Plaque is surgically removed from the walls of a vessel in which surgery?
[ "arthroscopy", "lumpectomy", "discectomy", "endarterectomy" ]
D
Relevant Documents: Document 0::: Medical education is education related to the practice of being a medical practitioner, including the initial training to become a physician (i.e., medical school and internship) and additional training thereafter (e.g., residency, fellowship, and continuing medical education). Medical education and training varies considerably across the world. Various teaching methodologies have been used in medical education, which is an active area of educational research. Medical education is also the subject-didactic academic field of educating medical doctors at all levels, including entry-level, post-graduate, and continuing medical education. Specific requirements such as entrustable professional activities must be met before moving on in stages of medical education. Common techniques and evidence base Medical education applies theories of pedagogy specifically in the context of medical education. Medical education has been a leader in the field of evidence-based education, through the development of evidence syntheses such as the Best Evidence Medical Education collection, formed in 1999, which aimed to "move from opinion-based education to evidence-based education". Common evidence-based techniques include the Objective structured clinical examination (commonly known as the 'OSCE) to assess clinical skills, and reliable checklist-based assessments to determine the development of soft skills such as professionalism. However, there is a persistence of ineffective instructional methods in medical education, such as the matching of teaching to learning styles and Edgar Dales' "Cone of Learning". Entry-level education Entry-level medical education programs are tertiary-level courses undertaken at a medical school. Depending on jurisdiction and university, these may be either undergraduate-entry (most of Europe, Asia, South America and Oceania), or graduate-entry programs (mainly Australia, Philippines and North America). 
Some jurisdictions and universities provide both u Document 1::: HipNav was the first computer-assisted surgery system developed to guide the surgeon during total hip replacement surgery. It was developed at Carnegie Mellon University. Document 2::: The Medical Artists Association of Great Britain was founded on 2 April 1949 by British medical illustrators Dorothy Davison, Audrey Arnott and Margaret McLarty to act as a professional body for medical artists and to raise the standard of medical art through training, education and examinations. Arnott acted as the association's first Secretary and the first Chairman was D.H. Tompsett, surgeon and later author of Anatomical Techniques, published in 1956. The association started out as four departments in London, Manchester and Edinburgh and it took students or trainee/assistants during the 1940s and 1950s. By 1962 the association had started its own postgraduate programme to train graduate artists. In 1989, forty years after its foundation, the association received the patronage of the Worshipful Company of Barbers, one of the City of London livery companies, and by the same year students were able to register at a medical school within London University to take a university diploma course. A year later in 1990, the Association became a limited company continuing to train artists looking for a career in medical illustration. In the 1996, the Association received the Charlotte Holt Bequest created by medical artist Charlotte Holt for the express purpose of training medical artists.  This led to the establishment of the Medical Artists' Education Trust (MAET), a charitable organisation tasked with managing the Association's specialist Postgraduate Training Programme. Today, the Association is the professional body for Medical Artists in the UK with its members possessing specialist skills in art and a deep, if not professional, understanding of medical procedures specifically, but not exclusively, in the area of surgery. 
Document 3::: An open biopsy is a procedure in which a surgical incision (cut) is made through the skin to expose and remove tissues. The biopsy tissue is examined under a microscope by a pathologist. An open biopsy may be done in the doctor's office or hospital, and may use local anesthesia or general anesthesia. A lumpectomy to remove a breast tumor is a type of open biopsy. Document 4::: Alternative medicine degrees include academic degrees, first professional degrees, qualifications or diplomas issued by accredited and legally recognised academic institutions in alternative medicine or related areas, either human or animal. Examples Examples of alternative medicine degrees include: Ayurveda - BSc, MSc, BAMC, MD(Ayurveda), M.S.(Ayurveda), Ph.D(Ayurveda) Siddha medicine - BSMS, MD(Siddha), Ph.D(Siddha) Acupuncture - BSc, LAc, DAc, AP, DiplAc, MAc Herbalism - Acs, BSc, Msc. Homeopathy - BSc, MSc, DHMs, BHMS, M.D. (HOM), PhD in homoeopathy Naprapathy - DN Naturopathic medicine - BSc, MSc, BNYS, MD (Naturopathy), ND, NMD Oriental Medicine - BSc, MSOM, MSTOM, KMD (Korea), BCM (Hong Kong), MCM (Hong Kong), BChinMed (Hong Kong), MChinMed (Hong Kong), MD (Taiwan), MB (China), TCM-Traditional Chinese medicine master (China) Osteopathy - BOst, BOstMed, BSc (Osteo), DipOsteo The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Plaque is surgically removed from the walls of a vessel in which surgery? A. arthroscopy B. lumpectomy C. discectomy D. endarterectomy Answer:
sciq-9433
multiple_choice
What is the distance that sound waves travel in a given amount of time called?
[ "velocity of sound", "speed of sound", "momentum of sound", "force of sound" ]
B
Relevant Documents: Document 0::: Particle displacement or displacement amplitude is a measurement of distance of the movement of a sound particle from its equilibrium position in a medium as it transmits a sound wave. The SI unit of particle displacement is the metre (m). In most cases this is a longitudinal wave of pressure (such as sound), but it can also be a transverse wave, such as the vibration of a taut string. In the case of a sound wave travelling through air, the particle displacement is evident in the oscillations of air molecules with, and against, the direction in which the sound wave is travelling. A particle of the medium undergoes displacement according to the particle velocity of the sound wave traveling through the medium, while the sound wave itself moves at the speed of sound, equal to in air at . Mathematical definition Particle displacement, denoted δ, is given by where v is the particle velocity. Progressive sine waves The particle displacement of a progressive sine wave is given by where is the amplitude of the particle displacement; is the phase shift of the particle displacement; is the angular wavevector; is the angular frequency. It follows that the particle velocity and the sound pressure along the direction of propagation of the sound wave x are given by where is the amplitude of the particle velocity; is the phase shift of the particle velocity; is the amplitude of the acoustic pressure; is the phase shift of the acoustic pressure. Taking the Laplace transforms of v and p with respect to time yields Since , the amplitude of the specific acoustic impedance is given by Consequently, the amplitude of the particle displacement is related to those of the particle velocity and the sound pressure by See also Sound Sound particle Particle velocity Particle acceleration Document 1::: Acoustic waves are a type of energy propagation through a medium by means of adiabatic loading and unloading. 
Important quantities for describing acoustic waves are acoustic pressure, particle velocity, particle displacement and acoustic intensity. Acoustic waves travel with a characteristic acoustic velocity that depends on the medium they're passing through. Some examples of acoustic waves are audible sound from a speaker (waves traveling through air at the speed of sound), seismic waves (ground vibrations traveling through the earth), or ultrasound used for medical imaging (waves traveling through the body). Wave properties Acoustic wave is a mechanical wave that transmits energy through the movements of atoms and molecules. Acoustic wave transmits through liquids in longitudinal manner (movement of particles are parallel to the direction of propagation of the wave); in contrast to electromagnetic wave that transmits in transverse manner (movement of particles at a right angle to the direction of propagation of the wave). However, in solids, acoustic wave transmits in both longitudinal and transverse manners due to presence of shear moduli in such a state of matter. Acoustic wave equation The acoustic wave equation describes the propagation of sound waves. The acoustic wave equation for sound pressure in one dimension is given by where is sound pressure in Pa is position in the direction of propagation of the wave, in m is speed of sound in m/s is time in s The wave equation for particle velocity has the same shape and is given by where is particle velocity in m/s For lossy media, more intricate models need to be applied in order to take into account frequency-dependent attenuation and phase speed. Such models include acoustic wave equations that incorporate fractional derivative terms, see also the acoustic attenuation article. D'Alembert gave the general solution for the lossless wave equation. 
For sound pressure, a solution would be where is angu Document 2::: In a compressible sound transmission medium - mainly air - air particles get an accelerated motion: the particle acceleration or sound acceleration with the symbol a in metre/second2. In acoustics or physics, acceleration (symbol: a) is defined as the rate of change (or time derivative) of velocity. It is thus a vector quantity with dimension length/time2. In SI units, this is m/s2. To accelerate an object (air particle) is to change its velocity over a period. Acceleration is defined technically as "the rate of change of velocity of an object with respect to time" and is given by the equation where a is the acceleration vector v is the velocity vector expressed in m/s t is time expressed in seconds. This equation gives a the units of m/(s·s), or m/s2 (read as "metres per second per second", or "metres per second squared"). An alternative equation is: where is the average acceleration (m/s2) is the initial velocity (m/s) is the final velocity (m/s) is the time interval (s) Transverse acceleration (perpendicular to velocity) causes change in direction. If it is constant in magnitude and changing in direction with the velocity, we get a circular motion. For this centripetal acceleration we have One common unit of acceleration is g-force, one g being the acceleration caused by the gravity of Earth. In classical mechanics, acceleration is related to force and mass (assumed to be constant) by way of Newton's second law: Equations in terms of other measurements The Particle acceleration of the air particles a in m/s2 of a plain sound wave is: See also Sound Sound particle Particle displacement Particle velocity External links Relationships of acoustic quantities associated with a plane progressive acoustic sound wave - pdf Acoustics Document 3::: Particle velocity (denoted or ) is the velocity of a particle (real or imagined) in a medium as it transmits a wave. 
The SI unit of particle velocity is the metre per second (m/s). In many cases this is a longitudinal wave of pressure as with sound, but it can also be a transverse wave as with the vibration of a taut string. When applied to a sound wave through a medium of a fluid like air, particle velocity would be the physical speed of a parcel of fluid as it moves back and forth in the direction the sound wave is travelling as it passes. Particle velocity should not be confused with the speed of the wave as it passes through the medium, i.e. in the case of a sound wave, particle velocity is not the same as the speed of sound. The wave moves relatively fast, while the particles oscillate around their original position with a relatively small particle velocity. Particle velocity should also not be confused with the velocity of individual molecules, which depends mostly on the temperature and molecular mass. In applications involving sound, the particle velocity is usually measured using a logarithmic decibel scale called particle velocity level. Mostly pressure sensors (microphones) are used to measure sound pressure which is then propagated to the velocity field using Green's function. Mathematical definition Particle velocity, denoted , is defined by where is the particle displacement. Progressive sine waves The particle displacement of a progressive sine wave is given by where is the amplitude of the particle displacement; is the phase shift of the particle displacement; is the angular wavevector; is the angular frequency. It follows that the particle velocity and the sound pressure along the direction of propagation of the sound wave x are given by where is the amplitude of the particle velocity; is the phase shift of the particle velocity; is the amplitude of the acoustic pressure; is the phase shift of the acoustic pressure. Taking the La Document 4::: The speed of sound is the distance travelled per unit of time by a sound wave as it propagates through an elastic medium. 
At , the speed of sound in air is about , or one kilometre in or one mile in . It depends strongly on temperature as well as the medium through which a sound wave is propagating. At , the speed of sound in air is about . More simply, the speed of sound is how fast vibrations travel. The speed of sound in an ideal gas depends only on its temperature and composition. The speed has a weak dependence on frequency and pressure in ordinary air, deviating slightly from ideal behavior. In colloquial speech, speed of sound refers to the speed of sound waves in air. However, the speed of sound varies from substance to substance: typically, sound travels most slowly in gases, faster in liquids, and fastest in solids. For example, while sound travels at in air, it travels at in water (almost 4.3 times as fast) and at in iron (almost 15 times as fast). In an exceptionally stiff material such as diamond, sound travels at , about 35 times its speed in air and about the fastest it can travel under normal conditions. In theory, the speed of sound is actually the speed of vibrations. Sound waves in solids are composed of compression waves (just as in gases and liquids) and a different type of sound wave called a shear wave, which occurs only in solids. Shear waves in solids usually travel at different speeds than compression waves, as exhibited in seismology. The speed of compression waves in solids is determined by the medium's compressibility, shear modulus, and density. The speed of shear waves is determined only by the solid material's shear modulus and density. In fluid dynamics, the speed of sound in a fluid medium (gas or liquid) is used as a relative measure for the speed of an object moving through the medium. The ratio of the speed of an object to the speed of sound (in the same medium) is called the object's Mach number. Objects moving at speeds g The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. 
What is the distance that sound waves travel in a given amount of time called? A. velocity of sound B. speed of sound C. momentum of sound D. force of sound Answer:
sciq-916
multiple_choice
Animals that molt their exoskeletons belong to which clade?
[ "trichina", "ecdysozoa", "spirogyra", "protists" ]
B
Relevant Documents: Document 0::: Polydactyly in stem-tetrapods should here be understood as having more than five digits to the finger or foot, a condition that was the natural state of affairs in the earliest stegocephalians during the evolution of terrestriality. The polydactyly in these largely aquatic animals is not to be confused with polydactyly in the medical sense, i.e. it was not an anomaly in the sense it was not a congenital condition of having more than the typical number of digits for a given taxon. Rather, it appears to be a result of the early evolution from a limb with a fin rather than digits. "Living tetrapods, such as the frogs, turtles, birds and mammals, are a subgroup of the tetrapod lineage. The lineage also includes finned and limbed tetrapods that are more closely related to living tetrapods than to living lungfishes." Tetrapods evolved from animals with fins such as found in lobe-finned fishes. From this condition a new pattern of limb formation evolved, where the development axis of the limb rotated to sprout secondary axes along the lower margin, giving rise to a variable number of very stout skeletal supports for a paddle-like foot. The condition is thought to have arisen from the loss of the fin ray-forming proteins actinodin 1 and actinodin 2 or modification of the expression of HOXD13. It is still unknown why exactly this happens. "SHH is produced by the mesenchymal cells of the zone of polarizing activity (ZPA) found at the posterior margin of the limbs of all vertebrates with paired appendages, including the most primitive chondrichthyian fishes. Its expression is driven by a well-conserved limb-specific enhancer called the ZRS (zone of polarizing region activity regulatory sequence) that is located approximately 1 Mb upstream of the coding sequence of Shh." Devonian taxa were polydactylous. Acanthostega had eight digits on both the hindlimbs and forelimbs. 
Ichthyostega, which was both more derived and more specialized, had seven digits on the hindlimb, though th Document 1::: Arthropods are covered with a tough, resilient integument or exoskeleton of chitin. Generally the exoskeleton will have thickened areas in which the chitin is reinforced or stiffened by materials such as minerals or hardened proteins. This happens in parts of the body where there is a need for rigidity or elasticity. Typically the mineral crystals, mainly calcium carbonate, are deposited among the chitin and protein molecules in a process called biomineralization. The crystals and fibres interpenetrate and reinforce each other, the minerals supplying the hardness and resistance to compression, while the chitin supplies the tensile strength. Biomineralization occurs mainly in crustaceans. In insects and arachnids, the main reinforcing materials are various proteins hardened by linking the fibres in processes called sclerotisation and the hardened proteins are called sclerotin. The dorsal tergum, ventral sternum, and the lateral pleura form the hardened plates or sclerites of a typical body segment. In either case, in contrast to the carapace of a tortoise or the cranium of a vertebrate, the exoskeleton has little ability to grow or change its form once it has matured. Except in special cases, whenever the animal needs to grow, it moults, shedding the old skin after growing a new skin from beneath. Microscopic structure A typical arthropod exoskeleton is a multi-layered structure with four functional regions: epicuticle, procuticle, epidermis and basement membrane. Of these, the epicuticle is a multi-layered external barrier that, especially in terrestrial arthropods, acts as a barrier against desiccation. The strength of the exoskeleton is provided by the underlying procuticle, which is in turn secreted by the epidermis. 
Arthropod cuticle is a biological composite material, consisting of two main portions: fibrous chains of alpha-chitin within a matrix of silk-like and globular proteins, of which the best-known is the rubbery protein called resilin. The rel Document 2::: Osteoderms are bony deposits forming scales, plates, or other structures based in the dermis. Osteoderms are found in many groups of extant and extinct reptiles and amphibians, including lizards, crocodilians, frogs, temnospondyls (extinct amphibians), various groups of dinosaurs (most notably ankylosaurs and stegosaurians), phytosaurs, aetosaurs, placodonts, and hupehsuchians (marine reptiles with possible ichthyosaur affinities). Osteoderms are uncommon in mammals, although they have occurred in many xenarthrans (armadillos and the extinct glyptodonts and mylodontid ground sloths). The heavy, bony osteoderms have evolved independently in many different lineages. The armadillo osteoderm is believed to develop in subcutaneous dermal tissues. These varied structures should be thought of as anatomical analogues, not homologues, and do not necessarily indicate monophyly. The structures are however derived from scutes, common to all classes of amniotes and are an example of what has been termed deep homology. In many cases, osteoderms may function as defensive armor. Osteoderms are composed of bone tissue, and are derived from a scleroblast neural crest cell population during embryonic development of the organism. The scleroblastic neural crest cell population shares some homologous characteristics associated with the dermis. Neural crest cells, through epithelial-to-mesenchymal transition, are thought to contribute to osteoderm development. The osteoderms of modern crocodilians are heavily vascularized, and can function as both armor and as heat-exchangers, allowing these large reptiles to rapidly raise or lower their temperature. 
Another function is to neutralize acidosis, caused by being submerged under water for longer periods of time and leading to the accumulation of carbon dioxide in the blood. The calcium and magnesium in the dermal bone will release alkaline ions into the bloodstream, acting as a buffer against acidification of the body fluids. See also Ex Document 3::: Several organisms are capable of rolling locomotion. However, true wheels and propellers—despite their utility in human vehicles—do not play a significant role in the movement of living things (with the exception of certain flagella, which work like corkscrews). Biologists have offered several explanations for the apparent absence of biological wheels, and wheeled creatures have appeared often in speculative fiction. Given the ubiquity of the wheel in human technology, and the existence of biological analogues of many other technologies (such as wings and lenses), the lack of wheels in the natural world would seem to demand explanation—and the phenomenon is broadly explained by two main factors. First, there are several developmental and evolutionary obstacles to the advent of a wheel by natural selection, addressing the question "Why can't life evolve wheels?" Secondly, wheels are often at a competitive disadvantage when compared with other means of propulsion (such as walking, running, or slithering) in natural environments, addressing the question "If wheels evolve, why might they be rare nonetheless?" This environment-specific disadvantage also explains why humans abandoned the wheel in certain regions at least once in history. Known instances of rotation in biology There exist two distinct modes of locomotion using rotation: first, simple rolling; and second, the use of wheels or propellers, which spin on an axle or shaft, relative to a fixed body. While many creatures employ the former mode, the latter is restricted to microscopic, single-celled organisms. 
Rolling Some organisms use rolling as a means of locomotion. These examples do not constitute the use of a wheel, as the organism rotates as a whole, rather than employing separate parts which rotate independently. Several species of elongate organisms form their bodies into a loop to roll, including certain caterpillars (which do so to escape danger), tiger beetle larvae, myriapods, mantis shrimp, Arm Document 4::: Cephalization is an evolutionary trend in which, over many generations, the mouth, sense organs, and nerve ganglia become concentrated at the front end of an animal, producing a head region. This is associated with movement and bilateral symmetry, such that the animal has a definite head end. This led to the formation of a highly sophisticated brain in three groups of animals, namely the arthropods, cephalopod molluscs, and vertebrates. Animals without bilateral symmetry Cnidaria, such as the radially symmetrical Hydrozoa, show some degree of cephalization. The Anthomedusae have a head end with their mouth, photoreceptive cells, and a concentration of neural cells. Bilateria Cephalization is a characteristic feature of the Bilateria, a large group containing the majority of animal phyla. These have the ability to move, using muscles, and a body plan with a front end that encounters stimuli first as the animal moves forwards, and accordingly has evolved to contain many of the body's sense organs, able to detect light, chemicals, and gravity. There is often also a collection of nerve cells able to process the information from these sense organs, forming a brain in several phyla and one or more ganglia in others. Acoela The Acoela are basal bilaterians, part of the Xenacoelomorpha. They are small and simple animals, and have very slightly more nerve cells at the head end than elsewhere, not forming a distinct and compact brain. This represents an early stage in cephalization. 
Flatworms The Platyhelminthes (flatworms) have a more complex nervous system than the Acoela, and are lightly cephalized, for instance having an eyespot above the brain, near the front end. Complex active bodies The philosopher Michael Trestman noted that three bilaterian phyla, namely the arthropods, the molluscs in the shape of the cephalopods, and the chordates, were distinctive in having "complex active bodies", something that the acoels and flatworms did not have. Any such animal, whe The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Animals that molt their exoskeletons belong to which clade? A. trichina B. ecdysozoa C. spirogyra D. protists Answer:
sciq-10643
multiple_choice
What do monotremes have instead of a uterus and vagina?
[ "urethra", "pouch", "cloaca", "endometrium" ]
C
Relevant Documents: Document 0::: This list of related male and female reproductive organs shows how the male and female reproductive organs and the development of the reproductive system are related, sharing a common developmental path. This makes them biological homologues. These organs differentiate into the respective sex organs in males and females. List Internal organs External organs The external genitalia of both males and females have similar origins. They arise from the genital tubercle that forms anterior to the cloacal folds (proliferating mesenchymal cells around the cloacal membrane). The caudal aspect of the cloacal folds further subdivides into the posterior anal folds and the anterior urethral folds. Bilateral to the urethral fold, genital swellings (tubercles) become prominent. These structures are the future scrotum and labia majora in males and females, respectively. The genital tubercles of an eight-week-old embryo of either sex are identical. They both have a glans area, which will go on to form the glans clitoridis (females) or glans penis (males), a urogenital fold and groove, and an anal tubercle. At around ten weeks, the external genitalia are still similar. At the base of the glans, there is a groove known as the coronal sulcus or corona glandis. It is the site of attachment of the future prepuce. Just anterior to the anal tubercle, the caudal end of the left and right urethral folds fuse to form the urethral raphe. The lateral part of the genital tubercle (called the lateral tubercle) grows longitudinally and is about the same length in either sex. Human physiology The male external genitalia include the penis and the scrotum. The female external genitalia include the clitoris, the labia, and the vaginal opening, which are collectively called the vulva. External genitalia vary widely in external appearance among different people.
One difference between the glans penis and the glans clitoridis is that the glans clitoridis packs nerve endings into a volume only about Document 1::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics Document 2::: In anatomy, a lobe is a clear anatomical division or extension of an organ (as seen for example in the brain, lung, liver, or kidney) that can be determined without the use of a microscope at the gross anatomy level. This is in contrast to the much smaller lobule, which is a clear division only visible under the microscope. Interlobar ducts connect lobes and interlobular ducts connect lobules. Examples of lobes The four main lobes of the brain the frontal lobe the parietal lobe the occipital lobe the temporal lobe The three lobes of the human cerebellum the flocculonodular lobe the anterior lobe the posterior lobe The two lobes of the thymus The two and three lobes of the lungs Left lung: superior and inferior Right lung: superior, middle, and inferior The four lobes of the liver Left lobe of liver Right lobe of liver Quadrate lobe of liver Caudate lobe of liver The renal lobes of the kidney Earlobes Examples of lobules the cortical lobules of the kidney the testicular lobules of the testis the lobules of the mammary gland the pulmonary lobules of the lung the lobules of the thymus Document 3::: Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women. The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development. 
Current status of girls and women in STEM education Overall trends in STEM education Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle. Learning achievement in STEM education Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and Document 4::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. 
It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do monotremes have instead of a uterus and vagina? A. urethra B. pouch C. cloaca D. endometrium Answer:
sciq-5663
multiple_choice
What kind of fibers are used to transport telephone and television signals?
[ "hair fibers", "process fibers", "optical fibers", "touch fibers" ]
C
Relevant Documents: Document 0::: Physical media refers to the physical materials that are used to store or transmit information in data communications. These physical media are generally physical objects made of materials such as copper or glass. They can be touched and felt, and have physical properties such as weight and color. For a number of years, copper and glass were the only media used in computer networking. The term physical media can also be used to describe data storage media like records, cassettes, VHS, LaserDiscs, CDs, DVDs, and Blu-rays, especially when compared with modern streaming media or content that has been downloaded from the Internet onto a hard drive or other storage device as files. Types of physical media Copper wire Copper wire is currently the most commonly used type of physical media due to the abundance of copper in the world, as well as its ability to conduct electrical power. Copper is also one of the cheaper metals which makes it more feasible to use. Most copper wires used in data communications today have eight strands of copper, organized in unshielded twisted pairs, or UTP. The wires are twisted around one another because it reduces electrical interference from outside sources. In addition to UTP, some wires use shielded twisted pairs (STP), which reduce electrical interference even further. The way copper wires are twisted around one another also has an effect on data rates. Category 3 cable (Cat3) has three to four twists per foot and can support speeds of 10 Mbit/s. Category 5 cable (Cat5) is newer and has three to four twists per inch, which results in a maximum data rate of 100 Mbit/s. In addition, there are category 5e (Cat5e) cables which can support speeds of up to 1,000 Mbit/s, and more recently, category 6 cables (Cat6), which support data rates of up to 10,000 Mbit/s (i.e., 10 Gbit/s). On average, copper wire costs around $1 per foot.
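The category-to-rate figures above can be captured in a small lookup table. The rates are taken directly from the excerpt (Cat3 10 Mbit/s, Cat5 100, Cat5e 1,000, Cat6 10,000); the helper function is a hypothetical illustration, not part of any standard API:

```python
# Maximum data rates per twisted-pair category, as listed in the text above.
CABLE_RATES_MBPS = {
    "Cat3": 10,
    "Cat5": 100,
    "Cat5e": 1_000,
    "Cat6": 10_000,
}

def minimum_category(required_mbps):
    """Return the lowest-rated listed category that meets a required data rate."""
    for cat, rate in sorted(CABLE_RATES_MBPS.items(), key=lambda kv: kv[1]):
        if rate >= required_mbps:
            return cat
    raise ValueError(f"No listed category supports {required_mbps} Mbit/s")

print(minimum_category(500))  # Cat5e
```

For example, a 500 Mbit/s link exceeds Cat5's 100 Mbit/s ceiling, so the first adequate category is Cat5e.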
Optical fiber Optical fiber is a thin and flexible piece of fiber made of glass or plastic. Unlike copper w Document 1::: In electrical engineering, a transmission line is a specialized cable or other structure designed to conduct electromagnetic waves in a contained manner. The term applies when the conductors are long enough that the wave nature of the transmission must be taken into account. This applies especially to radio-frequency engineering because the short wavelengths mean that wave phenomena arise over very short distances (this can be as short as millimetres depending on frequency). However, the theory of transmission lines was historically developed to explain phenomena on very long telegraph lines, especially submarine telegraph cables. Transmission lines are used for purposes such as connecting radio transmitters and receivers with their antennas (they are then called feed lines or feeders), distributing cable television signals, trunklines routing calls between telephone switching centres, computer network connections and high speed computer data buses. RF engineers commonly use short pieces of transmission line, usually in the form of printed planar transmission lines, arranged in certain patterns to build circuits such as filters. These circuits, known as distributed-element circuits, are an alternative to traditional circuits using discrete capacitors and inductors. Overview Ordinary electrical cables suffice to carry low frequency alternating current (AC), such as mains power, which reverses direction 100 to 120 times per second, and audio signals. However, they cannot be used to carry currents in the radio frequency range, above about 30 kHz, because the energy tends to radiate off the cable as radio waves, causing power losses. Radio frequency currents also tend to reflect from discontinuities in the cable such as connectors and joints, and travel back down the cable toward the source. 
These reflections act as bottlenecks, preventing the signal power from reaching the destination. Transmission lines use specialized construction, and impedance matching, t Document 2::: Category 1 cable, also known as Cat 1, Level 1, or voice-grade copper, is a grade of unshielded twisted pair cabling designed for telephone communications, and at one time was the most common on-premises wiring. The maximum frequency suitable for transmission over Cat 1 cable is 1 MHz, but Cat 1 is not currently considered adequate for data transmission (though it was at one time used for that purpose on the Apple Macintosh starting in the late 1980s in the form of Farallon Computing's//NetTopia's PhoneNet, an implementation of Apple's LocalTalk networking hardware standard). Although not an official category standard established by TIA/EIA, Category 1 has become the de facto name given to Level 1 cables originally defined by Anixter International, the distributor. Cat 1 cable was typically used for networks that carry only voice traffic, for example telephones. Official TIA/EIA-568 standards have only been established for cables of Category 3 ratings or above. See also Category 2 cable Category 3 cable Category 4 cable Category 5 cable Document 3::: On-premises wiring (customer premises wiring) is customer-owned telecommunication transmission or distribution lines. The transmission lines may be metallic (copper) or optical fiber, and may be installed within or between buildings. Premises wiring may consist of horizontal wiring, vertical wiring, and backbone cabling. It may extend from the point-of-entry to user work areas. Any type of telecommunications or data wiring is considered premises wiring, including telephone, computer/data, intercom, closed-circuit television. Premises networks are wired worldwide, across every industry, in both small and large-scale applications. Any type or number of topologies may be used – star, bus, ring, etc. 
In 1989, the United States Federal Communications Commission (FCC) deregulated charges for maintaining at home inside wiring; the corresponding monthly charge was dropped January 1990. Ownership The ownership of on-premises wiring varies between jurisdictions: It depends on the location of the demarcation point. The location determines ownership and responsibility for maintenance and repair. In the United States and Canada, most premises wiring is owned by the customer. There generally is a demarcation point "as close to the poles" as possible. For many installations, this is a network interface device mounted on the outside of the building. In some cases, it is a minimum-point-of-entry (MPOE) location inside the building. In the United Kingdom, the demarcation point is the wall jack, and hence most of the on-premises wiring is the property of the telephone company. See also Customer-premises equipment Demarc extension Riser cable Structured cabling Document 4::: The F connector (also F-type connector) is a coaxial RF connector commonly used for "over the air" terrestrial television, cable television and universally for satellite television and cable modems, usually with RG-6/U cable or with RG-59/U cable. The F connector was invented by Eric E. Winston in the early 1950s while working for Jerrold Electronics on their development of cable television. In the 1970s, it became commonplace on VHF, and later UHF, television antenna connections in the United States, as coaxial cables replaced twin-lead. It is now specified in IEC 61169-24:2019. Description The F connector is an inexpensive, gendered, threaded, compression connector for radio frequency signals. It has good 75 Ω impedance match for frequencies well over 1 GHz and has usable bandwidth up to several GHz. Connectors mate using a 3/8-32UNEF thread. The female connector has a socket for the center conductor and external threads. The male connector has a center pin, and a captive nut with internal threads. 
The design allows for low-cost construction, where cables are terminated almost exclusively with male connectors. The coaxial cable center conductor forms the pin, and cable dielectric extends up to the mating face of the connector. Thus, the male connector consists of only a body, which is generally crimped onto or screwed over the cable shielding braid, and a captive nut, neither of which require tight tolerances. Push-on versions are also available. Female connectors are typically used on bulkheads or as couplers, often being secured with the same threads as for the connectors. They can be manufactured as a single piece, with center sockets and dielectric, entirely at the factory where tolerances can easily be controlled. This design is sensitive to the surface properties of the inner conductor (which must be solid wire, not stranded). Weatherproofing The F connector is not weatherproof. Neither the threads nor the joint between male connector body and capt The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What kind of fibers are used to transport telephone and television signals? A. hair fibers B. process fibers C. optical fibers D. touch fibers Answer:
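The transmission-line excerpt above notes that reflections arise at discontinuities and that impedance matching suppresses them. The standard textbook measure of such a mismatch is the reflection coefficient Γ = (Z_L − Z_0)/(Z_L + Z_0); this formula is conventional transmission-line theory rather than something stated in the excerpt:

```python
def reflection_coefficient(z_load, z_line):
    """Fraction of the incident voltage wave reflected at the load.

    z_load -- load impedance in ohms
    z_line -- characteristic impedance of the line in ohms
    """
    return (z_load - z_line) / (z_load + z_line)

# A 50-ohm load on a 75-ohm run (e.g. an F-connector cable) reflects
# 20% of the incident voltage wave; a matched 75-ohm load reflects nothing.
print(reflection_coefficient(50.0, 75.0))  # -0.2
print(reflection_coefficient(75.0, 75.0))  # 0.0
```

The 75 Ω value matches the F connector's impedance mentioned above; the 50 Ω load is an invented example of a mismatch.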
sciq-5703
multiple_choice
What captures carbon dioxide as it is emitted by a power plant before it enters the atmosphere?
[ "chemical sequestration", "carbon sequestration", "oxide sequestration", "nitrogen sequestration" ]
B
Relevant Documents: Document 0::: Carbon sequestration (or carbon storage) is the process of storing carbon in a carbon pool. Carbon sequestration is a naturally occurring process but it can also be enhanced or achieved with technology, for example within carbon capture and storage projects. There are two main types of carbon sequestration: geologic and biologic (also called biosequestration). Carbon dioxide (CO2) is naturally captured from the atmosphere through biological, chemical, and physical processes. These changes can be accelerated through changes in land use and agricultural practices, such as converting crop land into land for non-crop fast growing plants. Artificial processes have been devised to produce similar effects, including large-scale, artificial capture and sequestration of industrially produced CO2 using subsurface saline aquifers or aging oil fields. Other technologies that work with carbon sequestration include bio-energy with carbon capture and storage, biochar, enhanced weathering, direct air carbon capture and sequestration (DACCS). Forests, kelp beds, and other forms of plant life absorb carbon dioxide from the air as they grow, and bind it into biomass. However, these biological stores are considered volatile carbon sinks as the long-term sequestration cannot be guaranteed. For example, natural events, such as wildfires or disease, economic pressures and changing political priorities can result in the sequestered carbon being released back into the atmosphere. Carbon dioxide that has been removed from the atmosphere can also be stored in the Earth's crust by injecting it into the subsurface, or in the form of insoluble carbonate salts (mineral sequestration). These methods are considered non-volatile because they remove carbon from the atmosphere and sequester it indefinitely and presumably for a considerable duration (thousands to millions of years).
To enhance carbon sequestration processes in oceans the following technologies have been proposed but none have achieved lar Document 1::: Activated carbon, also called activated charcoal, is a form of carbon commonly used to filter contaminants from water and air, among many other uses. It is processed (activated) to have small, low-volume pores that increase the surface area available for adsorption (which is not the same as absorption) or chemical reactions. Activation is analogous to making popcorn from dried corn kernels: popcorn is light, fluffy, and its kernels have a high surface-area-to-volume ratio. Activated is sometimes replaced by active. Due to its high degree of microporosity, one gram of activated carbon has a surface area in excess of as determined by gas adsorption. Charcoal, before activation, has a specific surface area in the range of . An activation level sufficient for useful application may be obtained solely from high surface area. Further chemical treatment often enhances adsorption properties. Activated carbon is usually derived from waste products such as coconut husks; waste from paper mills has been studied as a source. These bulk sources are converted into charcoal before being 'activated'. When derived from coal it is referred to as activated coal. Activated coke is derived from coke. Uses Activated carbon is used in methane and hydrogen storage, air purification, capacitive deionization, supercapacitive swing adsorption, solvent recovery, decaffeination, gold purification, metal extraction, water purification, medicine, sewage treatment, air filters in respirators, filters in compressed air, teeth whitening, production of hydrogen chloride, edible electronics, and many other applications. Industrial One major industrial application involves use of activated carbon in metal finishing for purification of electroplating solutions. 
For example, it is the main purification technique for removing organic impurities from bright nickel plating solutions. A variety of organic chemicals are added to plating solutions for improving their deposit qualities and for enhancing Document 2::: Carbon dioxide is a chemical compound with the chemical formula CO2. It is made up of molecules that each have one carbon atom covalently double bonded to two oxygen atoms. It is found in the gas state at room temperature, and as the source of available carbon in the carbon cycle, atmospheric CO2 is the primary carbon source for life on Earth. In the air, carbon dioxide is transparent to visible light but absorbs infrared radiation, acting as a greenhouse gas. Carbon dioxide is soluble in water and is found in groundwater, lakes, ice caps, and seawater. When carbon dioxide dissolves in water, it forms carbonate and mainly bicarbonate (HCO3−), which causes ocean acidification as atmospheric CO2 levels increase. It is a trace gas in Earth's atmosphere at 421 parts per million (ppm), or about 0.04% (as of May 2022), having risen from pre-industrial levels of 280 ppm or about 0.025%. Burning fossil fuels is the primary cause of these increased concentrations and also the primary cause of climate change. Its concentration in Earth's pre-industrial atmosphere since late in the Precambrian was regulated by organisms and geological phenomena. Plants, algae and cyanobacteria use energy from sunlight to synthesize carbohydrates from carbon dioxide and water in a process called photosynthesis, which produces oxygen as a waste product. In turn, oxygen is consumed and CO2 is released as waste by all aerobic organisms when they metabolize organic compounds to produce energy by respiration. CO2 is released from organic materials when they decay or combust, such as in forest fires. Since plants require CO2 for photosynthesis, and humans and animals depend on plants for food, CO2 is necessary for the survival of life on earth.
Carbon dioxide is 53% more dense than dry air, but is long lived and thoroughly mixes in the atmosphere. About half of excess emissions to the atmosphere are absorbed by land and ocean carbon sinks. These sinks can become saturated and are volatile, as decay and wildfires result i Document 3::: A breakthrough curve in adsorption is the course of the effluent adsorptive concentration at the outlet of a fixed bed adsorber. Breakthrough curves are important for adsorptive separation technologies and for the characterization of porous materials. Importance Since almost all adsorptive separation processes are dynamic -meaning, that they are running under flow - testing porous materials for those applications for their separation performance has to be tested under flow as well. Since separation processes run with mixtures of different components, measuring several breakthrough curves results in thermodynamic mixture equilibria - mixture sorption isotherms, that are hardly accessible with static manometric sorption characterization. This enables the determination of sorption selectivities in gaseous and liquid phase. The determination of breakthrough curves is the foundation of many other processes, like the pressure swing adsorption. Within this process, the loading of one adsorber is equivalent to a breakthrough experiment. Measurement A fixed bed of porous materials (e.g. activated carbons and zeolites) is pressurized and purged with a carrier gas. After becoming stationary one or more adsorptives are added to the carrier gas, resulting in a step-wise change of the inlet concentration. This is in contrast to chromatographic separation processes, where pulse-wise changes of the inlet concentrations are used. The course of the adsorptive concentrations at the outlet of the fixed bed are monitored. Results Integration of the area above the entire breakthrough curve gives the maximum loading of the adsorptive material. 
Additionally, the duration of the breakthrough experiment until a certain threshold of the adsorptive concentration at the outlet can be measured, which enables the calculation of a technically usable sorption capacity. Up to this time, the quality of the product stream can be maintained. The shape of the breakthrough curves contains informat Document 4::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests.
In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What captures carbon dioxide as it is emitted by a power plant before it enters the atmosphere? A. chemical sequestration B. carbon sequestration C. oxide sequestration D. nitrogen sequestration Answer:
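The adsorption excerpt above states that integrating the area above the breakthrough curve gives the maximum loading of the bed. A minimal numerical sketch of that statement, using trapezoidal integration (the function name, arguments, and example data are illustrative assumptions, not anything prescribed by the text):

```python
def bed_capacity(times, c_out, c_in, flow_rate):
    """Integrate the area above a breakthrough curve.

    times     -- sample times at the adsorber outlet
    c_out     -- measured effluent concentration at each time
    c_in      -- constant inlet (step) concentration
    flow_rate -- volumetric flow through the fixed bed

    Returns the total amount adsorbed: flow_rate * integral of (c_in - c_out) dt,
    computed with the trapezoidal rule.
    """
    area = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        area += 0.5 * ((c_in - c_out[i]) + (c_in - c_out[i - 1])) * dt
    return flow_rate * area

# Toy curve: outlet stays clean until t = 1, then breaks through fully by t = 2.
print(bed_capacity([0.0, 1.0, 2.0], [0.0, 0.0, 1.0], 1.0, 1.0))  # 1.5
```

An ideal step-shaped breakthrough would give area = c_in × breakthrough time; the sloped tail here adds the extra half-trapezoid.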
sciq-10647
multiple_choice
Most autotrophs make their "food" through which process, using the energy of the sun?
[ "atherosclerosis", "oculitis", "photosynthesis", "glycolysis" ]
C
Relevant Documents: Document 0::: The soil food web is the community of organisms living all or part of their lives in the soil. It describes a complex living system in the soil and how it interacts with the environment, plants, and animals. Food webs describe the transfer of energy between species in an ecosystem. While a food chain examines one linear energy pathway through an ecosystem, a food web is more complex and illustrates all of the potential pathways. Much of this transferred energy comes from the sun. Plants use the sun’s energy to convert inorganic compounds into energy-rich organic compounds, turning carbon dioxide and minerals into plant material by photosynthesis. Plant flowers exude energy-rich nectar above ground and plant roots exude acids, sugars, and ectoenzymes into the rhizosphere, adjusting the pH and feeding the food web underground. Plants are called autotrophs because they make their own energy; they are also called producers because they produce energy available for other organisms to eat. Heterotrophs are consumers that cannot make their own food. In order to obtain energy they eat plants or other heterotrophs. Above ground food webs In above ground food webs, energy moves from producers (plants) to primary consumers (herbivores) and then to secondary consumers (predators). The phrase trophic level refers to the different levels or steps in the energy pathway. In other words, the producers, consumers, and decomposers are the main trophic levels. This chain of energy transferring from one species to another can continue several more times, but eventually ends. At the end of the food chain, decomposers such as bacteria and fungi break down dead plant and animal material into simple nutrients. Methodology The nature of soil makes direct observation of food webs difficult. Since soil organisms range in size from less than 0.1 mm (nematodes) to greater than 2 mm (earthworms) there are many different ways to extract them.
Soil samples are often taken using a metal Document 1::: The trophic level of an organism is the position it occupies in a food web. A food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finish with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a food "web". Ecological communities with higher biodiversity form more complex trophic paths. The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment. History The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman). Overview The three basic ways in which organisms get food are as producers, consumers, and decomposers. Producers (autotrophs) are typically plants or algae. Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis. Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores. 
Decomposers (detritivores) break down dead plant and animal material and wastes and release it again as energy and nutrients into Document 2::: Bioenergetics is a field in biochemistry and cell biology that concerns energy flow through living systems. This is an active area of biological research that includes the study of the transformation of energy in living organisms and the study of thousands of different cellular processes such as cellular respiration and the many other metabolic and enzymatic processes that lead to production and utilization of energy in forms such as adenosine triphosphate (ATP) molecules. That is, the goal of bioenergetics is to describe how living organisms acquire and transform energy in order to perform biological work. The study of metabolic pathways is thus essential to bioenergetics. Overview Bioenergetics is the part of biochemistry concerned with the energy involved in making and breaking of chemical bonds in the molecules found in biological organisms. It can also be defined as the study of energy relationships and energy transformations and transductions in living organisms. The ability to harness energy from a variety of metabolic pathways is a property of all living organisms. Growth, development, anabolism and catabolism are some of the central processes in the study of biological organisms, because the role of energy is fundamental to such biological processes. Life is dependent on energy transformations; living organisms survive because of exchange of energy between living tissues/ cells and the outside environment. Some organisms, such as autotrophs, can acquire energy from sunlight (through photosynthesis) without needing to consume nutrients and break them down. Other organisms, like heterotrophs, must intake nutrients from food to be able to sustain energy by breaking down chemical bonds in nutrients during metabolic processes such as glycolysis and the citric acid cycle. 
Importantly, as a direct consequence of the First Law of Thermodynamics, autotrophs and heterotrophs participate in a universal metabolic network—by eating autotrophs (plants), heterotrophs ha Document 3::: Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food. Classification of consumer types The standard categorization Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores; omnivores are meat eaters and plant eaters. An extensive classification of consumer categories based on a list of feeding behaviors exists. The Getz categorization Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter.
It also distinguishes between consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage. In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal Document 4::: Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena. Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. 
The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles a The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Most autotrophs make their "food" through which process, using the energy of the sun? A. atherosclerosis B. oculitis C. photosynthesis D. glycolysis Answer:
sciq-9669
multiple_choice
Where are unsaturated fatty acids commonly found?
[ "oil", "butter", "animal products", "fish" ]
A
Relevant Documents: Document 0::: An unsaturated fat is a fat or fatty acid in which there is at least one double bond within the fatty acid chain. A fatty acid chain is monounsaturated if it contains one double bond, and polyunsaturated if it contains more than one double bond. A saturated fat has no carbon to carbon double bonds, so it has the maximum possible number of hydrogens bonded to the carbons, and is "saturated" with hydrogen atoms. To form carbon to carbon double bonds, hydrogen atoms are removed from the carbon chain. In cellular metabolism, unsaturated fat molecules contain less energy (i.e., fewer calories) than an equivalent amount of saturated fat. The greater the degree of unsaturation in a fatty acid (i.e., the more double bonds in the fatty acid) the more vulnerable it is to lipid peroxidation (rancidity). Antioxidants can protect unsaturated fat from lipid peroxidation. Composition of common fats In chemical analysis, fats are broken down to their constituent fatty acids, which can be analyzed in various ways. In one approach, fats undergo transesterification to give fatty acid methyl esters (FAMEs), which are amenable to separation and quantitation by gas chromatography. Classically, unsaturated isomers were separated and identified by argentation thin-layer chromatography. The saturated fatty acid components are almost exclusively stearic (C18) and palmitic acids (C16). Monounsaturated fats are almost exclusively oleic acid. Linolenic acid comprises most of the triunsaturated fatty acid component. Chemistry and nutrition Although polyunsaturated fats are protective against cardiac arrhythmias, a study of post-menopausal women with a relatively low fat intake showed that polyunsaturated fat is positively associated with progression of coronary atherosclerosis, whereas monounsaturated fat is not.
This probably is an indication of the greater vulnerability of polyunsaturated fats to lipid peroxidation, against which vitamin E has been shown to be protective. Examples Document 1::: A simple lipid is a fatty acid ester of different alcohols and carries no other substance. These lipids belong to a heterogeneous class of predominantly nonpolar compounds, mostly insoluble in water, but soluble in nonpolar organic solvents such as chloroform and benzene. Simple lipids: esters of fatty acids with various alcohols. a. Fats: esters of fatty acids with glycerol. Oils are fats in the liquid state. Fats are also called triglycerides because all three hydroxyl groups of glycerol are esterified. b. Waxes: Solid esters of long-chain fatty acids such as palmitic acid with aliphatic or alicyclic higher molecular weight monohydric alcohols. Waxes are water-insoluble due to the weakly polar nature of the ester group. See also Lipid Lipids Document 2::: In biochemistry and nutrition, a monounsaturated fat is a fat that contains a monounsaturated fatty acid (MUFA), a subclass of fatty acid characterized by having a double bond in the fatty acid chain with all of the remaining carbon atoms being single-bonded. By contrast, polyunsaturated fatty acids (PUFAs) have more than one double bond. Molecular description Monounsaturated fats are triglycerides containing one unsaturated fatty acid. Almost invariably that fatty acid is oleic acid (18:1 n−9). Palmitoleic acid (16:1 n−7) and cis-vaccenic acid (18:1 n−7) occur in small amounts in fats. Health Studies have shown that substituting dietary monounsaturated fat for saturated fat is associated with increased daily physical activity and resting energy expenditure. More physical activity was associated with a higher-oleic acid diet than with a palmitic acid diet. The study also indicated that diets higher in monounsaturated fats were associated with less anger and irritability.
Foods containing monounsaturated fats may affect low-density lipoprotein (LDL) cholesterol and high-density lipoprotein (HDL) cholesterol. Levels of oleic acid along with other monounsaturated fatty acids in red blood cell membranes were positively associated with breast cancer risk. The saturation index (SI) of the same membranes was inversely associated with breast cancer risk. Monounsaturated fats and low SI in erythrocyte membranes are predictors of postmenopausal breast cancer. Both of these variables depend on the activity of the enzyme delta-9 desaturase (Δ9-d). In children, consumption of monounsaturated oils is associated with healthier serum lipid profiles. The Mediterranean diet is one heavily influenced by monounsaturated fats. People in Mediterranean countries consume more total fat than people in Northern European countries, but most of the fat is in the form of monounsaturated fatty acids from olive oil and omega-3 fatty acids from fish, vegetables, and certain meats like lamb, while consumption of satur Document 3::: Per 100 g, soybean oil has 16 g of saturated fat, 23 g of monounsaturated fat, and 58 g of polyunsaturated fat. The major unsaturated fatty acids in soybean oil triglycerides are the polyunsaturates alpha-linolenic acid (C-18:3), 7-10%, and linoleic acid (C-18:2), 51%; and the monounsatu Document 4::: A saponifiable lipid contains an ester functional group. They are made up of long-chain carboxylic (fatty) acids connected to an alcoholic functional group through the ester linkage, which can undergo a saponification reaction. The fatty acids are released upon base-catalyzed ester hydrolysis to form ionized salts. The primary saponifiable lipids are free fatty acids, neutral glycerolipids, glycerophospholipids, sphingolipids, and glycolipids. By comparison, the non-saponifiable class of lipids is made up of terpenes, including fat-soluble A and E vitamins, and certain steroids, such as cholesterol.
Applications Saponifiable lipids have relevant applications as a source of biofuel and can be extracted from various forms of biomass to produce biodiesel. See also Lipids Simple lipid The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Where are unsaturated fatty acids commonly found? A. oil B. butter C. animal products D. fish Answer:
sciq-9151
multiple_choice
The cell starting meiosis is called a what?
[ "primary gamete", "zygote", "secondary oocyte", "primary oocyte" ]
D
Relevant Documents: Document 0::: In cellular biology, a somatic cell, or vegetal cell, is any biological cell forming the body of a multicellular organism other than a gamete, germ cell, gametocyte or undifferentiated stem cell. Somatic cells compose the body of an organism and divide through the process of binary fission and mitotic division. In contrast, gametes are cells that fuse during sexual reproduction and germ cells are cells that give rise to gametes. Stem cells also can divide through mitosis, but are different from somatic cells in that they differentiate into diverse specialized cell types. In mammals, somatic cells make up all the internal organs, skin, bones, blood and connective tissue, while mammalian germ cells give rise to spermatozoa and ova which fuse during fertilization to produce a cell called a zygote, which divides and differentiates into the cells of an embryo. There are approximately 220 types of somatic cell in the human body. Theoretically, these cells are not germ cells (the source of gametes); they transmit their mutations to their cellular descendants (if they have any), but not to the organism's descendants. However, in sponges, non-differentiated somatic cells form the germ line and, in Cnidaria, differentiated somatic cells are the source of the germline. Mitotic cell division is only seen in diploid somatic cells. Only some cells like germ cells take part in reproduction. Evolution As multicellularity is theorized to have evolved many times, so did sterile somatic cells. The evolution of an immortal germline producing specialized somatic cells involved the emergence of mortality, and can be viewed in its simplest version in volvocine algae. Those species with a separation between sterile somatic cells and a germline are called Weismannists.
Weismannist development is relatively rare (e.g., vertebrates, arthropods, Volvox), as many species have the capacity for somatic embryogenesis (e.g., land plants, most algae, and numerous invertebrates). Genetics and chrom Document 1::: Microgametogenesis is the process in plant reproduction where a microgametophyte develops in a pollen grain to the three-celled stage of its development. In flowering plants it occurs with a microspore mother cell inside the anther of the plant. When the microgametophyte is first formed inside the pollen grain four sets of fertile cells called sporogenous cells are apparent. These cells are surrounded by a wall of sterile cells called the tapetum, which supplies food to the cell and eventually becomes the cell wall for the pollen grain. These sets of sporogenous cells eventually develop into diploid microspore mother cells. These microspore mother cells, also called microsporocytes, then undergo meiosis and become four microspore haploid cells. These new microspore cells then undergo mitosis and form a tube cell and a generative cell. The generative cell then undergoes mitosis one more time to form two male gametes, also called sperm. See also Gametogenesis Document 2::: Gametogenesis is a biological process by which diploid or haploid precursor cells undergo cell division and differentiation to form mature haploid gametes. Depending on the biological life cycle of the organism, gametogenesis occurs by meiotic division of diploid gametocytes into various gametes, or by mitosis. For example, plants produce gametes through mitosis in gametophytes. The gametophytes grow from haploid spores after sporic meiosis. The existence of a multicellular, haploid phase in the life cycle between meiosis and gametogenesis is also referred to as alternation of generations. In other words, it is the biological process in which haploid or diploid cells divide and differentiate to create mature haploid gametes.
It can take place either through mitosis or through meiotic division of diploid gametocytes into different gametes, depending on an organism's biological life cycle. For instance, gametophytes in plants undergo mitosis to produce gametes. Both male and female have different forms. In animals Animals produce gametes directly through meiosis from diploid mother cells in organs called gonads (testis in males and ovaries in females). In mammalian germ cell development, sexually dimorphic gametes differentiate into primordial germ cells from pluripotent cells during initial mammalian development. Males and females of a species that reproduce sexually have different forms of gametogenesis: spermatogenesis (male): Immature germ cells are produced in a man's testes. To mature into sperm, males' immature germ cells, or spermatogonia, go through spermatogenesis during adolescence. Spermatogonia are diploid cells that become larger as they divide through mitosis, becoming primary spermatocytes. These diploid cells undergo meiotic division to create secondary spermatocytes. These secondary spermatocytes undergo a second meiotic division to produce immature sperm, or spermatids. These spermatids undergo spermiogenesis in order to develop into sperm. LH, FSH, GnRH
Oogonium —(Oocytogenesis)—> Primary Oocyte —(Meiosis I)—> First Polar body (Discarded afterward) + Secondary oocyte —(Meiosis II)—> Second Polar Body (Discarded afterward) + Ovum Oocyte meiosis, important to all animal life cycles yet unlike all other instances of animal cell division, occurs completely without the aid of spindle-coordinating centrosomes. The creation of oogonia The creation of oogonia traditionally doesn't belong to oogenesis proper, but, instead, to the common process of gametogenesis, which, in the female human, begins with the processes of folliculogenesis, oocytogenesis, and ootidogenesis. Oogonia enter meiosis during embryonic development, becoming oocytes. Meiosis begins with DNA replication and meiotic crossing over. It then stops in early prophase. Maintenance of meiotic arrest Mammalian oocytes are maintained in meiotic prophase arrest for a very long time—months in mice, years in humans. Initially the arrest is due to lack of sufficient cell cycle proteins to allow meiotic progression. However, as the oocyte grows, these proteins are synthesized, and meiotic arrest becomes dependent on cyclic AMP. The cyclic AMP is generated by the oocyte by adenylyl cyclase in the oocyte membrane. The adenylyl cyclase is kept active by a constitutively active G-protein-coupled Document 4::: Germ-Soma Differentiation is the process by which organisms develop distinct germline and somatic cells. The development of cell differentiation has been one of the critical aspects of the evolution of multicellularity and sexual reproduction in organisms. Multicellularity has evolved upwards of 25 times, and due to this there is great possibility that multiple factors have shaped the differentiation of cells. There are three general types of cells: germ cells, somatic cells, and stem cells. Germ cells lead to the production of gametes, while somatic cells perform all other functions within the body. 
Within the broad category of somatic cells, there is further specialization as cells become specified to certain tissues and functions. In addition, stem cells are undifferentiated cells which can develop into a specialized cell and are the earliest type of cell in a cell lineage. Due to the differentiation in function, somatic cells are found only in multicellular organisms, as in unicellular ones the purposes of somatic and germ cells are consolidated in one cell. All organisms with germ-soma differentiation are eukaryotic, and represent an added level of specialization to multicellular organisms. Pure germ-soma differentiation has developed in a select number of eukaryotes (called Weismannists); included in this category are vertebrates and arthropods; however, land plants, green algae, red algae, brown algae, and fungi have partial differentiation. While a significant portion of organisms with germ-soma differentiation are asexual, this distinction has been imperative in the development of sexual reproduction; the specialization of certain cells into germ cells is fundamental for meiosis and recombination. Weismann barrier The strict division between somatic and germ cells is called the Weismann barrier, in which genetic information passed onto offspring is found only in germ cells. This occurs only in select organisms, however some without a Weismann barrier do pre
sciq-8653
multiple_choice
Androgen secretion and sperm production are both controlled by hypothalamic and which other hormones?
[ "adrenal", "pituitary", "salivary", "Testes" ]
B
Relevant Documents: Document 0::: The following is a list of hormones found in Homo sapiens. Spelling is not uniform for many hormones. For example, current North American and international usage uses estrogen and gonadotropin, while British usage retains the Greek digraph in oestrogen and favours the earlier spelling gonadotrophin. Hormone listing Steroid Document 1::: Reproductive biology includes both sexual and asexual reproduction. Reproductive biology includes a wide number of fields: Reproductive systems Endocrinology Sexual development (Puberty) Sexual maturity Reproduction Fertility Human reproductive biology Endocrinology Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands. Reproductive systems Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring. Female reproductive system The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth. These structures include: Ovaries Oviducts Uterus Vagina Mammary Glands Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female. Male reproductive system The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia. Testosterone, an androgen, although present in both males and females, is relatively more abundant in males.
Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system. However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract. Animal Reproductive Biology Animal reproduction oc
Testes produce sperm, which is released through ducts (exocrine), and they also secrete testosterone into the bloodstream (endocrine). Similarly, ovaries release ova through ducts (exocrine) and produce estrogen and progesterone (endocrine). Salivary glands secrete saliva through ducts to aid in digestion (exocrine) and produce epidermal growth factor and insulin-like growth factor (endocrine). Anatomy Heterocrine glands typically have a complex structure that enables them to produce and release different types of secretions. The two primary components of these glands are: Endocrine component: Heterocrine glands produce hormones, which are chemical messengers that travel through the bloodstream to target organs or tissues. These hormones play a vital role in regulating numerous physiological processes, such as metabolism, growth, and the immune response. Exocrine component: In addition to their endocrine function, heterocrine glands secrete substances directly into ducts or cavities, which can be released through various body openings. These exocrine secretions can include enzymes, mucus, and other substances that aid in digestion, lubrication, or protection. Characteristics and Func Document 4::: Hormonal imprinting (HI) is a phenomenon which takes place at the first encounter between a hormone and its developing receptor in the critical periods of life (in unicellulars during the whole life) and determines the later signal transduction capacity of the cell. The most important period in mammals is the perinatal one, however this system can be imprinted at weaning, at puberty and in case of continuously dividing cells during the whole life. Faulty imprinting is caused by drugs, environmental pollutants and other hormone-like molecules present in excess at the critical periods with lifelong receptorial, morphological, biochemical and behavioral consequences. 
HI is transmitted to hundreds of progeny generations in unicellulars and (as has been proven) also to a few generations in mammals. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Androgen secretion and sperm production are both controlled by hypothalamic and which other hormones? A. adrenal B. pituitary C. salivary D. Testes Answer:
sciq-7294
multiple_choice
During what time period did poor air quality become a problem?
[ "Chernobyl disaster", "second world war", "coal industry boom", "industrial revolution" ]
D
Relevant Documents: Document 0::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools.
Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include Document 1::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it has received funding from the Department for Children, Schools and Families and the Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics Document 2::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. 
Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 3::: Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women. The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development. Current status of girls and women in STEM education Overall trends in STEM education Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. 
Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle. Learning achievement in STEM education Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and Document 4::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases; (b) decreases; (c) stays the same; (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. During what time period did poor air quality become a problem? A. Chernobyl disaster B. second world war C. coal industry boom D. industrial revolution Answer:
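The adiabatic-expansion ConcepTest quoted in the passage above has "decreases" as its answer. A minimal Python sketch confirms this via T·V^(γ−1) = const; the specific gas and expansion ratio are assumptions (a monatomic ideal gas with γ = 5/3, doubling its volume), not details from the source:

```python
# Sketch: temperature after reversible adiabatic expansion of an ideal gas.
# Assumptions (not from the source): monatomic gas, gamma = 5/3, volume doubles.
gamma = 5.0 / 3.0                       # heat-capacity ratio Cp/Cv
T1, V1 = 300.0, 1.0                     # initial temperature (K) and volume (arb. units)
V2 = 2.0 * V1                           # the gas expands to twice its volume
T2 = T1 * (V1 / V2) ** (gamma - 1.0)    # from T * V**(gamma - 1) = const
print(round(T2, 1))                     # ≈ 189.0 K, lower than T1: temperature decreases
```

Any expansion (V2 > V1) with γ > 1 gives T2 < T1, so the qualitative answer does not depend on the assumed numbers.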
sciq-6068
multiple_choice
Platinum and gold are useful materials for constructing circuits because of their ability to resist what?
[ "oxidation", "nitrogen", "decomposition", "Electricity" ]
A
Relevant Documents: Document 0::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score.
This and AP Physics C: Mechanics are the shortest AP exams, with Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases; (b) decreases; (c) stays the same; (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations. Academic courses Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence.
Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism. Example universities with CSE majors and departments APJ Abdul Kalam Technological University American International University-B Document 3::: Material is a substance or mixture of substances that constitutes an object. Materials can be pure or impure, living or non-living matter. Materials can be classified on the basis of their physical and chemical properties, or on their geological origin or biological function. Materials science is the study of materials, their properties and their applications. Raw materials can be processed in different ways to influence their properties, by purification, shaping or the introduction of other materials. New materials can be produced from raw materials by synthesis. In industry, materials are inputs to manufacturing processes to produce products or more complex materials. Historical elements Materials chart the history of humanity. The system of the three prehistoric ages (Stone Age, Bronze Age, Iron Age) were succeeded by historical ages: steel age in the 19th century, polymer age in the middle of the following century (plastic age) and silicon age in the second half of the 20th century. 
Classification by use Materials can be broadly categorized in terms of their use, for example: Building materials are used for construction Building insulation materials are used to retain heat within buildings Refractory materials are used for high-temperature applications Nuclear materials are used for nuclear power and weapons Aerospace materials are used in aircraft and other aerospace applications Biomaterials are used for applications interacting with living systems Material selection is a process to determine which material should be used for a given application. Classification by structure The relevant structure of materials has a different length scale depending on the material. The structure and composition of a material can be determined by microscopy or spectroscopy. Microstructure In engineering, materials can be categorised according to their microscopic structure: Plastics: a wide range of synthetic or semi-synthetic materials that use polymers as a main ingred Document 4::: There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework. AP Physics 1 and 2 AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge. 
AP Physics 1 AP Physics 1 covers Newtonian mechanics, including: Unit 1: Kinematics Unit 2: Dynamics Unit 3: Circular Motion and Gravitation Unit 4: Energy Unit 5: Momentum Unit 6: Simple Harmonic Motion Unit 7: Torque and Rotational Motion Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2. AP Physics 2 AP Physics 2 covers the following topics: Unit 1: Fluids Unit 2: Thermodynamics Unit 3: Electric Force, Field, and Potential Unit 4: Electric Circuits Unit 5: Magnetism and Electromagnetic Induction Unit 6: Geometric and Physical Optics Unit 7: Quantum, Atomic, and Nuclear Physics AP Physics C From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Platinum and gold are useful materials for constructing circuits because of their ability to resist what? A. oxidation B. nitrogen C. decomposition D. Electricity Answer:
sciq-4466
multiple_choice
The properties of the alkali metals are similar to each other as expected for elements in the same what?
[ "class", "farm", "family", "branch" ]
C
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases; (b) decreases; (c) stays the same; (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. 
Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with Document 2::: There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework. AP Physics 1 and 2 AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge. AP Physics 1 AP Physics 1 covers Newtonian mechanics, including: Unit 1: Kinematics Unit 2: Dynamics Unit 3: Circular Motion and Gravitation Unit 4: Energy Unit 5: Momentum Unit 6: Simple Harmonic Motion Unit 7: Torque and Rotational Motion Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2. AP Physics 2 AP Physics 2 covers the following topics: Unit 1: Fluids Unit 2: Thermodynamics Unit 3: Electric Force, Field, and Potential Unit 4: Electric Circuits Unit 5: Magnetism and Electromagnetic Induction Unit 6: Geometric and Physical Optics Unit 7: Quantum, Atomic, and Nuclear Physics AP Physics C From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. 
In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single Document 3::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. Document 4::: Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory. In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. 
The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results. Purpose Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible. Equating in item response theory In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The properties of the alkali metals are similar to each other as expected for elements in the same what? A. class B. farm C. family D. branch Answer:
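The linear score conversion described in the test-equating passage above can be sketched in Python. The mean-sigma method shown here is one standard linear-equating technique; the source only draws the Fahrenheit/Celsius analogy, so the specific formula and the function name are assumptions:

```python
def mean_sigma_equate(x, mu_x, sigma_x, mu_y, sigma_y):
    """Linearly map a score x from form X's scale onto form Y's scale by
    matching the means and standard deviations of the two score distributions."""
    return (sigma_y / sigma_x) * (x - mu_x) + mu_y

# The passage's temperature analogy is the same kind of linear conversion:
# Celsius (origin 0, unit 1) mapped onto Fahrenheit (origin 32, unit 1.8).
print(mean_sigma_equate(25.0, 0.0, 1.0, 32.0, 1.8))  # 77.0, i.e. 25 °C = 77 °F
```

In practice the means and standard deviations would be estimated from examinee score distributions on the two forms, so that Dick's 60% on form A and Jane's 70% on form B land on a common scale before being compared.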
sciq-8335
multiple_choice
The four largest moons of Jupiter are collectively named after what 17th-century astronomer, who discovered them?
[ "Darwin", "Linnaeus", "Galileo", "Copernicus" ]
C
Relevant Documents: Document 0::: There are 95 moons of Jupiter with confirmed orbits. This number does not include a number of meter-sized moonlets thought to be shed from the inner moons, nor hundreds of possible kilometer-sized outer irregular moons that were only briefly captured by telescopes. All together, Jupiter's moons form a satellite system called the Jovian system. The most massive of the moons are the four Galilean moons: Io, Europa, Ganymede, and Callisto, which were independently discovered in 1610 by Galileo Galilei and Simon Marius and were the first objects found to orbit a body that was neither Earth nor the Sun. Much more recently, beginning in 1892, dozens of far smaller Jovian moons have been detected and have received the names of lovers (or other sexual partners) or daughters of the Roman god Jupiter or his Greek equivalent Zeus. The Galilean moons are by far the largest and most massive objects to orbit Jupiter, with the remaining 91 known moons and the rings together composing just 0.003% of the total orbiting mass. Of Jupiter's moons, eight are regular satellites with prograde and nearly circular orbits that are not greatly inclined with respect to Jupiter's equatorial plane. The Galilean satellites are nearly spherical in shape due to their planetary mass, and are just massive enough that they would be considered major planets if they were in direct orbit around the Sun. The other four regular satellites, known as the inner moons, are much smaller and closer to Jupiter; these serve as sources of the dust that makes up Jupiter's rings. The remainder of Jupiter's moons are outer irregular satellites whose prograde and retrograde orbits are much farther from Jupiter and have high inclinations and eccentricities.
The largest of these moons were likely asteroids that were captured from solar orbits by Jupiter before impacts with other small bodies shattered them into many kilometer-sized fragments, forming collisional families of moons sharing similar orbits. Jupiter is expe Document 1::: The timeline of discovery of Solar System planets and their natural satellites charts the progress of the discovery of new bodies over history. Each object is listed in chronological order of its discovery (multiple dates occur when the moments of imaging, observation, and publication differ), identified through its various designations (including temporary and permanent schemes), and the discoverer(s) listed. Historically the naming of moons did not always match the times of their discovery. Traditionally, the discoverer enjoys the privilege of naming the new object; however, some neglected to do so (E. E. Barnard stated he would "defer any suggestions as to a name" [for Amalthea] "until a later paper" but never got around to picking one from the numerous suggestions he received) or actively declined (S. B. Nicholson stated "Many have asked what the new satellites [Lysithea and Carme] are to be named. They will be known only by the numbers X and XI, written in Roman numerals, and usually prefixed by the letter J to identify them with Jupiter."). The issue arose nearly as soon as planetary satellites were discovered: Galileo referred to the four main satellites of Jupiter using numbers while the names suggested by his rival Simon Marius gradually gained universal acceptance. The International Astronomical Union (IAU) eventually started officially approving names in the late 1970s. With the explosion of discoveries in the 21st century, new moons have once again started to be left unnamed even after their numbering, beginning with Jupiter LI and Jupiter LII in 2010. Key info In the following tables, planetary satellites are indicated in bold type (e.g. 
Moon) while planets and dwarf planets, which directly circle the Sun, are in italic type (e.g. Earth). The Sun itself is indicated in roman type. The tables are sorted by publication/announcement date. Dates are annotated with the following symbols: i: for date of first imaging (photography, etc.); o: for date of fir Document 2::: A scholar is a person who is a researcher or has expertise in an academic discipline. A scholar can also be an academic, who works as a professor, teacher, or researcher at a university. An academic usually holds an advanced degree or a terminal degree, such as a master's degree or a doctorate (PhD). Independent scholars and public intellectuals work outside of the academy yet may publish in academic journals and participate in scholarly public discussion. Definitions In contemporary English usage, the term scholar sometimes is equivalent to the term academic, and describes a university-educated individual who has achieved intellectual mastery of an academic discipline, as instructor and as researcher. Moreover, before the establishment of universities, the term scholar identified and described an intellectual person whose primary occupation was professional research. In 1847, minister Emanuel Vogel Gerhart spoke of the role of the scholar in society: Gerhart argued that a scholar can not be focused on a single discipline, contending that knowledge of multiple disciplines is necessary to put each into context and to inform the development of each: A 2011 examination outlined the following attributes commonly accorded to scholars as "described by many writers, with some slight variations in the definition": Scholars may rely on the scholarly method or scholarship, a body of principles and practices used by scholars to make their claims about the world as valid and trustworthy as possible, and to make them known to the scholarly public. 
It is the methods that systemically advance the teaching, research, and practice of a given scholarly or academic field of study through rigorous inquiry. Scholarship is creative, can be documented, can be replicated or elaborated, and can be and is peer-reviewed through various methods. Role in society Scholars have generally been upheld as creditable figures of high social standing, who are engaged in work important to society. Document 3::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. 
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 4::: Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women. The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development. Current status of girls and women in STEM education Overall trends in STEM education Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. 
Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle. Learning achievement in STEM education Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The four moons of Jupiter are collectively named after what 17th century astronomer, who discovered them? A. darwin B. Linnaeus C. galileo D. copernicus Answer:
sciq-2579
multiple_choice
Esters are neutral compounds that undergo what process, which is a reaction with water?
[ "osmosis", "hydrolysis", "replication", "cellular respiration" ]
B
Relevant Documents: Document 0::: In biochemistry, an esterase is a class of enzyme that splits esters into an acid and an alcohol in a chemical reaction with water called hydrolysis (and as such, it is a type of hydrolase). A wide range of different esterases exist that differ in their substrate specificity, their protein structure, and their biological function. Document 1::: Chain reaction in chemistry and physics is a process that produces products capable of initiating subsequent processes of a similar nature. It is a self-sustaining sequence in which the resulting products continue to propagate further reactions. There are at least two examples of chain reactions in living organisms. Lipid peroxidation in cell membranes Nonenzymatic peroxidation occurs through the action of reactive oxygen species (ROS), specifically hydroxyl (HO•) and hydroperoxyl (HO2•) radicals, which initiate the oxidation of polyunsaturated fatty acids. Other initiators of lipid peroxidation include ozone (O3), nitrogen oxide (NO), nitrogen dioxide (NO2), and sulfur dioxide. The process of nonenzymatic peroxidation can be divided into three phases: initiation, propagation, and termination. During the initiation phase, fatty acid radicals are generated, which can propagate peroxidation to other molecules. This occurs when a free radical removes a hydrogen atom from a fatty acid, resulting in a lipid radical (L•) with an unpaired electron. In the propagation phase, the lipid radical reacts with oxygen (O2) or a transition metal, forming a peroxyl radical (LOO•). This peroxyl radical continues the chain reaction by reacting with a new unsaturated fatty acid, producing a new lipid radical (L•) and lipid hydroperoxide (LOOH). These primary products can further decompose into secondary products. 
The termination phase involves the interaction of a radical with an antioxidant molecule, such as α-tocopherol (vitamin E), which inhibits the propagation of chain reactions, thus terminating peroxidation. Another method of termination is the reaction between a lipid radical and a lipid peroxide, or the combination of two lipid peroxide molecules, resulting in stable nonreactive molecules. Propagation of excitation of neurons in epilepsy Epilepsy is a neurological condition marked by recurring seizures. It occurs when the brain's electrical activity becomes unbalanced, leading Document 2::: This is a list of articles that describe particular biomolecules or types of biomolecules. A For substances with an A- or α- prefix such as α-amylase, please see the parent page (in this case Amylase). A23187 (Calcimycin, Calcium Ionophore) Abamectine Abietic acid Acetic acid Acetylcholine Actin Actinomycin D Adenine Adenosmeme Adenosine diphosphate (ADP) Adenosine monophosphate (AMP) Adenosine triphosphate (ATP) Adenylate cyclase Adiponectin Adonitol Adrenaline, epinephrine Adrenocorticotropic hormone (ACTH) Aequorin Aflatoxin Agar Alamethicin Alanine Albumins Aldosterone Aleurone Alpha-amanitin Alpha-MSH (Melaninocyte stimulating hormone) Allantoin Allethrin α-Amanatin, see Alpha-amanitin Amino acid Amylase (also see α-amylase) Anabolic steroid Anandamide (ANA) Androgen Anethole Angiotensinogen Anisomycin Antidiuretic hormone (ADH) Anti-Müllerian hormone (AMH) Arabinose Arginine Argonaute Ascomycin Ascorbic acid (vitamin C) Asparagine Aspartic acid Asymmetric dimethylarginine ATP synthase Atrial-natriuretic peptide (ANP) Auxin Avidin Azadirachtin A – C35H44O16 B Bacteriocin Beauvericin beta-Hydroxy beta-methylbutyric acid beta-Hydroxybutyric acid Bicuculline Bilirubin Biopolymer Biotin (Vitamin H) Brefeldin A Brassinolide Brucine Butyric acid C Document 3::: In molecular biology, biosynthesis is a multi-step, enzyme-catalyzed process where substrates are converted into more 
complex products in living organisms. In biosynthesis, simple compounds are modified, converted into other compounds, or joined to form macromolecules. This process often consists of metabolic pathways. Some of these biosynthetic pathways are located within a single cellular organelle, while others involve enzymes that are located within multiple cellular organelles. Examples of these biosynthetic pathways include the production of lipid membrane components and nucleotides. Biosynthesis is usually synonymous with anabolism. The prerequisite elements for biosynthesis include: precursor compounds, chemical energy (e.g. ATP), and catalytic enzymes which may need coenzymes (e.g. NADH, NADPH). These elements create monomers, the building blocks for macromolecules. Some important biological macromolecules include: proteins, which are composed of amino acid monomers joined via peptide bonds, and DNA molecules, which are composed of nucleotides joined via phosphodiester bonds. Properties of chemical reactions Biosynthesis occurs due to a series of chemical reactions. For these reactions to take place, the following elements are necessary: Precursor compounds: these compounds are the starting molecules or substrates in a reaction. These may also be viewed as the reactants in a given chemical process. Chemical energy: chemical energy can be found in the form of high energy molecules. These molecules are required for energetically unfavorable reactions. Furthermore, the hydrolysis of these compounds drives a reaction forward. High energy molecules, such as ATP, have three phosphates. Often, the terminal phosphate is split off during hydrolysis and transferred to another molecule. Catalysts: these may be for example metal ions or coenzymes and they catalyze a reaction by increasing the rate of the reaction and lowering the activation energy. In the sim Document 4::: In chemistry, ammonolysis (/am·mo·nol·y·sis/) is the process of splitting ammonia into NH2- + H+. 
Ammonolysis reactions can be conducted with organic compounds to produce amines (molecules containing a nitrogen atom with a lone pair, :N), or with inorganic compounds to produce nitrides. This reaction is analogous to hydrolysis in which water molecules are split. Similar to water, liquid ammonia also undergoes auto-ionization, 2 NH3 ⇌ NH4+ + NH2−, where the constant is k = 1.9 × 10⁻³⁸. Organic compounds such as alkyl halides, hydroxyls (hydroxyl nitriles and carbohydrates), carbonyl (aldehydes/ketones/esters/alcohols), and sulfur (sulfonyl derivatives) can all undergo ammonolysis in liquid ammonia. Organic synthesis Mechanism: ammonolysis of esters This mechanism is similar to the hydrolysis of esters: the ammonia attacks the electrophilic carbonyl carbon, forming a tetrahedral intermediate. The reformation of the C-O double bond ejects the alkoxide. The alkoxide deprotonates the ammonia, forming an alcohol and amide as products. Of haloalkanes On heating a haloalkane and concentrated ammonia in a sealed tube with ethanol, a series of amines are formed along with their salts. The tertiary amine is usually the major product. NH3 ->(RX) RNH2 ->(RX) R2NH ->(RX) R3N ->(RX) R4N+ This is known as Hoffmann's ammonolysis. Of alcohols Alcohols can also undergo ammonolysis in the presence of ammonia. An example is the conversion of phenol to aniline, catalyzed by stannic chloride. ROH + NH3 ->(SnCl4) RNH2 + H2O The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Esters are neutral compounds that undergo what process, which is a reaction with water? A. osmosis B. hydrolysis C. replication D. cellular respiration Answer:
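The auto-ionization equilibrium quoted above can be turned into a quick order-of-magnitude estimate. The sketch below assumes the quoted constant can be read as an ion product, K = [NH4+][NH2−], which the excerpt does not state explicitly, so treat the result as illustrative only.

```python
import math

# Auto-ionization of liquid ammonia: 2 NH3 <=> NH4+ + NH2-
K_NH3 = 1.9e-38  # constant quoted in the excerpt (assumed ion product)

# If the two ions are formed in equal amounts, [NH2-] = sqrt(K):
amide_concentration = math.sqrt(K_NH3)  # ~1.4e-19 mol/L
```

Compare with water, whose ion product of 1.0 × 10⁻¹⁴ gives [OH−] = 10⁻⁷ mol/L by the same reasoning.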
sciq-8750
multiple_choice
What do we call the amount of time it will take for half of the radioactive material to decay?
[ "one half-life", "one full-life", "one partial-life", "one quarter-life" ]
A
Relevant Documents: Document 0::: In nuclear science, the decay chain refers to a series of radioactive decays of different radioactive decay products as a sequential series of transformations. It is also known as a "radioactive cascade". The typical radioisotope does not decay directly to a stable state, but rather it decays to another radioisotope. Thus there is usually a series of decays until the atom has become a stable isotope, meaning that the nucleus of the atom has reached a stable state. Decay stages are referred to by their relationship to previous or subsequent stages. A parent isotope is one that undergoes decay to form a daughter isotope. One example of this is uranium (atomic number 92) decaying into thorium (atomic number 90). The daughter isotope may be stable or it may decay to form a daughter isotope of its own. The daughter of a daughter isotope is sometimes called a granddaughter isotope. Note that the parent isotope becomes the daughter isotope, unlike in the case of a biological parent and daughter. The time it takes for a single parent atom to decay to an atom of its daughter isotope can vary widely, not only between different parent-daughter pairs, but also randomly between identical pairings of parent and daughter isotopes. The decay of each single atom occurs spontaneously, and the decay of an initial population of identical atoms over time t follows a decaying exponential distribution, e^(−λt), where λ is called a decay constant. One of the properties of an isotope is its half-life, the time by which half of an initial number of identical parent radioisotopes can be expected statistically to have decayed to their daughters, which is inversely related to λ. Half-lives have been determined in laboratories for many radioisotopes (or radionuclides). These can range from nearly instantaneous (less than 10⁻²¹ seconds) to more than 10¹⁹ years. 
The intermediate stages each emit the same amount of radioactivity as the original radioisotope (i.e., there is a one-to-one relationsh Document 1::: Decay correction is a method of estimating the amount of radioactive decay at some set time before it was actually measured. Example of use Researchers often want to measure, say, medical compounds in the bodies of animals. It's hard to measure them directly, so it can be chemically joined to a radionuclide - by measuring the radioactivity, you can get a good idea of how the original medical compound is being processed. Samples may be collected and counted at short time intervals (ex: 1 and 4 hours). But they might be tested for radioactivity all at once. Decay correction is one way of working out what the radioactivity would have been at the time it was taken, rather than at the time it was tested. For example, the isotope copper-64, commonly used in medical research, has a half-life of 12.7 hours. If you inject a large group of animals at "time zero", but measure the radioactivity in their organs at two later times, the later groups must be "decay corrected" to adjust for the decay that has occurred between the two time points. Mathematics The formula for decay correcting is: A0 = At × e^(λt), where A0 is the original activity count at time zero, At is the activity at time "t", "λ" is the decay constant, and "t" is the elapsed time. The decay constant is λ = ln(2)/t1/2, where "t1/2" is the half-life of the radioactive material of interest. Example The decay correction might be used this way: a group of 20 animals is injected with a compound of interest on a Monday at 10:00 a.m. The compound is chemically joined to the isotope copper-64, which has a known half-life of 12.7 hours, or 762 minutes. After one hour, the 5 animals in the "one hour" group are killed, dissected, and organs of interest are placed in sealed containers to await measurement. This is repeated for another 5 animals, at 2 hours, and again at 4 hours. 
At this point (say, 4:00 p.m., Monday) all the organs collected so far are measured for radioactivity (a proxy of the distribution of the compound of interest). The next day Document 2::: A quantity is subject to exponential decay if it decreases at a rate proportional to its current value. Symbolically, this process can be expressed by the following differential equation, where N is the quantity and λ (lambda) is a positive rate called the exponential decay constant, disintegration constant, rate constant, or transformation constant: dN/dt = −λN. The solution to this equation (see derivation below) is: N(t) = N0 e^(−λt), where N(t) is the quantity at time t, and N0 is the initial quantity, that is, the quantity at time t = 0. Measuring rates of decay Mean lifetime If the decaying quantity, N(t), is the number of discrete elements in a certain set, it is possible to compute the average length of time that an element remains in the set. This is called the mean lifetime (or simply the lifetime), where the exponential time constant, τ, relates to the decay rate constant, λ, in the following way: τ = 1/λ. The mean lifetime can be looked at as a "scaling time", because the exponential decay equation can be written in terms of the mean lifetime, τ, instead of the decay constant, λ: N(t) = N0 e^(−t/τ), and that τ is the time at which the population of the assembly is reduced to 1/e ≈ 0.367879441 times its initial value. For example, if the initial population of the assembly, N(0), is 1000, then the population at time τ, N(τ), is 368. A very similar equation will be seen below, which arises when the base of the exponential is chosen to be 2, rather than e. In that case the scaling time is the "half-life". Half-life A more intuitive characteristic of exponential decay for many people is the time required for the decaying quantity to fall to one half of its initial value. (If N(t) is discrete, then this is the median life-time rather than the mean life-time.) This time is called the half-life, and often denoted by the symbol t1/2. 
The half-life can be written in terms of the decay constant, or the mean lifetime, as: t1/2 = ln(2)/λ = τ ln(2). When this expression is inserted for τ in the exponential equation above, and ln 2 is absorbed into the base, this equat Document 3::: The decay energy is the energy change of a nucleus having undergone a radioactive decay. Radioactive decay is the process in which an unstable atomic nucleus loses energy by emitting ionizing particles and radiation. This decay, or loss of energy, results in an atom of one type (called the parent nuclide) transforming to an atom of a different type (called the daughter nuclide). Decay calculation The energy difference of the reactants is often written as Q: Q = Δm × c², where Δm is the mass difference between the parent and the products and c is the speed of light. Decay energy is usually quoted in terms of the energy units MeV (million electronvolts) or keV (thousand electronvolts). Types of radioactive decay include gamma ray, beta decay (decay energy is divided between the emitted electron and the neutrino which is emitted at the same time), and alpha decay. The decay energy is the mass difference Δm between the parent and the daughter atom and particles. It is equal to the energy of radiation E. If A is the radioactive activity, i.e. the number of transforming atoms per time, M the molar mass, then the radiation power P is: P = A × E, or, per gram of material, P = E × (N/M) × ln(2)/T. Example: 60Co decays into 60Ni. The mass difference Δm is 0.003 u. The radiated energy is approximately 2.8 MeV. The molar weight is 59.93. The half life T of 5.27 years corresponds to the activity A = N × ln(2)/T, where N is the number of atoms per mol, and T is the half-life. Taking care of the units, the radiation power for 60Co is 17.9 W/g. Radiation power in W/g for several isotopes: 60Co: 17.9; 238Pu: 0.57; 137Cs: 0.6; 241Am: 0.1; 210Po: 140 (T = 138 d); 90Sr: 0.9; 226Ra: 0.02. For use in radioisotope thermoelectric generators (RTGs) high decay energy combined with a long half life is desirable. To reduce the cost and weight of radiation shielding, sources that do not emit strong gamma radiation are preferred. 
This table gives an indication why - despite its enormous cost - 238Pu, with its roughly eighty year half life and low gamma emissions, has become the RTG nuclide of choice. 90Sr performs worse than 238Pu on almost all measures, being shorter lived, a beta emitt Document 4::: In the context of radioactivity, activity or total activity (symbol A) is a physical quantity defined as the number of radioactive transformations per second that occur in a particular radionuclide. The unit of activity is the becquerel (symbol Bq), which is defined equivalent to reciprocal seconds (symbol s⁻¹). The older, non-SI unit of activity is the curie (Ci), which is 3.7 × 10¹⁰ radioactive decays per second. Another unit of activity is the rutherford, which is defined as 10⁶ radioactive decays per second. Specific activity (symbol a) is the activity per unit mass of a radionuclide and is a physical property of that radionuclide. It is usually given in units of becquerel per kilogram (Bq/kg), but another commonly used unit of specific activity is the curie per gram (Ci/g). The specific activity should not be confused with level of exposure to ionizing radiation and thus the exposure or absorbed dose, which is the quantity important in assessing the effects of ionizing radiation on humans. Since the probability of radioactive decay for a given radionuclide within a set time interval is fixed (with some slight exceptions, see changing decay rates), the number of decays that occur in a given time in a given mass (and hence a specific number of atoms) of that radionuclide is also fixed (ignoring statistical fluctuations). Formulation Relationship between λ and T1/2 Radioactivity is expressed as the decay rate of a particular radionuclide with decay constant λ and the number of atoms N: −dN/dt = λN. The integral solution is described by exponential decay: N = N0 e^(−λt), where N0 is the initial quantity of atoms at time t = 0. 
Half-life T1/2 is defined as the length of time for half of a given quantity of radioactive atoms to undergo radioactive decay: N0/2 = N0 e^(−λT1/2). Taking the natural logarithm of both sides, the half-life is given by T1/2 = ln(2)/λ. Conversely, the decay constant λ can be derived from the half-life T1/2 as λ = ln(2)/T1/2. Calculation of specific activity The mass of the radionuclide is given by m = (N/NA) × M, where M i The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do we call the amount of time it will take for half of the radioactive material to decay? A. one half-life B. one full-life C. one partial-life D. one quarter-life Answer:
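The half-life, decay-correction, and specific-activity relations excerpted above fit together in a few lines of code. The helper names below are illustrative, and the numbers reuse the copper-64 (12.7 h) and cobalt-60 (5.27 y, 59.93 g/mol) examples from the documents.

```python
import math

AVOGADRO = 6.02214076e23  # atoms per mole
BQ_PER_CURIE = 3.7e10     # 1 Ci = 3.7e10 decays per second

def decay_constant(half_life):
    """lambda = ln(2) / T_half; lambda is in reciprocal units of T_half."""
    return math.log(2) / half_life

def decay_correct(measured_activity, elapsed, half_life):
    """Back-correct a measured activity to collection time: A0 = At * e^(lambda*t)."""
    return measured_activity * math.exp(decay_constant(half_life) * elapsed)

def specific_activity_bq_per_g(half_life_s, molar_mass_g):
    """a = lambda * N_A / M, in becquerels per gram."""
    return decay_constant(half_life_s) * AVOGADRO / molar_mass_g

# Copper-64 sample counted 4 h after collection (half-life 12.7 h):
a0 = decay_correct(1000.0, elapsed=4.0, half_life=12.7)

# Cobalt-60: half-life 5.27 y, molar mass 59.93 g/mol
T_CO60 = 5.27 * 365.25 * 24 * 3600            # seconds
a_co60 = specific_activity_bq_per_g(T_CO60, 59.93)  # ~4.2e13 Bq/g
a_co60_ci = a_co60 / BQ_PER_CURIE                   # ~1.1e3 Ci/g
```

Multiplying `a_co60` by the 2.8 MeV decay energy (in joules) reproduces the per-gram radiation power discussed in Document 3, to within the rounding of the quoted inputs.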
sciq-4972
multiple_choice
What common process is used in the production of bread, cheese and alcoholic beverages?
[ "cloning", "fermentation", "oxidation", "condensation" ]
B
Relevant Documents: Document 0::: Brewing is the production of beer by steeping a starch source (commonly cereal grains, the most popular of which is barley) in water and fermenting the resulting sweet liquid with yeast. It may be done in a brewery by a commercial brewer, at home by a homebrewer, or communally. Brewing has taken place since around the 6th millennium BC, and archaeological evidence suggests that emerging civilizations, including ancient Egypt, China, and Mesopotamia, brewed beer. Since the nineteenth century the brewing industry has been part of most western economies. The basic ingredients of beer are water and a fermentable starch source such as malted barley. Most beer is fermented with a brewer's yeast and flavoured with hops. Less widely used starch sources include millet, sorghum and cassava. Secondary sources (adjuncts), such as maize (corn), rice, or sugar, may also be used, sometimes to reduce cost, or to add a feature, such as adding wheat to aid in retaining the foamy head of the beer. The most common starch source is ground cereal or "grist" - the proportion of the starch or cereal ingredients in a beer recipe may be called grist, grain bill, or simply mash ingredients. Steps in the brewing process include malting, milling, mashing, lautering, boiling, fermenting, conditioning, filtering, and packaging. There are three main fermentation methods: warm, cool and spontaneous. Fermentation may take place in an open or closed fermenting vessel; a secondary fermentation may also occur in the cask or bottle. There are several additional brewing methods, such as Burtonisation, double dropping, and Yorkshire Square, as well as post-fermentation treatment such as filtering, and barrel-ageing. History Brewing has taken place since around the 6th millennium BC, and archaeological evidence suggests emerging civilizations including China, ancient Egypt, and Mesopotamia brewed beer. 
Descriptions of various beer recipes can be found in cuneiform (the oldest known writing) from ancie Document 1::: Food and biological process engineering is a discipline concerned with applying principles of engineering to the fields of food production and distribution and biology. It is a broad field, with workers fulfilling a variety of roles ranging from design of food processing equipment to genetic modification of organisms. In some respects it is a combined field, drawing from the disciplines of food science and biological engineering to improve the earth's food supply. Creating, processing, and storing food to support the world's population requires extensive interdisciplinary knowledge. Notably, there are many biological engineering processes within food engineering to manipulate the multitude of organisms involved in our complex food chain. Food safety in particular requires biological study to understand the microorganisms involved and how they affect humans. However, other aspects of food engineering, such as food storage and processing, also require extensive biological knowledge of both the food and the microorganisms that inhabit it. This food microbiology and biology knowledge becomes biological engineering when systems and processes are created to maintain desirable food properties and microorganisms while providing mechanisms for eliminating the unfavorable or dangerous ones. Concepts Many different concepts are involved in the field of food and biological process engineering. Below are listed several major ones. Food science The science behind food and food production involves studying how food behaves and how it can be improved. Researchers analyze longevity and composition (i.e., ingredients, vitamins, minerals, etc.) of foods, as well as how to ensure food safety. Genetic engineering Modern food and biological process engineering relies heavily on applications of genetic manipulation. 
By understanding plants and animals on the molecular level, scientists are able to engineer them with specific goals in mind. Among the most notable applications of Document 2::: Chemical Engineering and Biotechnology Abstracts (CEABA-VTB) is an abstracting and indexing service that is published by DECHEMA, BASF, and Bayer Technology Services, all based in Germany. This is a bibliographic database that covers multiple disciplines. Subject coverage Subject coverage includes engineering, management, manufacturing plants, equipment, production, and processing pertaining to various disciplines. The fields of interest are bio-process engineering, chemical engineering, process engineering, environmental protection (including safety), fermentation, enzymology, bio-transformation, information technology, technology and testing of materials (including corrosion), mathematical methods (including modeling), measurement (including control of processes), utilities (including services). Also covered are production processes and process development. CAS registry numbers are also part of this database. Document 3::: A ferment (also known as bread starter) is a fermentation starter used in indirect methods of bread making. It may also be called mother dough. A ferment and a longer fermentation in the bread-making process have several benefits: there is more time for yeast, enzyme and, if sourdough, bacterial actions on the starch and proteins in the dough; this in turn improves the keeping time of the baked bread, and it creates greater complexities of flavor. Though ferments have declined in popularity as direct additions of yeast in bread recipes have streamlined the process on a commercial level, ferments of various forms are widely used in artisanal bread recipes and formulas. Classifications In general, there are two ferment varieties: sponges, based on baker's yeast, and the starters of sourdough, based on wild yeasts and lactic acid bacteria. 
There are several kinds of pre-ferment commonly named and used in bread baking. They all fall on a varying process and time spectrum, from a mature mother dough of many generations of age to a first-generation sponge based on a fresh batch of baker's yeast: Biga and poolish (or pouliche) are terms used in Italian and French baking, respectively, for sponges made with domestic baker's yeast. Poolish is a fairly wet sponge (typically one-to-one, this is made with a one-part-flour-to-one-part-water ratio by weight), and it is called biga liquida, whereas the "normal" biga is usually drier. Bigas can be held longer at their peak than wetter sponges, while a poolish is one known technique to increase a dough's extensibility. Sourdough starter is likely the oldest, being reliant on organisms present in the grain and local environment. In general, these starters have fairly complex microbiological makeups, the most notable including wild yeasts, lactobacillus, and acetobacteria in symbiotic relationship referred to as a SCOBY. They are often maintained over long periods of time. For example, the Boudin Bakery in San Francisco has used t Document 4::: Industrial microbiology is a branch of biotechnology that applies microbial sciences to create industrial products in mass quantities, often using microbial cell factories. There are multiple ways to manipulate a microorganism in order to increase maximum product yields. Introduction of mutations into an organism may be accomplished by introducing them to mutagens. Another way to increase production is by gene amplification, this is done by the use of plasmids, and vectors. The plasmids and/ or vectors are used to incorporate multiple copies of a specific gene that would allow more enzymes to be produced that eventually cause more product yield. 
The manipulation of organisms in order to yield a specific product has many applications to the real world like the production of some antibiotics, vitamins, enzymes, amino acids, solvents, alcohol and daily products. Microorganisms play a big role in the industry, with multiple ways to be used. Medicinally, microbes can be used for creating antibiotics in order to treat infection. Microbes can also be used for the food industry as well. Microbes are very useful in creating some of the mass produced products that are consumed by people. The chemical industry also uses microorganisms in order to synthesize amino acids and organic solvents. Microbes can also be used in an agricultural application for use as a biopesticide instead of using dangerous chemicals and or inoculants to help plant proliferation. Medical application The medical application to industrial microbiology is the production of new drugs synthesized in a specific organism for medical purposes. Production of antibiotics is necessary for the treatment of many bacterial infections. Some natural occurring antibiotics and precursors, are produced through a process called fermentation. The microorganisms grow in a liquid media where the population size is controlled in order to yield the greatest amount of product. In this environment nutrient, pH, temperature, an The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What common process is used in the production of bread, cheese and alcoholic beverages? A. cloning B. fermentation C. oxidation D. condensation Answer:
sciq-8716
multiple_choice
The vapor pressures of various liquids depend primarily on the strength of what kind of attractions between individual particles?
[ "diffusion", "intermolecular", "gravitational", "outer molecular" ]
B
Relavent Documents: Document 0::: Diffusivity, mass diffusivity or diffusion coefficient is usually written as the proportionality constant between the molar flux due to molecular diffusion and the negative value of the gradient in the concentration of the species. More accurately, the diffusion coefficient times the local concentration is the proportionality constant between the negative value of the mole fraction gradient and the molar flux. This distinction is especially significant in gaseous systems with strong temperature gradients. Diffusivity derives its definition from Fick's law and plays a role in numerous other equations of physical chemistry. The diffusivity is generally prescribed for a given pair of species and pairwise for a multi-species system. The higher the diffusivity (of one substance with respect to another), the faster they diffuse into each other. Typically, a compound's diffusion coefficient is ~10,000× as great in air as in water. Carbon dioxide in air has a diffusion coefficient of 16 mm2/s, and in water its diffusion coefficient is 0.0016 mm2/s. Diffusivity has dimensions of length2 / time, or m2/s in SI units and cm2/s in CGS units. Temperature dependence of the diffusion coefficient Solids The diffusion coefficient in solids at different temperatures is generally found to be well predicted by the Arrhenius equation D = D0 exp(−EA/(R T)), where D is the diffusion coefficient (in m2/s), D0 is the maximal diffusion coefficient (at infinite temperature; in m2/s), EA is the activation energy for diffusion (in J/mol), T is the absolute temperature (in K), R ≈ 8.31446 J/(mol⋅K) is the universal gas constant. Liquids An approximate dependence of the diffusion coefficient on temperature in liquids can often be found using the Stokes–Einstein equation, which predicts that D(T1)/D(T2) = (T1/T2)(μ(T2)/μ(T1)), where D is the diffusion coefficient, T1 and T2 are the corresponding absolute temperatures, and μ is the dynamic viscosity of the solvent.
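The Arrhenius temperature dependence described in the passage above can be sketched numerically. The parameter values below (D0 = 1e-4 m^2/s, EA = 150 kJ/mol) are illustrative assumptions for the sketch, not measured data for any particular solid:

```python
import math

R = 8.31446  # universal gas constant, J/(mol*K), as given in the passage

def diffusion_coefficient(d0, ea, t):
    """Arrhenius form D = D0 * exp(-EA / (R * T)) for diffusion in solids."""
    return d0 * math.exp(-ea / (R * t))

# Illustrative (assumed) parameters: D0 = 1e-4 m^2/s, EA = 150 kJ/mol
d_cold = diffusion_coefficient(1e-4, 150e3, 300.0)
d_hot = diffusion_coefficient(1e-4, 150e3, 600.0)
assert d_hot > d_cold  # diffusion is much faster at higher temperature
```

The exponential form means even modest temperature increases change D by orders of magnitude when EA is large.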
Gases The dependence of the diffusion coefficient on temperature for gases can be expre Document 1::: In thermodynamics and chemical engineering, the vapor–liquid equilibrium (VLE) describes the distribution of a chemical species between the vapor phase and a liquid phase. The concentration of a vapor in contact with its liquid, especially at equilibrium, is often expressed in terms of vapor pressure, which will be a partial pressure (a part of the total gas pressure) if any other gas(es) are present with the vapor. The equilibrium vapor pressure of a liquid is in general strongly dependent on temperature. At vapor–liquid equilibrium, a liquid with individual components in certain concentrations will have an equilibrium vapor in which the concentrations or partial pressures of the vapor components have certain values depending on all of the liquid component concentrations and the temperature. The converse is also true: if a vapor with components at certain concentrations or partial pressures is in vapor–liquid equilibrium with its liquid, then the component concentrations in the liquid will be determined dependent on the vapor concentrations and on the temperature. The equilibrium concentration of each component in the liquid phase is often different from its concentration (or vapor pressure) in the vapor phase, but there is a relationship. The VLE concentration data can be determined experimentally or approximated with the help of theories such as Raoult's law, Dalton's law, and Henry's law. Such vapor–liquid equilibrium information is useful in designing columns for distillation, especially fractional distillation, which is a particular specialty of chemical engineers. Distillation is a process used to separate or partially separate components in a mixture by boiling (vaporization) followed by condensation. Distillation takes advantage of differences in concentrations of components in the liquid and vapor phases. 
In mixtures containing two or more components, the concentrations of each component are often expressed as mole fractions. The mole fraction of Document 2::: Relative volatility is a measure comparing the vapor pressures of the components in a liquid mixture of chemicals. This quantity is widely used in designing large industrial distillation processes. In effect, it indicates the ease or difficulty of using distillation to separate the more volatile components from the less volatile components in a mixture. By convention, relative volatility is usually denoted as α. Relative volatilities are used in the design of all types of distillation processes as well as other separation or absorption processes that involve the contacting of vapor and liquid phases in a series of equilibrium stages. Relative volatilities are not used in separation or absorption processes that involve components reacting with each other (for example, the absorption of gaseous carbon dioxide in aqueous solutions of sodium hydroxide). Definition For a liquid mixture of two components (called a binary mixture) at a given temperature and pressure, the relative volatility is defined as α = (y1/x1)/(y2/x2), where y is the mole fraction of a component in the vapor phase and x is its mole fraction in the liquid phase. When their liquid concentrations are equal, more volatile components have higher vapor pressures than less volatile components. Thus, a K value (= y/x) for a more volatile component is larger than a K value for a less volatile component. That means that α ≥ 1 since the larger K value of the more volatile component is in the numerator and the smaller K value of the less volatile component is in the denominator. α is a unitless quantity. When the volatilities of both key components are equal, α = 1 and separation of the two by distillation would be impossible under the given conditions because the compositions of the liquid and the vapor phase are the same (azeotrope). As the value of α increases above 1, separation by distillation becomes progressively easier. A liquid mixture containing two components is called a binary mixture.
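The relative-volatility definition in the passage above (the ratio of the two components' y/x values) can be sketched directly. The equilibrium compositions used below are illustrative numbers chosen for the sketch, not measured data for any real mixture:

```python
def relative_volatility(y1, x1, y2, x2):
    """alpha = (y1/x1) / (y2/x2): ratio of the two components' K values (K = y/x)."""
    return (y1 / x1) / (y2 / x2)

# Illustrative equilibrium compositions for a binary mixture
# in which component 1 is the more volatile one:
alpha = relative_volatility(y1=0.62, x1=0.40, y2=0.38, x2=0.60)
assert alpha > 1  # separation by distillation is feasible
```

When both components have the same K value, the function returns exactly 1, the azeotrope case in which distillation cannot separate them.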
When a binary mixture is distilled, complete separation of the two components is rarely achieved. Typically, the overhead fraction from the distillation Document 3::: At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm. For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of sorption process the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product. The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system. BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials high moisture capacity at high relative humidity. Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. 
Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the Document 4::: Johannes Diderik van der Waals (; 23 November 1837 – 8 March 1923) was a Dutch theoretical physicist and thermodynamicist famous for his pioneering work on the equation of state for gases and liquids. Van der Waals started his career as a schoolteacher. He became the first physics professor of the University of Amsterdam when in 1877 the old Athenaeum was upgraded to Municipal University. Van der Waals won the 1910 Nobel Prize in physics for his work on the equation of state for gases and liquids. His name is primarily associated with the Van der Waals equation of state that describes the behavior of gases and their condensation to the liquid phase. His name is also associated with Van der Waals forces (forces between stable molecules), with Van der Waals molecules (small molecular clusters bound by Van der Waals forces), and with Van der Waals radii (sizes of molecules). James Clerk Maxwell once said that, "there can be no doubt that the name of Van der Waals will soon be among the foremost in molecular science." In his 1873 thesis, Van der Waals noted the non-ideality of real gases and attributed it to the existence of intermolecular interactions. He introduced the first equation of state derived by the assumption of a finite volume occupied by the constituent molecules. Spearheaded by Ernst Mach and Wilhelm Ostwald, a strong philosophical current that denied the existence of molecules arose towards the end of the 19th century. The molecular existence was considered unproven and the molecular hypothesis unnecessary. 
At the time Van der Waals's thesis was written (1873), the molecular structure of fluids had not been accepted by most physicists, and liquid and vapor were often considered as chemically distinct. But Van der Waals's work affirmed the reality of molecules and allowed an assessment of their size and attractive strength. His new formula revolutionized the study of equations of state. By comparing his equation of state with experimental data, Van der W The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The vapor pressures of various liquids depends primarily on the strength of what kind of attractions between individual particles? A. diffusion B. intermolecular C. gravitational D. outer molecular Answer:
sciq-4343
multiple_choice
What aspect of beryllium allows it to absorb x-rays?
[ "relative atomic number", "magnetism", "low atomic number", "high atomic number" ]
C
Relavent Documents: Document 0::: The Röntgen Memorial Site in Würzburg, Germany, is dedicated to the work of the German physicist Wilhelm Conrad Röntgen (1845–1923) and his discovery of X-rays, for which he was granted the Nobel Prize in physics. It contains an exhibition of historical instruments, machines and documents. Location The Röntgen Memorial Site is in the foyer, corridors and two laboratory rooms of the former Physics Institute of the University of Würzburg in Röntgenring 8, a building that is now used by the University of Applied Sciences Würzburg-Schweinfurt. The road, where the building lies, was renamed in 1909 from Pleicherring to Röntgenring. History On the late Friday evening of 8. November 1895 Röntgen discovered for the first time the rays which penetrate through solid materials and gave them the name X-rays. He presented this in a lecture and publication On a new type of rays - Über eine neue Art von Strahlen on 23 January 1896 at the Physical Medical Society of Würzburg. During the discussion of this lecture, the anatomist Albert von Kölliker proposed to call these rays Röntgen radiation after their inventor, a term that is still being used in Germany. Exhibition The Röntgen Memorial Site gives an insight into the particle physics of the late 19th century. It shows an experimental set-up of cathodic rays beside the apparatus of the discovery. An experiment of penetrating solid materials by X-rays is shown in the historic laboratory of Röntgen. A separate room shows various X-ray tubes, a medical X-ray machine of Siemens & Halske from 1912 and several original documents. In the foyer a short German movie explains the purpose of the Memorial Site and the life of Röntgen. In the corridor some personal belongings of Röntgen are displayed to give some background information on his personal and historical circumstances. After remodeling in 2015 the tables and captures of the exhibition are now in English and German language. 
Society The site is managed by the non-profit Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. 
Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with Document 3::: Antoine Henri Becquerel (; ; 15 December 1852 – 25 August 1908) was a French engineer, physicist, Nobel laureate, and the first person to discover radioactivity. For work in this field he, along with Marie Skłodowska-Curie and Pierre Curie, received the 1903 Nobel Prize in Physics. The SI unit for radioactivity, the becquerel (Bq), is named after him. Biography Early life Becquerel was born in Paris, France, into a wealthy family which produced four generations of notable physicists, including Becquerel's grandfather (Antoine César Becquerel), father (Alexandre-Edmond Becquerel), and son (Jean Becquerel). Henri started off his education by attending the Lycée Louis-le-Grand school, a prep school in Paris. He studied engineering at the École Polytechnique and the École des Ponts et Chaussées. Career In Becquerel's early career, he became the third in his family to occupy the physics chair at the Muséum National d'Histoire Naturelle in 1892. Later on in 1894, Becquerel became chief engineer in the Department of Bridges and Highways before he started with his early experiments. Becquerel's earliest works centered on the subject of his doctoral thesis: the plane polarization of light, with the phenomenon of phosphorescence and absorption of light by crystals. Early in his career, Becquerel also studied the Earth's magnetic fields. In 1895, he was appointed as a professor at the École Polytechnique. Becquerel's discovery of spontaneous radioactivity is a famous example of serendipity, of how chance favors the prepared mind. Becquerel had long been interested in phosphorescence, the emission of light of one color following a body's exposure to light of another color. 
In early 1896, there was a wave of excitement following Wilhelm Conrad Röntgen's discovery of X-rays on 5 January. During the experiment, Röntgen "found that the Crookes tubes he had been using to study cathode rays emitted a new kind of invisible ray that was capable of penetrating through black paper". Document 4::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. 
A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What aspect of beryllium allows it to absorb x-rays? A. relative atomic number B. magnetism C. low atomic number D. high atomic number Answer:
sciq-3079
multiple_choice
The force that a magnet exerts on certain materials is called what?
[ "magnetic force", "centripetal force", "stellar force", "velocity force" ]
A
Relavent Documents: Document 0::: In electromagnetism, the magnetic moment is the magnetic strength and orientation of a magnet or other object that produces a magnetic field, expressed as a vector. Examples of objects that have magnetic moments include loops of electric current (such as electromagnets), permanent magnets, elementary particles (such as electrons), composite particles (such as protons and neutrons), various molecules, and many astronomical objects (such as many planets, some moons, stars, etc). More precisely, the term magnetic moment normally refers to a system's magnetic dipole moment, the component of the magnetic moment that can be represented by an equivalent magnetic dipole: a magnetic north and south pole separated by a very small distance. The magnetic dipole component is sufficient for small enough magnets or for large enough distances. Higher-order terms (such as the magnetic quadrupole moment) may be needed in addition to the dipole moment for extended objects. The magnetic dipole moment of an object determines the magnitude of torque that the object experiences in a given magnetic field. Objects with larger magnetic moments experience larger torques when the same magnetic field is applied. The strength (and direction) of this torque depends not only on the magnitude of the magnetic moment but also on its orientation relative to the direction of the magnetic field. The magnetic moment may therefore be considered to be a vector. The direction of the magnetic moment points from the south to north pole of the magnet (inside the magnet). The magnetic field of a magnetic dipole is proportional to its magnetic dipole moment. The dipole component of an object's magnetic field is symmetric about the direction of its magnetic dipole moment, and decreases as the inverse cube of the distance from the object. 
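The passage above states that the dipole component of a magnet's field decreases as the inverse cube of distance. A minimal sketch of that scaling law (the reference field value and distances below are arbitrary illustrative numbers):

```python
def dipole_field_scale(b_ref, r_ref, r):
    """Scale a far-field dipole magnitude by the inverse-cube law B ~ 1/r^3."""
    return b_ref * (r_ref / r) ** 3

# Doubling the distance from a small magnet cuts the dipole field to 1/8:
assert abs(dipole_field_scale(1.0, 1.0, 2.0) - 0.125) < 1e-12
```

This steep falloff is why magnetic forces between small magnets are only noticeable at short range.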
Definition, units, and measurement Definition The magnetic moment can be defined as a vector relating the aligning torque on the object from an externally applied magnetic Document 1::: In physics, the magnetomotive force (abbreviated mmf or MMF, symbol F) is a quantity appearing in the equation for the magnetic flux in a magnetic circuit, Hopkinson's law. It is the property of certain substances or phenomena that give rise to magnetic fields: F = Φ R, where Φ is the magnetic flux and R is the reluctance of the circuit. It can be seen that the magnetomotive force plays a role in this equation analogous to the voltage in Ohm's law, V = I R, since it is the cause of magnetic flux in a magnetic circuit: F = N I, where N is the number of turns in the coil and I is the electric current through the circuit. F = Φ R, where Φ is the magnetic flux and R is the magnetic reluctance; F = H L, where H is the magnetizing force (the strength of the magnetizing field) and L is the mean length of a solenoid or the circumference of a toroid. Units The SI unit of mmf is the ampere, the same as the unit of current (analogously the units of emf and voltage are both the volt). Informally, and frequently, this unit is stated as the ampere-turn to avoid confusion with current. This was the unit name in the MKS system. Occasionally, the cgs system unit of the gilbert may also be encountered. History The term magnetomotive force was coined by Henry Augustus Rowland in 1880. Rowland intended this to indicate a direct analogy with electromotive force. The idea of a magnetic analogy to electromotive force can be found much earlier in the work of Michael Faraday (1791–1867) and it is hinted at by James Clerk Maxwell (1831–1879). However, Rowland coined the term and was the first to make explicit an Ohm's law for magnetic circuits in 1873. Ohm's law for magnetic circuits is sometimes referred to as Hopkinson's law rather than Rowland's law as some authors attribute the law to John Hopkinson instead of Rowland.
According to a review of magnetic circuit analysis methods this is an incorrect attribution originating from an 1885 paper by Hopkinson. Furthermore, Hopkinson actually cites Rowland's 1873 paper in th Document 2::: A magnetic circuit is made up of one or more closed loop paths containing a magnetic flux. The flux is usually generated by permanent magnets or electromagnets and confined to the path by magnetic cores consisting of ferromagnetic materials like iron, although there may be air gaps or other materials in the path. Magnetic circuits are employed to efficiently channel magnetic fields in many devices such as electric motors, generators, transformers, relays, lifting electromagnets, SQUIDs, galvanometers, and magnetic recording heads. The relation between magnetic flux, magnetomotive force, and magnetic reluctance in an unsaturated magnetic circuit can be described by Hopkinson's law, which bears a superficial resemblance to Ohm's law in electrical circuits, resulting in a one-to-one correspondence between properties of a magnetic circuit and an analogous electric circuit. Using this concept the magnetic fields of complex devices such as transformers can be quickly solved using the methods and techniques developed for electrical circuits. Some examples of magnetic circuits are: horseshoe magnet with iron keeper (low-reluctance circuit) horseshoe magnet with no keeper (high-reluctance circuit) electric motor (variable-reluctance circuit) some types of pickup cartridge (variable-reluctance circuits) Magnetomotive force (MMF) Similar to the way that electromotive force (EMF) drives a current of electrical charge in electrical circuits, magnetomotive force (MMF) 'drives' magnetic flux through magnetic circuits. The term 'magnetomotive force', though, is a misnomer since it is not a force nor is anything moving. It is perhaps better to call it simply MMF. 
In analogy to the definition of EMF, the magnetomotive force around a closed loop is defined as: The MMF represents the potential that a hypothetical magnetic charge would gain by completing the loop. The magnetic flux that is driven is not a current of magnetic charge; it merely has the same relationshi Document 3::: Magnetic deviation is the error induced in a compass by local magnetic fields, which must be allowed for, along with magnetic declination, if accurate bearings are to be calculated. (More loosely, "magnetic deviation" is used by some to mean the same as "magnetic declination". This article is about the former meaning.) Compass readings Compasses are used to determine the direction of true North. However, the compass reading must be corrected for two effects. The first is magnetic declination or variation—the angular difference between magnetic North (the local direction of the Earth's magnetic field) and true North. The second is magnetic deviation—the angular difference between magnetic North and the compass needle due to nearby sources of interference such as magnetically permeable bodies, or other magnetic fields within the field of influence. Sources In navigation manuals, magnetic deviation refers specifically to compass error caused by magnetized iron within a ship or aircraft. This iron has a mixture of permanent magnetization and an induced (temporary) magnetization that is induced by the Earth's magnetic field. Because the latter depends on the orientation of the craft relative to the Earth's field, it can be difficult to analyze and correct for it. The deviation errors caused by magnetism in the ship's structure are minimised by precisely positioning small magnets and iron compensators close to the compass. To compensate for the induced magnetization, two magnetically soft iron spheres are placed on side arms. 
However, because the magnetic "signature" of every ship changes slowly with location, and with time, it is necessary to adjust the compensating magnets, periodically, to keep the deviation errors to a practical minimum. Magnetic compass adjustment and correction is one of the subjects in the examination curriculum for a shipmaster's certificate of competency. The sources of magnetic deviation vary from compass to compass or vehicle to vehicle. H Document 4::: Biomagnetism is the phenomenon of magnetic fields produced by living organisms; it is a subset of bioelectromagnetism. In contrast, organisms' use of magnetism in navigation is magnetoception and the study of the magnetic fields' effects on organisms is magnetobiology. (The word biomagnetism has also been used loosely to include magnetobiology, further encompassing almost any combination of the words magnetism, cosmology, and biology, such as "magnetoastrobiology".) The origin of the word biomagnetism is unclear, but seems to have appeared several hundred years ago, linked to the expression "animal magnetism". The present scientific definition took form in the 1970s, when an increasing number of researchers began to measure the magnetic fields produced by the human body. The first valid measurement was actually made in 1963, but the field of research began to expand only after a low-noise technique was developed in 1970. Today the community of biomagnetic researchers does not have a formal organization, but international conferences are held every two years, with about 600 attendees. Most conference activity centers on the MEG (magnetoencephalogram), the measurement of the magnetic field of the brain. 
Prominent researchers David Cohen John Wikswo Samuel Williamson See also Bioelectrochemistry Human magnetism Magnetite Magnetocardiography Magnetoception - sensing of magnetic fields by organisms Magnetoelectrochemistry Magnetoencephalography Magnetogastrography Magnetomyography SQUID Notes Further reading Williamson SH, Romani GL, Kaufman L, Modena I, editors. Biomagnetism: An Interdisciplinary Approach. 1983. NATO ASI series. New York: Plenum Press. Cohen, D. Boston and the history of biomagnetism. Neurology and Clinical Neurophysiology 2004; 30: 1. History of Biomagnetism Bioelectromagnetics Magnetism The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The force that a magnet exerts on certain materials is called what? A. magnetic force B. centripetal force C. stellar force D. velocity force Answer:
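The loop definition of MMF quoted above is, by Ampère's law, equal to the current enclosed by the loop, so for a path threading N identical turns each carrying current I it reduces to F = N·I, the familiar "ampere-turns". A minimal numeric sketch (the function name and values are illustrative, not from the source):

```python
# Magnetomotive force (MMF) around a closed loop equals the enclosed
# current (Ampere's law): for a path threading N identical turns each
# carrying current I, F = N * I ("ampere-turns").
def mmf_ampere_turns(turns: int, current_a: float) -> float:
    """Return the MMF, in ampere-turns, of a loop enclosing `turns` turns."""
    return turns * current_a

print(mmf_ampere_turns(500, 0.5))  # 250.0 ampere-turns
```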
sciq-8100
multiple_choice
Down syndrome is caused by having three copies of what, a condition known as trisomy?
[ "DNA", "genome", "variation", "chromosome" ]
D
Relevant Documents: Document 0::: Down syndrome is a chromosomal abnormality characterized by the presence of an extra copy of genetic material on chromosome 21, either in whole (trisomy 21) or part (such as due to translocations). The effects of the extra copy vary greatly from individual to individual, depending on the extent of the extra copy, genetic background, environmental factors, and random chance. Down syndrome can occur in all human populations, and analogous effects have been found in other species, such as chimpanzees and mice. In 2005, researchers were able to create transgenic mice with most of human chromosome 21 (in addition to their normal chromosomes). A typical human karyotype is shown here. Every chromosome has two copies. In the bottom right, there are chromosomal differences between males (XY) and females (XX), which do not concern us. A typical human karyotype is designated as 46,XX or 46,XY, indicating 46 chromosomes with an XX arrangement for females and 46 chromosomes with an XY arrangement for males. For this article, we will use females for the karyotype designation (46,XX). Trisomy 21 Trisomy 21 (47,XY,+21) is caused by a meiotic nondisjunction event. A typical gamete (either egg or sperm) has one copy of each chromosome (23 total). When it is combined with a gamete from the other parent during conception, the child has 46 chromosomes. However, with nondisjunction, a gamete is produced with an extra copy of chromosome 21 (the gamete has 24 chromosomes). When combined with a typical gamete from the other parent, the child now has 47 chromosomes, with three copies of chromosome 21. The trisomy 21 karyotype figure shows the chromosomal arrangement, with the prominent extra chromosome 21. Trisomy 21 is the cause of approximately 95% of observed Down syndrome, with 88% coming from nondisjunction in the maternal gamete and 8% coming from nondisjunction in the paternal gamete. 
Mitotic nondisjunction after conception would lead to mosaicism, and is discussed later. Document 1::: Mouse models have frequently been used to study Down syndrome due to the close similarity in the genomes of mice and humans, and the prevalence of mice usage in laboratory research. Background Trisomy 21, an extra copy of the 21st chromosome, is responsible for causing Down syndrome, and the mouse chromosome 16 closely resembles human chromosome 21. In 1979, trisomy of the mouse chromosome 16 (Ts16) initially showed potential to be a model organism for human Down syndrome. However, Ts16 embryos rarely survive until birth, making them unable to serve as a model for behavior and postnatal development. This dissimilarity in survival between species arises from the presence of genes on mouse chromosome 16 that are not present on human chromosome 21, introducing additional gene dosage imbalances. Because of this disadvantage, more specific mouse models have been utilized. Ts65Dn Model The Ts65Dn mouse model was first introduced in 1993, and resembles human trisomy 21 more closely than the Ts16 model. In Ts65Dn, cells possess an extra copy of a segment of genes on chromosome 16 as well as a segment of genes on chromosome 17. From this model, various Down syndrome phenotypes are produced, including behavioral abnormalities and cognitive defects. DNA damage Ts65Dn mouse muscle stem cells accumulate DNA damage. These cells also over-express a histone deubiquitinating enzyme, Usp16, which regulates the DNA damage response. These dysfunctions of muscle stem cells may impair muscle regeneration and contribute to Down syndrome pathologies. Ts65Dn mice have significantly reduced numbers of hematopoietic stem cells (HSCs) along with an increase in HSC production of reactive oxygen species compared to euploid cells of wild-type littermates. 
Spontaneous DNA double-strand breaks are significantly increased in HSCs from Ts65Dn mice, and this correlates with significantly reduced HSC clonogenic activity compared to controls. HSCs from Ts65Dn mice are also less proficient Document 2::: Kathryn "Kay" McGee (née Greene, May 6, 1920, in Chicago, Illinois – February 16, 2012 in River Forest, Illinois) was an American activist, recognized for founding two of the first organizations for the benefit of those with Down Syndrome. She worked seeking recognition, rights and opportunities for people with Down Syndrome. The birth of her fourth child, Tricia McGee, on March 16, 1960, commenced a decades-long effort to bring parents of children with Down Syndrome together to create medical and educational options for such children. Tricia McGee was diagnosed as a mongoloid shortly after birth, which is what doctors called a person with Down Syndrome when Tricia was born, but is now considered a slur. Down Syndrome is a genetic disorder that was first described in 1866 by British doctor John L. Down. It was discovered to be caused by an extra chromosome by French pediatrician Jérôme Lejeune in July 1958, less than two years before Tricia was born. Medical advice in 1960 was typically to institutionalize children with Down Syndrome. After Tricia's birth in 1960, the family pediatrician recommended that the McGees place her in an institution rather than bring her home from the hospital. A few years later when he saw her functioning well at the Alcuin Montessori School in River Forest, Illinois, he explained that he had been told in medical school to make that recommendation to people, and said that he would never do so again. After bringing Tricia home and adjusting to the reality that such an infant faces exceptional developmental challenges, Kay and Martin attempted to learn about Down Syndrome and find similarly situated parents in the Chicago area. 
Early experience and efforts at organizing parents Within six months Kay determined that there were children with Down Syndrome in communities but that they were not visible as society was not accepting and parents were protective of their vulnerable family members. In late 1960 Kay invited those parents she was ab Document 3::: Research of Down syndrome-related genes is based on studying the genes located on chromosome 21. In general, this leads to an overexpression of the genes. Understanding the genes involved may help to target medical treatment to individuals with Down syndrome. It is estimated that chromosome 21 contains 200 to 250 genes. Recent research has identified a region of the chromosome that contains the main genes responsible for the pathogenesis of Down syndrome, located proximal to 21q22.3. The search for major genes involved in Down syndrome characteristics is normally in the region 21q21–21q22.3. Genes Some suspected genes involved in features of Down syndrome are given in the Table 1: General research Research by Arron et al. shows that some of the phenotypes associated with Down syndrome can be related to the disregulation of transcription factors (596), and in particular, NFAT. NFAT is controlled in part by two proteins, DSCR1 and DYRK1A; these genes are located on chromosome-21 (Epstein 582). In people with Down syndrome, these proteins have 1.5 times greater concentration than normal (Arron et al. 597). The elevated levels of DSCR1 and DYRK1A keep NFAT primarily located in the cytoplasm rather than in the nucleus, preventing NFATc from activating the transcription of target genes and thus the production of certain proteins (Epstein 583). This disregulation was discovered by testing in transgenic mice that had segments of their chromosomes duplicated to simulate a human chromosome-21 trisomy (Arron et al. 597). 
A test involving grip strength showed that the genetically modified mice had a significantly weaker grip, much like the characteristically poor muscle tone of an individual with Down syndrome (Arron et al. 596). The mice squeezed a probe with a paw and displayed a 0.2 newton weaker grip (Arron et al. 596). Down syndrome is also characterized by increased socialization. When modified and unmodified mice were observed for social interaction, the modifie Document 4::: Chromosome 21 open reading frame 91 is a protein that in humans is encoded by the C21orf91 gene. EURL is a structural protein gene that is encoded within the human chromosome 21. It stands for gene Expressed in Undifferentiated Retina and Lens and was first found in chick embryos. It is also known as C21orf 91 (Chromosome 21 open reading frame 91). This gene produces many molecules; among them is a protein that influences neural development. This protein-coding region helps to code for neural development in humans and is strongly associated with neural progenitor cells as well as neurons associated with the cerebral cortex of the brain. Thus, being on chromosome 21, defects linked to this gene are heavily correlated to Down Syndrome. There are some knockout models regarding other genes involved in Down Syndrome, but there seems to be primary interest in a knockdown model for this specific gene. It is believed that because there is three codes of this gene rather than two, that the higher concentration of this molecule has the implications leading to Down Syndrome. Scientists are currently working on a hypothesis that the dosage of the EURL protein is directly correlated to neural development in the embryo and how an altered dosage leads to the neural deficits seen in Down Syndrome. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Down syndrome is caused by having three copies of what, a condition known as trisomy? A. DNA B. 
genome C. variation D. chromosome Answer:
sciq-4123
multiple_choice
What is the process by which a liquid changes to a solid?
[ "vaporizing", "freezing", "boiling", "melting" ]
B
Relevant Documents: Document 0::: In materials science, liquefaction is a process that generates a liquid from a solid or a gas or that generates a non-liquid phase which behaves in accordance with fluid dynamics. It occurs both naturally and artificially. As an example of the latter, a "major commercial application of liquefaction is the liquefaction of air to allow separation of the constituents, such as oxygen, nitrogen, and the noble gases." Another is the conversion of solid coal into a liquid form usable as a substitute for liquid fuels. Geology In geology, soil liquefaction refers to the process by which water-saturated, unconsolidated sediments are transformed into a substance that acts like a liquid, often in an earthquake. Soil liquefaction was blamed for building collapses in the city of Palu, Indonesia in October 2018. In a related phenomenon, liquefaction of bulk materials in cargo ships may cause a dangerous shift in the load. Physics and chemistry In physics and chemistry, the phase transitions from solid and gas to liquid (melting and condensation, respectively) may be referred to as liquefaction. The melting point (sometimes called liquefaction point) is the temperature and pressure at which a solid becomes a liquid. In commercial and industrial situations, the process of condensing a gas to liquid is sometimes referred to as liquefaction of gases. Coal Coal liquefaction is the production of liquid fuels from coal using a variety of industrial processes. Dissolution Liquefaction is also used in commercial and industrial settings to refer to mechanical dissolution of a solid by mixing, grinding or blending with a liquid. Food preparation In kitchen or laboratory settings, solids may be chopped into smaller parts sometimes in combination with a liquid, for example in food preparation or laboratory use. This may be done with a blender, or liquidiser in British English. 
Irradiation Liquefaction of silica and silicate glasses occurs on electron beam irradiation of nanos Document 1::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. 
Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation. Magnetism Ferro-magnetic materials can become magnetic. The process is reve Document 2::: Vaporization (or vaporisation) of an element or compound is a phase transition from the liquid phase to vapor. There are two types of vaporization: Evaporation and boiling. Evaporation is a surface phenomenon, where as boiling is a bulk phenomenon. Evaporation is a phase transition from the liquid phase to vapor (a state of substance below critical temperature) that occurs at temperatures below the boiling temperature at a given pressure. Evaporation occurs on the surface. Evaporation only occurs when the partial pressure of vapor of a substance is less than the equilibrium vapor pressure. For example, due to constantly decreasing pressures, vapor pumped out of a solution will eventually leave behind a cryogenic liquid. Boiling is also a phase transition from the liquid phase to gas phase, but boiling is the formation of vapor as bubbles of vapor below the surface of the liquid. Boiling occurs when the equilibrium vapor pressure of the substance is greater than or equal to the atmospheric pressure. The temperature at which boiling occurs is the boiling temperature, or boiling point. The boiling point varies with the pressure of the environment. Sublimation is a direct phase transition from the solid phase to the gas phase, skipping the intermediate liquid phase. Because it does not involve the liquid phase, it is not a form of vaporization. The term vaporization has also been used in a colloquial or hyperbolic way to refer to the physical destruction of an object that is exposed to intense heat or explosive force, where the object is actually blasted into small pieces rather than literally converted to gaseous form. Examples of this usage include the "vaporization" of the uninhabited Marshall Island of Elugelab in the 1952 Ivy Mike thermonuclear test. 
Many other examples can be found throughout the various MythBusters episodes that have involved explosives, chief among them being Cement Mix-Up, where they "vaporized" a cement truck with ANFO. At the moment o Document 3::: Sorption is a physical and chemical process by which one substance becomes attached to another. Specific cases of sorption are treated in the following articles: Absorption "the incorporation of a substance in one state into another of a different state" (e.g., liquids being absorbed by a solid or gases being absorbed by a liquid); Adsorption The physical adherence or bonding of ions and molecules onto the surface of another phase (e.g., reagents adsorbed to a solid catalyst surface); Ion exchange An exchange of ions between two electrolytes or between an electrolyte solution and a complex. The reverse of sorption is desorption. Sorption rate The adsorption and absorption rate of a diluted solute in gas or liquid solution to a surface or interface can be calculated using Fick's laws of diffusion. See also Sorption isotherm Document 4::: Deposition is the phase transition in which gas transforms into solid without passing through the liquid phase. Deposition is a thermodynamic process. The reverse of deposition is sublimation and hence sometimes deposition is called desublimation. Applications Examples One example of deposition is the process by which, in sub-freezing air, water vapour changes directly to ice without first becoming a liquid. This is how frost and hoar frost form on the ground or other surfaces. Another example is when frost forms on a leaf. For deposition to occur, thermal energy must be removed from a gas. When the air becomes cold enough, water vapour in the air surrounding the leaf loses enough thermal energy to change into a solid. Even though the air temperature may be below the dew point, the water vapour may not be able to condense spontaneously if there is no way to remove the latent heat. 
When the leaf is introduced, the supercooled water vapour immediately begins to condense, but by this point is already past the freezing point. This causes the water vapour to change directly into a solid. Another example is the soot that is deposited on the walls of chimneys. Soot molecules rise from the fire in a hot and gaseous state. When they come into contact with the walls they cool, and change to the solid state, without formation of the liquid state. The process is made use of industrially in combustion chemical vapour deposition. Industrial applications There is an industrial coatings process, known as evaporative deposition, whereby a solid material is heated to the gaseous state in a low-pressure chamber, the gas molecules travel across the chamber space and then deposit to the solid state on a target surface, forming a smooth and thin layer on the target surface. Again, the molecules do not go through an intermediate liquid state when going from the gas to the solid. See also physical vapor deposition, which is a class of processes used to deposit thin films of various The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the process by which a liquid changes to a solid? A. vaporizing B. freezing C. boiling D. melting Answer:
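The freezing transition asked about above releases the latent heat of fusion, Q = m·L_f. A minimal numeric sketch, assuming water's textbook latent heat of fusion of about 334 kJ/kg (the function name and values are illustrative, not from the source):

```python
# Energy released when a liquid freezes: Q = m * L_f, where L_f is the
# latent heat of fusion. 334 kJ/kg is the textbook value for water.
L_FUSION_WATER = 334e3  # J/kg (approximate)

def heat_released_on_freezing(mass_kg: float) -> float:
    """Return the energy in joules released when `mass_kg` of water freezes."""
    return mass_kg * L_FUSION_WATER

print(heat_released_on_freezing(2.0))  # 668000.0 J for 2 kg of water
```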
sciq-1236
multiple_choice
What are the "levels" in a food chain or web called?
[ "parts", "root", "gauges", "trophic" ]
D
Relevant Documents: Document 0::: The trophic level of an organism is the position it occupies in a food web. A food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finishes with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a food "web". Ecological communities with higher biodiversity form more complex trophic paths. The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment. History The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman). Overview The three basic ways in which organisms get food are as producers, consumers, and decomposers. Producers (autotrophs) are typically plants or algae. Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis. Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores. 
Decomposers (detritivores) break down dead plant and animal material and wastes and release it again as energy and nutrients into Document 1::: Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food. Classification of consumer types The standard categorization Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores and omnivores are meat eaters and plant eaters. An extensive classification of consumer categories based on a list of feeding behaviors exists. The Getz categorization Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes between consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage. 
In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal Document 2::: The soil food web is the community of organisms living all or part of their lives in the soil. It describes a complex living system in the soil and how it interacts with the environment, plants, and animals. Food webs describe the transfer of energy between species in an ecosystem. While a food chain examines one, linear, energy pathway through an ecosystem, a food web is more complex and illustrates all of the potential pathways. Much of this transferred energy comes from the sun. Plants use the sun’s energy to convert inorganic compounds into energy-rich, organic compounds, turning carbon dioxide and minerals into plant material by photosynthesis. Plant flowers exude energy-rich nectar above ground and plant roots exude acids, sugars, and ectoenzymes into the rhizosphere, adjusting the pH and feeding the food web underground. Plants are called autotrophs because they make their own energy; they are also called producers because they produce energy available for other organisms to eat. Heterotrophs are consumers that cannot make their own food. In order to obtain energy they eat plants or other heterotrophs. Above ground food webs In above ground food webs, energy moves from producers (plants) to primary consumers (herbivores) and then to secondary consumers (predators). The phrase, trophic level, refers to the different levels or steps in the energy pathway. In other words, the producers, consumers, and decomposers are the main trophic levels. This chain of energy transferring from one species to another can continue several more times, but eventually ends. At the end of the food chain, decomposers such as bacteria and fungi break down dead plant and animal material into simple nutrients. Methodology The nature of soil makes direct observation of food webs difficult. 
Since soil organisms range in size from less than 0.1 mm (nematodes) to greater than 2 mm (earthworms) there are many different ways to extract them. Soil samples are often taken using a metal Document 3::: Hierarchy theory is a means of studying ecological systems in which the relationship between all of the components is of great complexity. Hierarchy theory focuses on levels of organization and issues of scale, with a specific focus on the role of the observer in the definition of the system. Complexity in this context does not refer to an intrinsic property of the system but to the possibility of representing the systems in a plurality of non-equivalent ways depending on the pre-analytical choices of the observer. Instead of analyzing the whole structure, hierarchy theory refers to the analysis of hierarchical levels, and the interactions between them. See also Biological organisation Timothy F. H. Allen Deep history Big history Deep time Deep ecology Infrastructure-based development World-systems theory Structuralist economics Dependency theory Document 4::: This glossary of biology terms is a list of definitions of fundamental terms and concepts used in biology, the study of life and of living organisms. It is intended as introductory material for novices; for more specific and technical definitions from sub-disciplines and related fields, see Glossary of cell biology, Glossary of genetics, Glossary of evolutionary biology, Glossary of ecology, Glossary of environmental science and Glossary of scientific naming, or any of the organism-specific glossaries in :Category:Glossaries of biology. 
Related to this search Index of biology articles Outline of biology Glossaries of sub-disciplines and related fields: Glossary of botany Glossary of ecology Glossary of entomology Glossary of environmental science Glossary of genetics Glossary of ichthyology Glossary of ornithology Glossary of scientific naming Glossary of speciation Glossary of virology The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are the "levels" in a food chain or web called? A. parts B. root C. gauges D. trophic Answer:
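The trophic-level definition in the excerpt above (producers at level 1, each consumer one step above what it eats) can be sketched as a small recursion over a food web. The web below is hypothetical, and the code uses the "longest chain" convention, one of several definitions in use:

```python
# Producers (organisms that eat nothing) sit at trophic level 1; each
# consumer sits one level above the highest-level thing it eats.
def trophic_levels(eats):
    """Map each organism in the food web `eats` to its trophic level."""
    levels = {}
    def level(org):
        if org not in levels:
            prey = eats.get(org, [])
            levels[org] = 1 if not prey else 1 + max(level(p) for p in prey)
        return levels[org]
    for org in eats:
        level(org)
    return levels

web = {"grass": [], "rabbit": ["grass"], "fox": ["rabbit"], "hawk": ["fox", "rabbit"]}
print(trophic_levels(web))  # {'grass': 1, 'rabbit': 2, 'fox': 3, 'hawk': 4}
```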
sciq-8615
multiple_choice
What is the transfer of thermal energy between substances called?
[ "Permeation", "heat", "Diffusion", "Radiation" ]
B
Relevant Documents: Document 0::: Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species (mass transfer in the form of advection), either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system. Heat conduction, also called diffusion, is the direct microscopic exchanges of kinetic energy of particles (such as molecules) or quasiparticles (such as lattice waves) through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium. Such spontaneous heat transfer always occurs from a region of high temperature to another region of lower temperature, as described in the second law of thermodynamics. Heat convection occurs when the bulk flow of a fluid (gas or liquid) carries its heat through the fluid. All convective processes also move heat partly by diffusion, as well. The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". The former process is often called "forced convection." In this case, the fluid is forced to flow by use of a pump, fan, or other mechanical means. Thermal radiation occurs through a vacuum or any transparent medium (solid or fluid or gas). 
It is the transfer of energy by means of photons or electromagnetic waves governed by the same laws. Overview Heat Document 1::: Thermofluids is a branch of science and engineering encompassing four intersecting fields: Heat transfer Thermodynamics Fluid mechanics Combustion The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids". Heat transfer Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer. Sections include: Energy transfer by heat, work and mass Laws of thermodynamics Entropy Refrigeration Techniques Properties and nature of pure substances Applications Engineering: Predicting and analysing the performance of machines Thermodynamics Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems. Fluid mechanics Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. 
Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance. Sections include: Flu Document 2::: Conduction is the process by which heat is transferred from the hotter end to the colder end of an object. The ability of the object to conduct heat is known as its thermal conductivity, and is denoted . Heat spontaneously flows along a temperature gradient (i.e. from a hotter body to a colder body). For example, heat is conducted from the hotplate of an electric stove to the bottom of a saucepan in contact with it. In the absence of an opposing external driving energy source, within a body or between bodies, temperature differences decay over time, and thermal equilibrium is approached, temperature becoming more uniform. In conduction, the heat flow is within and through the body itself. In contrast, in heat transfer by thermal radiation, the transfer is often between bodies, which may be separated spatially. Heat can also be transferred by a combination of conduction and radiation. In solids, conduction is mediated by the combination of vibrations and collisions of molecules, propagation and collisions of phonons, and diffusion and collisions of free electrons. In gases and liquids, conduction is due to the collisions and diffusion of molecules during their random motion. Photons in this context do not collide with one another, and so heat transport by electromagnetic radiation is conceptually distinct from heat conduction by microscopic diffusion and collisions of material particles and phonons. But the distinction is often not easily observed unless the material is semi-transparent. In the engineering sciences, heat transfer includes the processes of thermal radiation, convection, and sometimes mass transfer. Usually, more than one of these processes occurs in a given situation. 
Overview On a microscopic scale, conduction occurs within a body considered as being stationary; this means that the kinetic and potential energies of the bulk motion of the body are separately accounted for. Internal energy diffuses as rapidly moving or vibrating atoms and molecule Document 3::: Heat transfer physics describes the kinetics of energy storage, transport, and energy transformation by principal energy carriers: phonons (lattice vibration waves), electrons, fluid particles, and photons. Heat is energy stored in temperature-dependent motion of particles including electrons, atomic nuclei, individual atoms, and molecules. Heat is transferred to and from matter by the principal energy carriers. The state of energy stored within matter, or transported by the carriers, is described by a combination of classical and quantum statistical mechanics. The energy is converted among the various carriers. The heat transfer processes (or kinetics) are governed by the rates at which various related physical phenomena occur, such as (for example) the rate of particle collisions in classical mechanics. These various states and kinetics determine the heat transfer, i.e., the net rate of energy storage or transport. Governing these processes from the atomic level (atom or molecule length scale) to macroscale are the laws of thermodynamics, including conservation of energy. Introduction Heat is thermal energy associated with temperature-dependent motion of particles. The macroscopic energy equation for infinitesimal volume used in heat transfer analysis is ∇·q = −ρc_p(∂T/∂t) + Σ_{i,j} ṡ_{i-j}, where q is the heat flux vector, −ρc_p(∂T/∂t) is the temporal change of internal energy (ρ is density, c_p is the specific heat capacity at constant pressure, T is temperature and t is time), and ṡ_{i-j} is the energy conversion to and from thermal energy (i and j are for principal energy carriers). So, the terms represent energy transport, storage and transformation. 
The heat flux vector q is composed of three macroscopic fundamental modes, which are conduction (q_k = −k∇T, k: thermal conductivity), convection (q_u = ρc_p u T, u: velocity), and radiation (q_r = 2π ∫∫ I_{ph,ω} sinθ dθ dω s, ω: angular frequency, θ: polar angle, I_{ph,ω}: spectral, directional radiation intensity, s: unit vector), i.e., q = q_k + q_u + q_r. Once states and kinetics of the energy conversion and thermophysical properties are known, the fate of heat Document 4::: Thermal engineering is a specialized sub-discipline of mechanical engineering that deals with the movement of heat energy and transfer. The energy can be transferred between two mediums or transformed into other forms of energy. A thermal engineer will have knowledge of thermodynamics and the process to convert generated energy from thermal sources into chemical, mechanical, or electrical energy. Many process plants use a wide variety of machines that utilize components that use heat transfer in some way. Many plants use heat exchangers in their operations. A thermal engineer must allow the proper amount of energy to be transferred for correct use. Too much and the components could fail, too little and the system will not function at all. Thermal engineers must have an understanding of economics and the components that they will be servicing or interacting with. Some components that a thermal engineer could work with include heat exchangers, heat sinks, bi-metal strips, radiators and many more. Some systems that require a thermal engineer include: boilers, heat pumps, water pumps, engines, and more. Part of being a thermal engineer is to improve a current system and make it more efficient than the current system. Many industries employ thermal engineers, some main ones are the automotive manufacturing industry, commercial construction, and the heating, ventilation and cooling industry. Job opportunities for a thermal engineer are very broad and promising. Thermal engineering may be practiced by mechanical engineers and chemical engineers. 
One or more of the following disciplines may be involved in solving a particular thermal engineering problem: Thermodynamics, Fluid mechanics, Heat transfer, or Mass transfer. One branch of knowledge used frequently in thermal engineering is that of thermofluids. Applications Boiler design Combustion engines Cooling systems Cooling of computer chips Heat exchangers HVAC Process Fired Heaters Refrigeration Systems Compressed Air Sy The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the transfer of thermal energy between substances called? A. Permeation B. heat C. Diffusion D. Radiation Answer:
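The conduction mechanism discussed in the documents above follows Fourier's law, q = −k·dT/dx. A minimal illustrative sketch of that relation follows; the slab thickness, conductivity, and temperatures are hypothetical values chosen only to show the sign convention (heat flows from the hotter face to the colder face), not figures from the source.

```python
def conductive_flux(k, t_hot, t_cold, thickness):
    """Steady-state 1-D conductive heat flux (W/m^2) through a slab.

    k: thermal conductivity (W/(m.K)); t_hot, t_cold: face temperatures (K);
    thickness: slab thickness (m). Positive flux points from hot to cold.
    """
    return k * (t_hot - t_cold) / thickness

# A 0.1 m slab with k = 0.5 W/(m.K) held between 350 K and 300 K:
q = conductive_flux(0.5, 350.0, 300.0, 0.1)
print(q)  # 250.0 W/m^2
```

Reversing the face temperatures flips the sign of the flux, reflecting the second-law statement above that heat flows spontaneously from hot to cold.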
sciq-3100
multiple_choice
What are the two basic parts that all flowering plants have in common?
[ "blade and petiole", "metal and petiole", "leaf and petiole", "stalk and petiole" ]
A
Relevant Documents: Document 0::: In botany, floral morphology is the study of the diversity of forms and structures presented by the flower, which, by definition, is a branch of limited growth that bears the modified leaves responsible for reproduction and protection of the gametes, called floral pieces. Fertile leaves or sporophylls carry sporangiums, which will produce male and female gametes and therefore are responsible for producing the next generation of plants. The sterile leaves are modified leaves whose function is to protect the fertile parts or to attract pollinators. The branch of the flower that joins the floral parts to the stem is a shaft called the pedicel, which normally dilates at the top to form the receptacle in which the various floral parts are inserted. All spermatophytes ("seed plants") possess flowers as defined here (in a broad sense), but the internal organization of the flower is very different in the two main groups of spermatophytes: living gymnosperms and angiosperms. Gymnosperms may possess flowers that are gathered in strobili, or the flower itself may be a strobilus of fertile leaves. Instead, a typical angiosperm flower possesses verticils or ordered whorls that, from the outside in, are composed first of sterile parts, commonly called sepals (if their main function is protective) and petals (if their main function is to attract pollinators), and then the fertile parts, with reproductive function, which are composed of verticils or whorls of stamens (which carry the male gametes) and finally carpels (which enclose the female gametes). The arrangement of the floral parts on the axis, the presence or absence of one or more floral parts, the size, the pigmentation and the relative arrangement of the floral parts are responsible for the existence of a great variety of flower types. Such diversity is particularly important in phylogenetic and taxonomic studies of angiosperms. 
The evolutionary interpretation of the different flower types takes into account aspects of Document 1::: The floral axis (sometimes referred to as the receptacle) is the area of the flower upon which the reproductive organs and other ancillary organs are attached. It is also the point at the center of a floral diagram. Many flowers in division Angiosperma appear on floral axes. The floral axis can differ in form depending on the type of plant. For example, monocotyledons have a weakly developed floral axis compared to dicotyledons, and will therefore rarely possess a floral disc, which is common among dicotyledons. Floral diagramming Floral diagramming is a method used to graphically describe a flower. In the context of floral diagramming, the floral axis represents the center point around which the diagram is oriented. The floral axis can also be referred to as the receptacle in floral diagrams or when describing the structure of the flower. The main or mother axis in floral diagrams is not synonymous with the floral axis, rather it refers to where the stem of the flower is in relation to the diagram. The floral axis is also useful for identifying the type of symmetry that a flower exhibits. Function The floral axis serves as the attachment point for organs of the flower, such as the reproductive organs (pistil and stamen) and other organs such as the sepals and carpels. The floral axis acts much like a modified stem and births the organs that are attached to it. The fusion of a plant's organs and the amount of organs that are developed from the floral axis largely depends on the determinateness of the floral axis. The floral axis does perform different functions for different types of plants. For instance, with dicotyledons, the floral axis acts as a nectary, while that is not the case with monocotyledons. More specialized functions can also be performed by the floral axis. 
For example, in the plant Hibiscus, the floral axis is able to proliferate and produce fruit, rendering processes like self pollination unnecessary. Document 2::: Floral diagram is a graphic representation of flower structure. It shows the number of floral organs, their arrangement and fusion. Different parts of the flower are represented by their respective symbols. Floral diagrams are useful for flower identification or can help in understanding angiosperm evolution. They were introduced in the late 19th century and are generally attributed to A. W. Eichler. They are typically used with the floral formula of that flower to study its morphology. History In the 19th century, two contrasting methods of describing the flower were introduced: the textual floral formulae and pictorial floral diagrams. Floral diagrams are credited to A. W. Eichler, his extensive work Blüthendiagramme (1875, 1878) remains a valuable source of information on floral morphology. Eichler inspired later generation of scientists, including John Henry Schaffner. Diagrams were included e.g. in Types of Floral Mechanism by Church (1908). They were used in different textbooks, e.g. Organogenesis of Flowers by Sattler (1973), Botanische Bestimmungsübungen by Stützel (2006) or Plant Systematics by Simpson (2010). Floral Diagrams (2010) by Ronse De Craene followed Eichler’s approach using the contemporary APG II system. Basic characteristics and significance A floral diagram is a schematic cross-section through a young flower. It may be also defined as “projection of the flower perpendicular to its axis”. It usually shows the number of floral parts, their sizes, relative positions and fusion. Different organs are represented by distinguishable symbols, which may be uniform for one organ type, or may reflect concrete morphology. The diagram may also include symbols that don’t represent physical structures, but carry additional information (e.g. symmetry plane orientation). 
There is no agreement on how floral diagrams should be drawn, it depends on the author whether it is just a rough representation, or whether structural details of the flower are included. Document 3::: Edible plant stems are one part of plants that are eaten by humans. Most plants are made up of stems, roots, leaves, flowers, and produce fruits containing seeds. Humans most commonly eat the seeds (e.g. maize, wheat), fruit (e.g. tomato, avocado, banana), flowers (e.g. broccoli), leaves (e.g. lettuce, spinach, and cabbage), roots (e.g. carrots, beets), and stems (e.g. asparagus of many plants. There are also a few edible petioles (also known as leaf stems) such as celery or rhubarb. Plant stems have a variety of functions. Stems support the entire plant and have buds, leaves, flowers, and fruits. Stems are also a vital connection between leaves and roots. They conduct water and mineral nutrients through xylem tissue from roots upward, and organic compounds and some mineral nutrients through phloem tissue in any direction within the plant. Apical meristems, located at the shoot tip and axillary buds on the stem, allow plants to increase in length, surface, and mass. In some plants, such as cactus, stems are specialized for photosynthesis and water storage. Modified stems Typical stems are located above ground, but there are modified stems that can be found either above or below ground. Modified stems located above ground are phylloids, stolons, runners, or spurs. Modified stems located below ground are corms, rhizomes, and tubers. Detailed description of edible plant stems Asparagus The edible portion is the rapidly emerging stems that arise from the crowns in the Bamboo The edible portion is the young shoot (culm). Birch Trunk sap is drunk as a tonic or rendered into birch syrup, vinegar, beer, soft drinks, and other foods. Broccoli The edible portion is the peduncle stem tissue, flower buds, and some small leaves. 
Cauliflower The edible portion is proliferated peduncle and flower tissue. Cinnamon Many favor the unique sweet flavor of the inner bark of cinnamon, and it is commonly used as a spice. Fig The edible portion is stem tissue. The Document 4::: A pseudanthium (; ) is an inflorescence that resembles a flower. The word is sometimes used for other structures that are neither a true flower nor a true inflorescence. Examples of pseudanthia include flower heads, composite flowers, or capitula, which are special types of inflorescences in which anything from a small cluster to hundreds or sometimes thousands of flowers are grouped together to form a single flower-like structure. Pseudanthia take various forms. The real flowers (the florets) are generally small and often greatly reduced, but the pseudanthium itself can sometimes be quite large (as in the heads of some varieties of sunflower). Pseudanthia are characteristic of the daisy and sunflower family (Asteraceae), whose flowers are differentiated into ray flowers and disk flowers, unique to this family. The disk flowers in the center of the pseudanthium are actinomorphic and the corolla is fused into a tube. Flowers on the periphery are zygomorphic and the corolla has one large lobe (the so-called "petals" of a daisy are individual ray flowers, for example). Either ray or disk flowers may be absent in some plants: Senecio vulgaris lacks ray flowers and Taraxacum officinale lacks disk flowers. The individual flowers of a pseudanthium in the family Asteraceae (or Compositae) are commonly called florets. The pseudanthium has a whorl of bracts below the flowers, forming an involucre. In all cases, a pseudanthium is superficially indistinguishable from a flower, but closer inspection of its anatomy will reveal that it is composed of multiple flowers. 
Thus, the pseudanthium represents an evolutionary convergence of the inflorescence to a reduced reproductive unit that may function in pollination like a single flower, at least in plants that are animal pollinated. Pseudanthia may be grouped into types. The first type has units of individual flowers that are recognizable as single flowers even if fused. In the second type, the flowers do not appear as individua The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are the two basic parts that all flowering plants have in common? A. blade and petiole B. metal and petiole C. leaf and petiole D. stalk and petiole Answer:
sciq-8538
multiple_choice
What bodily function do the triceps help perform?
[ "extend the arm", "lift the leg", "perform crunches", "make a fist" ]
A
Relevant Documents: Document 0::: Kinesiology () is the scientific study of human body movement. Kinesiology addresses physiological, anatomical, biomechanical, pathological, neuropsychological principles and mechanisms of movement. Applications of kinesiology to human health include biomechanics and orthopedics; strength and conditioning; sport psychology; motor control; skill acquisition and motor learning; methods of rehabilitation, such as physical and occupational therapy; and sport and exercise physiology. Studies of human and animal motion include measures from motion tracking systems, electrophysiology of muscle and brain activity, various methods for monitoring physiological function, and other behavioral and cognitive research techniques. Basics Kinesiology studies the science of human movement, performance, and function by applying the fundamental sciences of Cell Biology, Molecular Biology, Chemistry, Biochemistry, Biophysics, Biomechanics, Biomathematics, Biostatistics, Anatomy, Physiology, Exercise Physiology, Pathophysiology, Neuroscience, and Nutritional science. A bachelor's degree in kinesiology can provide strong preparation for graduate study in biomedical research, as well as in professional programs, such as medicine, dentistry, physical therapy, and occupational therapy. The term "kinesiologist" is not a licensed nor professional designation in many countries, with the notable exception of Canada. Individuals with training in this area can teach physical education, work as personal trainers and sport coaches, provide consulting services, conduct research and develop policies related to rehabilitation, human motor performance, ergonomics, and occupational health and safety. 
In North America, kinesiologists may study to earn a Bachelor of Science, Master of Science, or Doctorate of Philosophy degree in Kinesiology or a Bachelor of Kinesiology degree, while in Australia or New Zealand, they are often conferred an Applied Science (Human Movement) degree (or higher). Many doctor Document 1::: In an isotonic contraction, tension remains the same, whilst the muscle's length changes. Isotonic contractions differ from isokinetic contractions in that in isokinetic contractions the muscle speed remains constant. While superficially identical, as the muscle's force changes via the length-tension relationship during a contraction, an isotonic contraction will keep force constant while velocity changes, but an isokinetic contraction will keep velocity constant while force changes. A near isotonic contraction is known as Auxotonic contraction. There are two types of isotonic contractions: (1) concentric and (2) eccentric. In a concentric contraction, the muscle tension rises to meet the resistance, then remains the same as the muscle shortens. In eccentric, the muscle lengthens due to the resistance being greater than the force the muscle is producing. Concentric This type is typical of most exercise. The external force on the muscle is less than the force the muscle is generating - a shortening contraction. The effect is not visible during the classic biceps curl, which is in fact auxotonic because the resistance (torque due to the weight being lifted) does not remain the same through the exercise. Tension is highest at a parallel to the floor level, and eases off above and below this point. Therefore, tension changes as well as muscle length. Eccentric There are two main features to note regarding eccentric contractions. First, the absolute tensions achieved can be very high relative to the muscle's maximum tetanic tension generating capacity (you can set down a much heavier object than you can lift). 
Second, the absolute tension is relatively independent of lengthening velocity. Muscle injury and soreness are selectively associated with eccentric contraction. The gain in muscle strength from exercises involving eccentric contractions is lower than from concentric exercises. However, because higher levels of tension are easier to attain during exercises th Document 2::: Kinanthropometry is defined as the study of human size, shape, proportion, composition, maturation, and gross function, in order to understand growth, exercise, performance, and nutrition. It is a scientific discipline that is concerned with the measurement of individuals in a variety of morphological perspectives, its application to movement and those factors which influence movement, including: components of body build, body measurements, proportions, composition, shape and maturation; motor abilities and cardiorespiratory capacities; physical activity including recreational activity as well as highly specialized sports performance. The predominant focus is upon obtaining detailed measurements of the body composition of a given person. Kinanthropometry is the interface between human anatomy and movement. It is the application of a series of measurements made on the body and from these we can use the data that we gather directly or perform calculations using the data to produce various indices and body composition predictions and to measure and describe physique. Kinanthropometry is an unknown word for many people except those inside the field of sport science. Describing the etymology of the word kinanthropometry can help illustrate simply what you are going to talk about. However, if you have to say just a few sentences about the general scope of it, some problems will arise immediately. Is it a science? Why are its central definitions so ambiguous and varied? What does kinanthropometric assessment really matter for? And so on. 
Defining a particular aim for kinanthropometry is central for its full understanding. Ross et al. (1972) said “K is a scientific discipline that studies the body size, the proportionality, the performance of movement, the body composition and principal functions of the body. This so well cited definition is not completely exact as the last four words show. What are the kinanthropometric methods that truly tell us something about prin Document 3::: Normal aging movement control in humans is about the changes in the muscles, motor neurons, nerves, sensory functions, gait, fatigue, visual and manual responses, in men and women as they get older but who do not have neurological, muscular (atrophy, dystrophy...) or neuromuscular disorder. With aging, neuromuscular movements are impaired, though with training or practice, some aspects may be prevented. Force production For voluntary force production, action potentials occur in the cortex. They propagate in the spinal cord, the motor neurons and the set of muscle fibers they innervate. This results in a twitch which properties are driven by two mechanisms: motor unit recruitment and rate coding. Both mechanisms are affected with aging. For instance, the number of motor units may decrease, the size of the motor units, i.e. the number of muscle fibers they innervate may increase, the frequency at which the action potentials are triggered may be reduced. Consequently, force production is generally impaired in old adults. Aging is associated with decreases in muscle mass and strength. These decreases may be partially due to losses of alpha motor neurons. By the age of 70, these losses occur in both proximal and distal muscles. In biceps brachii and brachialis, old adults show decreased strength (by 1/3) correlated with a reduction in the number of motor units (by 1/2). Old adults show evidence that remaining motor units may become larger as motor units innervate collateral muscle fibers. 
In first dorsal interosseus, almost all motor units are recruited at moderate rate coding, leading to 30-40% of maximal voluntary contraction (MVC). Motor unit discharge rates measured at 50% MVC are not significantly different in the young subjects from those observed in the old adults. However, for the maximal effort contractions, there is an appreciable difference in discharge rates between the two age groups. Discharge rates obtained at 100% of MVC are 64% smaller in the old adul Document 4::: Myology is the study of the muscular system, including the study of the structure, function and diseases of muscle. The muscular system consists of skeletal muscle, which contracts to move or position parts of the body (e.g., the bones that articulate at joints), smooth and cardiac muscle that propels, expels or controls the flow of fluids and contained substance. See also Myotomy Oral myology The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What bodily function do the triceps help perform? A. extend the arm B. lift the leg C. perform crunches D. make a fist Answer:
sciq-9223
multiple_choice
When additional water is added to an aqueous solution, what happens to the concentration of that solution?
[ "no change", "decreases", "increases", "doubles" ]
B
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: increases; decreases; stays the same; impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Additional Mathematics is a qualification in mathematics, commonly taken by students in high-school (or GCSE exam takers in the United Kingdom). It features a range of problems set out in a different format and wider content to the standard Mathematics at the same level. Additional Mathematics in Singapore In Singapore, Additional Mathematics is an optional subject offered to pupils in secondary school—specifically those who have an aptitude in Mathematics and are in the Normal (Academic) stream or Express stream. The syllabus covered is more in-depth as compared to Elementary Mathematics, with additional topics including Algebra binomial expansion, proofs in plane geometry, differential calculus and integral calculus. Additional Mathematics is also a prerequisite for students who are intending to offer H2 Mathematics and H2 Further Mathematics at A-level (if they choose to enter a Junior College after secondary school). Students without Additional Mathematics at the 'O' level will usually be offered H1 Mathematics instead. Examination Format The syllabus was updated starting with the 2021 batch of candidates. There are two written papers, each comprising half of the weightage towards the subject. Each paper is 2 hours 15 minutes long and worth 90 marks. Paper 1 has 12 to 14 questions, while Paper 2 has 9 to 11 questions. Generally, Paper 2 would have a graph plotting question based on linear law. GCSE Additional Mathematics in Northern Ireland In Northern Ireland, Additional Mathematics was offered as a GCSE subject by the local examination board, CCEA. There were two examination papers: one which tested topics in Pure Mathematics, and one which tested topics in Mechanics and Statistics. 
It was discontinued in 2014 and replaced with GCSE Further Mathematics—a new qualification whose level exceeds both those offered by GCSE Mathematics, and the analogous qualifications offered in England. Further Maths IGCSE and Additional Maths FSMQ in England Starting from Document 2::: In chemical biology, tonicity is a measure of the effective osmotic pressure gradient; the water potential of two solutions separated by a partially-permeable cell membrane. Tonicity depends on the relative concentration of selective membrane-impermeable solutes across a cell membrane which determine the direction and extent of osmotic flux. It is commonly used when describing the swelling-versus-shrinking response of cells immersed in an external solution. Unlike osmotic pressure, tonicity is influenced only by solutes that cannot cross the membrane, as only these exert an effective osmotic pressure. Solutes able to freely cross the membrane do not affect tonicity because they will always equilibrate with equal concentrations on both sides of the membrane without net solvent movement. It is also a factor affecting imbibition. There are three classifications of tonicity that one solution can have relative to another: hypertonic, hypotonic, and isotonic. A hypotonic solution example is distilled water. Hypertonic solution A hypertonic solution has a greater concentration of non-permeating solutes than another solution. In biology, the tonicity of a solution usually refers to its solute concentration relative to that of another solution on the opposite side of a cell membrane; a solution outside of a cell is called hypertonic if it has a greater concentration of solutes than the cytosol inside the cell. When a cell is immersed in a hypertonic solution, osmotic pressure tends to force water to flow out of the cell in order to balance the concentrations of the solutes on either side of the cell membrane. 
The cytosol is conversely categorized as hypotonic, opposite of the outer solution. When plant cells are in a hypertonic solution, the flexible cell membrane pulls away from the rigid cell wall, but remains joined to the cell wall at points called plasmodesmata. The cells often take on the appearance of a pincushion, and the plasmodesmata almost cease to function b Document 3::: Further Mathematics is the title given to a number of advanced secondary mathematics courses. The term "Higher and Further Mathematics", and the term "Advanced Level Mathematics", may also refer to any of several advanced mathematics courses at many institutions. In the United Kingdom, Further Mathematics describes a course studied in addition to the standard mathematics AS-Level and A-Level courses. In the state of Victoria in Australia, it describes a course delivered as part of the Victorian Certificate of Education (see § Australia (Victoria) for a more detailed explanation). Globally, it describes a course studied in addition to GCE AS-Level and A-Level Mathematics, or one which is delivered as part of the International Baccalaureate Diploma. In other words, more mathematics can also be referred to as part of advanced mathematics, or advanced level math. United Kingdom Background A qualification in Further Mathematics involves studying both pure and applied modules. Whilst the pure modules (formerly known as Pure 4–6 or Core 4–6, now known as Further Pure 1–3, where 4 exists for the AQA board) build on knowledge from the core mathematics modules, the applied modules may start from first principles. The structure of the qualification varies between exam boards. With regard to Mathematics degrees, most universities do not require Further Mathematics, and may incorporate foundation math modules or offer "catch-up" classes covering any additional content. 
Exceptions are the University of Warwick and the University of Cambridge, which require Further Mathematics to at least AS level; University College London requires or recommends an A2 in Further Maths for its maths courses; Imperial College requires an A in A-level Further Maths, while other universities may recommend it or may promise lower offers in return. Some schools and colleges may not offer Further Mathematics, but online resources are available. Although the subject has about 60% of its cohort obtainin Document 4::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam, and there were no computer-based versions of it. ETS administered this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required the exam score as a part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test. Scores were scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores were 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. 
Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. When additional water is added to an aqueous solution, what happens to the concentration of that solution? A. no change B. decreases C. increases D. doubles Answer:
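The dilution question above follows from conservation of solute: adding pure water increases the volume while the moles of solute stay fixed, so the concentration must fall. A minimal sketch of the standard dilution relation C1*V1 = C2*V2 (the function name and example numbers are illustrative, not from the source):

```python
def diluted_concentration(c1: float, v1: float, v_added: float) -> float:
    """Concentration after adding pure solvent to a solution.

    Moles of solute are conserved (C1*V1 = C2*V2), so the new
    concentration is C2 = C1 * V1 / (V1 + V_added).
    """
    if v1 <= 0 or v_added < 0:
        raise ValueError("v1 must be positive and v_added non-negative")
    return c1 * v1 / (v1 + v_added)

# Doubling the volume of a 2.0 mol/L solution halves its concentration:
print(diluted_concentration(c1=2.0, v1=0.5, v_added=0.5))  # 1.0
```

Since any positive `v_added` makes the denominator larger than `v1`, the returned concentration is always below `c1`, which is why "decreases" is the expected answer.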
sciq-816
multiple_choice
Which competition leads to one species going extinct or both becoming more specialized?
[ "interspecific", "beneficial", "mimicry", "intraspecific" ]
A
Relevant Documents: Document 0::: Interspecific competition, in ecology, is a form of competition in which individuals of different species compete for the same resources in an ecosystem (e.g. food or living space). This can be contrasted with mutualism, a type of symbiosis. Competition between members of the same species is called intraspecific competition. If a tree species in a dense forest grows taller than surrounding tree species, it is able to absorb more of the incoming sunlight. However, less sunlight is then available for the trees that are shaded by the taller tree, thus interspecific competition. Leopards and lions can also be in interspecific competition, since both species feed on the same prey, and can be negatively impacted by the presence of the other because they will have less food. Competition is only one of many interacting biotic and abiotic factors that affect community structure. Moreover, competition is not always a straightforward, direct interaction. Interspecific competition may occur when individuals of two separate species share a limiting resource in the same area. If the resource cannot support both populations, then lowered fecundity, growth, or survival may result in at least one species. Interspecific competition has the potential to alter populations, communities and the evolution of interacting species. On an individual organism level, competition can occur as interference or exploitative competition. Types All of the types described here can also apply to intraspecific competition, that is, competition among individuals within a species. Also, any specific example of interspecific competition can be described in terms of both a mechanism (e.g., resource or interference) and an outcome (symmetric or asymmetric). 
Based on mechanism Exploitative competition, also referred to as resource competition, is a form of competition in which one species consumes and either reduces or more efficiently uses a shared limiting resource and therefore depletes the availab Document 1::: Conservation is the maintenance of biological diversity. Conservation can focus on preserving diversity at genetic, species, community or whole ecosystem levels. This article will examine conservation at the species level, because mutualisms involve interactions between species. The ultimate goal of conservation at this level is to prevent the extinction of species. However, species conservation has the broader aim of maintaining the abundance and distribution of all species, not only those threatened with extinction (van Dyke 2008). Determining the value of conserving particular species can be done through the use of evolutionary significant units, which essentially attempt to prioritise the conservation of the species which are rarest, fastest declining, and most distinct genotypically and phenotypically (Moritz 1994, Fraser and Bernatchez 2001). Mutualisms can be defined as "interspecific interactions in which each of two partner species receives a net benefit" (Bronstein et al. 2004). Here net benefit is defined as, a short-term increase in inclusive fitness (IF). Incorporating the concept of genetic relatedness (through IF) is essential because many mutualisms involve the eusocial insects, where the majority of individuals are not reproductively active. The short-term component is chosen because it is operationally useful, even though the role of long-term adaptation is not considered (de Mazancourt et al. 2005). 
This definition of mutualism should suffice for this article, although it neglects discussion of the many subtleties of IF theory applied to mutualisms, and the difficulties of examining short-term compared to long-term benefits, which are discussed in Foster and Wenselneers (2006) and de Mazancourt et al. (2005) respectively. Mutualisms can be broadly divided into two categories. Firstly, obligate mutualism, where two mutualistic partners are completely interdependent for survival and reproduction. Secondly, facultative mutualism, where two mutuali Document 2::: In ecological theory, the Hutchinson's ratio is the ratio of the size differences between similar species when they are living together as compared to when they are isolated. It is named after G. Evelyn Hutchinson who concluded that various key attributes in species varied according to the ratio of 1:1.1 to 1:1.4. The mean ratio 1.3 can be interpreted as the amount of separation necessary to obtain coexistence of species at the same trophic level. The variation in trophic structures of sympatric congeneric species is presumed to lead to niche differentiation, and allowing coexistence of multiple similar species in the same habitat by the partitioning of food resources. Hutchinson concluded that this size ratio could be used as an indicator of the kind of difference necessary to permit two species to co-occur in different niches but at the same level of the food web. The rule's legitimacy has been questioned, as other categories of objects also exhibit size ratios of roughly 1.3. Studies done on interspecific competition and niche changes in Tits (Parus spp.) show that when there are multiple species in the same community there is an expected change in foraging when they are of similar size (size ratio 1-1.2). There was no change found among the less similar species. 
In this paper, this was strong evidence for niche differentiation driven by interspecific competition, and it would also be a good argument for Hutchinson's rule. The simplest and perhaps the most effective way to differentiate the ecological niches of coexisting species is their morphological differentiation (in particular, size differentiation). Hutchinson showed that the average body size ratio in species of the same genus that belong to the same community and use the same resource is about 1.3 (from 1.1 to 1.4) and the respective body weight ratio is 2. This empirical pattern tells us that this rule does not apply to all organisms and ecological situations. And, therefore, it would be of particular Document 3::: Any action or influence that species have on each other is considered a biological interaction. These interactions between species can be considered in several ways. One such way is to depict interactions in the form of a network, which identifies the members and the patterns that connect them. Species interactions are considered primarily in terms of trophic interactions, which depict which species feed on others. Currently, ecological networks that integrate non-trophic interactions are being built. The type of interactions they can contain can be classified into six categories: mutualism, commensalism, neutralism, amensalism, antagonism, and competition. Observing and estimating the fitness costs and benefits of species interactions can be very problematic. The way interactions are interpreted can profoundly affect the ensuing conclusions. Interaction characteristics Characterization of interactions can be made according to various measures, or any combination of them. Prevalence Prevalence identifies the proportion of the population affected by a given interaction, and thus quantifies whether it is relatively rare or common. Generally, only common interactions are considered. 
Negative/ Positive Whether the interaction is beneficial or harmful to the species involved determines the sign of the interaction, and what type of interaction it is classified as. To establish whether they are harmful or beneficial, careful observational and/or experimental studies can be conducted, in an attempt to establish the cost/benefit balance experienced by the members. Strength The sign of an interaction does not capture the impact on fitness of that interaction. One example of this is of antagonism, in which predators may have a much stronger impact on their prey species (death), than parasites (reduction in fitness). Similarly, positive interactions can produce anything from a negligible change in fitness to a life or death impact. Relationship in space and time The rel Document 4::: Competition is an interaction between organisms or species in which both require a resource that is in limited supply (such as food, water, or territory). Competition lowers the fitness of both organisms involved since the presence of one of the organisms always reduces the amount of the resource available to the other. In the study of community ecology, competition within and between members of a species is an important biological interaction. Competition is one of many interacting biotic and abiotic factors that affect community structure, species diversity, and population dynamics (shifts in a population over time). There are three major mechanisms of competition: interference, exploitation, and apparent competition (in order from most direct to least direct). Interference and exploitation competition can be classed as "real" forms of competition, while apparent competition is not, as organisms do not share a resource, but instead share a predator. Competition among members of the same species is known as intraspecific competition, while competition between individuals of different species is known as interspecific competition. 
According to the competitive exclusion principle, species less suited to compete for resources must either adapt or die out, although competitive exclusion is rarely found in natural ecosystems. According to evolutionary theory, competition within and between species for resources is important in natural selection. More recently, however, researchers have suggested that evolutionary biodiversity for vertebrates has been driven not by competition between organisms, but by these animals adapting to colonize empty livable space; this is termed the 'Room to Roam' hypothesis. Interference competition During interference competition, also called contest competition, organisms interact directly by fighting for scarce resources. For example, large aphids defend feeding sites on cottonwood leaves by ejecting smaller aphids from better sites. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which competition leads to one species going extinct or both becoming more specialized? A. interspecific B. beneficial C. mimicry D. intraspecific Answer:
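Document 2 above quotes Hutchinson's ratio as a body size ratio of about 1.3 (range 1.1 to 1.4) alongside a body weight ratio of about 2. A quick consistency check: under isometric scaling (an assumption of this sketch, not stated in the source), weight grows with the cube of linear size, and 1.3 cubed lands near the quoted factor of 2:

```python
def weight_ratio(size_ratio: float) -> float:
    """Isometric scaling assumption: body weight scales as size cubed."""
    return size_ratio ** 3

for r in (1.1, 1.3, 1.4):
    # 1.1 -> 1.33, 1.3 -> 2.20, 1.4 -> 2.74
    print(f"size ratio {r:.1f} -> weight ratio {weight_ratio(r):.2f}")
```

The quoted weight ratio of 2 sits inside the 1.33 to 2.74 range produced by the quoted size ratios, so the two figures are mutually consistent under this scaling assumption.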
sciq-8744
multiple_choice
What can be thought of as the most biologically productive regions on earth?
[ "coasts", "estuaries", "forests", "swamps" ]
B
Relevant Documents: Document 0::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis, ranging from business, social studies, public policy, and healthcare to pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major, and alumni have gone on to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC), which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on the structure and functioning (anatomy, physiology, biochemistry) of the human body and the relevance to human health, with Caribbean-specific experience. The syllabus is organized under five main sections: living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, and the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. 
Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 1::: Ecosystem diversity deals with the variations in ecosystems within a geographical location and its overall impact on human existence and the environment. Ecosystem diversity addresses the combined characteristics of biotic properties (biodiversity) and abiotic properties (geodiversity). It is a variation in the ecosystems found in a region or the variation in ecosystems over the whole planet. Ecological diversity includes the variation in both terrestrial and aquatic ecosystems. Ecological diversity can also take into account the variation in the complexity of a biological community, including the number of different niches, the number of and other ecological processes. An example of ecological diversity on a global scale would be the variation in ecosystems, such as deserts, forests, grasslands, wetlands and oceans. Ecological diversity is the largest scale of biodiversity, and within each ecosystem, there is a great deal of both species and genetic diversity. Impact Diversity in the ecosystem is significant to human existence for a variety of reasons. Ecosystem diversity boosts the availability of oxygen via the process of photosynthesis amongst plant organisms domiciled in the habitat. Diversity in an aquatic environment helps in the purification of water by plant varieties for use by humans. Diversity increases plant varieties which serves as a good source for medicines and herbs for human use. A lack of diversity in the ecosystem produces an opposite result. 
Examples Some examples of ecosystems that are rich in diversity are: Deserts Forests Large marine ecosystems Marine ecosystems Old-growth forests Rainforests Tundra Coral reefs Marine Ecosystem diversity as a result of evolutionary pressure Ecological diversity around the world can be directly linked to the evolutionary and selective pressures that constrain the diversity outcome of the ecosystems within different niches. Tundras, Rainforests, coral reefs and deciduous forests all are form Document 2::: Ecological classification or ecological typology is the classification of land or water into geographical units that represent variation in one or more ecological features. Traditional approaches focus on geology, topography, biogeography, soils, vegetation, climate conditions, living species, habitats, water resources, and sometimes also anthropic factors. Most approaches pursue the cartographical delineation or regionalisation of distinct areas for mapping and planning. Approaches to classifications Different approaches to ecological classifications have been developed in terrestrial, freshwater and marine disciplines. Traditionally these approaches have focused on biotic components (vegetation classification), abiotic components (environmental approaches) or implied ecological and evolutionary processes (biogeographical approaches). Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy (ecotope). Vegetation classification Vegetation is often used to classify terrestrial ecological units. Vegetation classification can be based on vegetation structure and floristic composition. Classifications based entirely on vegetation structure overlap with land cover mapping categories. 
Many schemes of vegetation classification are in use by the land, resource and environmental management agencies of different national and state jurisdictions. The International Vegetation Classification (IVC or EcoVeg) has recently been proposed but has not yet been widely adopted. Vegetation classifications have limited use in aquatic systems, since only a handful of freshwater or marine habitats are dominated by plants (e.g. kelp forests or seagrass meadows). Also, some extreme terrestrial environments, like subterranean or cryogenic ecosystems, are not properly described in vegetation c Document 3::: Plant ecology is a subdiscipline of ecology that studies the distribution and abundance of plants, the effects of environmental factors upon the abundance of plants, and the interactions among plants and between plants and other organisms. Examples of these are the distribution of temperate deciduous forests in North America, the effects of drought or flooding upon plant survival, and competition among desert plants for water, or effects of herds of grazing animals upon the composition of grasslands. A global overview of the Earth's major vegetation types is provided by O.W. Archibold. He recognizes 11 major vegetation types: tropical forests, tropical savannas, arid regions (deserts), Mediterranean ecosystems, temperate forest ecosystems, temperate grasslands, coniferous forests, tundra (both polar and high mountain), terrestrial wetlands, freshwater ecosystems and coastal/marine systems. This breadth of topics shows the complexity of plant ecology, since it includes plants from floating single-celled algae up to large canopy-forming trees. One feature that defines plants is photosynthesis. Photosynthesis is a chemical process that creates glucose and oxygen, which is vital for plant life. 
One of the most important aspects of plant ecology is the role plants have played in creating the oxygenated atmosphere of earth, an event that occurred some 2 billion years ago. It can be dated by the deposition of banded iron formations, distinctive sedimentary rocks with large amounts of iron oxide. At the same time, plants began removing carbon dioxide from the atmosphere, thereby initiating the process of controlling Earth's climate. A long term trend of the Earth has been toward increasing oxygen and decreasing carbon dioxide, and many other events in the Earth's history, like the first movement of life onto land, are likely tied to this sequence of events. One of the early classic books on plant ecology was written by J.E. Weaver and F.E. Clements. It Document 4::: The University of Michigan Biological Station (UMBS) is a research and teaching facility operated by the University of Michigan. It is located on the south shore of Douglas Lake in Cheboygan County, Michigan. The station consists of 10,000 acres (40 km2) of land near Pellston, Michigan in the northern Lower Peninsula of Michigan and 3,200 acres (13 km2) on Sugar Island in the St. Mary's River near Sault Ste. Marie, in the Upper Peninsula. It is one of only 28 Biosphere Reserves in the United States. Overview Founded in 1909, it has grown to include approximately 150 buildings, including classrooms, student cabins, dormitories, a dining hall, and research facilities. Undergraduate and graduate courses are available in the spring and summer terms. It has a full-time staff of 15. In the 2000s, UMBS is increasingly focusing on the measurement of climate change. Its field researchers are gauging the impact of global warming and increased levels of atmospheric carbon dioxide on the ecosystem of the upper Great Lakes region, and are using field data to improve the computer models used to forecast further change. Several archaeological digs have been conducted at the station as well. 
UMBS field researchers sometimes call the station "bug camp" amongst themselves. This is believed to be due to the number of mosquitoes and other insects present. It is also known as "The Bio-Station". The UMBS is also home to Michigan's most endangered species and one of the most endangered species in the world: the Hungerford's Crawling Water Beetle. The species lives in only five locations in the world, two of which are in Emmet County. One of these, a two and a half mile stretch downstream from the Douglas Road crossing of the East Branch of the Maple River supports the only stable population of the Hungerford's Crawling Water Beetle, with roughly 1000 specimens. This area, though technically not part of the UMBS is largely within and along the boundary of the University of Michigan The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What can be thought of as the most biologically productive regions on earth? A. coasts B. estuaries C. forests D. swamps Answer:
sciq-2435
multiple_choice
What is muscle tissue that is attached to the bone called?
[ "cartilage", "ligament", "skeletal tissue", "epithelial tissue" ]
C
Relevant Documents: Document 0::: Vertebrates Tendon cells, or tenocytes, are elongated fibroblast-type cells. The cytoplasm is stretched between the collagen fibres of the tendon. They have a central cell nucleus with a prominent nucleolus. Tendon cells have a well-developed rough endoplasmic reticulum, and they are responsible for the synthesis and turnover of tendon fibres and ground substance. Invertebrates Tendon cells form a connecting epithelial layer between the muscle and shell in molluscs. In gastropods, for example, the retractor muscles connect to the shell via tendon cells. Muscle cells are attached to the collagenous myo-tendon space via hemidesmosomes. The myo-tendon space is then attached to the base of the tendon cells via basal hemidesmosomes, while apical hemidesmosomes, which sit atop microvilli, attach the tendon cells to a thin layer of collagen. This is in turn attached to the shell via organic fibres which insert into the shell. Molluscan tendon cells appear columnar and contain a large basal cell nucleus. The cytoplasm is filled with granular endoplasmic reticulum and sparse Golgi. Dense bundles of microfilaments run the length of the cell, connecting the basal to the apical hemidesmosomes. 
See also List of human cell types derived from the germ layers List of distinct cell types in the adult human body Document 1::: Outline h1.00: Cytology h2.00: General histology H2.00.01.0.00001: Stem cells H2.00.02.0.00001: Epithelial tissue H2.00.02.0.01001: Epithelial cell H2.00.02.0.02001: Surface epithelium H2.00.02.0.03001: Glandular epithelium H2.00.03.0.00001: Connective and supportive tissues H2.00.03.0.01001: Connective tissue cells H2.00.03.0.02001: Extracellular matrix H2.00.03.0.03001: Fibres of connective tissues H2.00.03.1.00001: Connective tissue proper H2.00.03.1.01001: Ligaments H2.00.03.2.00001: Mucoid connective tissue; Gelatinous connective tissue H2.00.03.3.00001: Reticular tissue H2.00.03.4.00001: Adipose tissue H2.00.03.5.00001: Cartilage tissue H2.00.03.6.00001: Chondroid tissue H2.00.03.7.00001: Bone tissue; Osseous tissue H2.00.04.0.00001: Haemotolymphoid complex H2.00.04.1.00001: Blood cells H2.00.04.1.01001: Erythrocyte; Red blood cell H2.00.04.1.02001: Leucocyte; White blood cell H2.00.04.1.03001: Platelet; Thrombocyte H2.00.04.2.00001: Plasma H2.00.04.3.00001: Blood cell production H2.00.04.4.00001: Postnatal sites of haematopoiesis H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue Document 2::: H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue H2.00.05.2.00001: Striated muscle tissue H2.00.06.0.00001: Nerve tissue H2.00.06.1.00001: Neuron H2.00.06.2.00001: Synapse H2.00.06.2.00001: Neuroglia h3.01: Bones h3.02: Joints h3.03: Muscles h3.04: Alimentary system h3.05: Respiratory system h3.06: Urinary system h3.07: Genital system h3.08: Document 3::: Stroma () is the part of a tissue or organ with a structural or connective role. It is made up of all the parts without specific functions of the organ - for example, connective tissue, blood vessels, ducts, etc. 
The other part, the parenchyma, consists of the cells that perform the function of the tissue or organ. There are multiple ways of classifying tissues: one classification scheme is based on tissue functions and another analyzes their cellular components. Stromal tissue falls into the "functional" class that contributes to the body's support and movement. The cells which make up stroma tissues serve as a matrix in which the other cells are embedded. Stroma is made of various types of stromal cells. Examples of stroma include: stroma of iris stroma of cornea stroma of ovary stroma of thyroid gland stroma of thymus stroma of bone marrow lymph node stromal cell multipotent stromal cell (mesenchymal stem cell) Structure Stromal connective tissues are found in the stroma; this tissue belongs to the group connective tissue proper. The function of connective tissue proper is to secure the parenchymal tissue, including blood vessels and nerves of the stroma, and to construct organs and spread mechanical tension to reduce localised stress. Stromal tissue is primarily made of extracellular matrix containing connective tissue cells. Extracellular matrix is primarily composed of ground substance - a porous, hydrated gel, made mainly from proteoglycan aggregates - and connective tissue fibers. There are three types of fibers commonly found within the stroma: collagen type I, elastic, and reticular (collagen type III) fibres. Cells Wandering cells - cells that migrate into the tissue from blood stream in response to a variety of stimuli; for example, immune system blood cells causing inflammatory response. Fixed cells - cells that are permanent inhabitants of the tissue. Fibroblast - produce and secrete the organic parts of the ground substance and extrace Document 4::: This table lists the epithelia of different organs of the human body Human anatomy The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. 
What is muscle tissue that is attached to the bone called? A. cartilage B. ligament C. skeletal tisse D. epithelial tissue Answer:
sciq-1588
multiple_choice
What is the term for the class of ectothermic, four-legged vertebrates that produce amniotic eggs?
[ "reptiles", "mammals", "Turtles", "amphibians" ]
A
Relevant Documents: Document 0::: Amniotes are animals belonging to the clade Amniota, a large group of tetrapod vertebrates that comprises the vast majority of living terrestrial vertebrates. Amniotes evolved from amphibian ancestors during the Carboniferous period and further diverged into two groups, namely the sauropsids (including all reptiles and birds) and synapsids (including mammals and extinct ancestors like "pelycosaurs" and therapsids). They are distinguished from the other living tetrapod clade — the lissamphibians (frogs/toads, salamanders, newts and caecilians) — by the development of three extraembryonic membranes (amnion for embryonic protection, chorion for gas exchange, and allantois for metabolic waste disposal or storage), thicker and keratinized skin, and costal respiration (breathing by expanding/constricting the rib cage). All three main amniote features listed above, namely the presence of an amniotic buffer, water-impermeable skin and a robust air-breathing respiratory system, are very important for living on land as true terrestrial animals — the ability to survive and procreate in locations away from water bodies, better homeostasis in drier environments, and more efficient non-aquatic gas exchange to power terrestrial locomotion, although they might still require regular access to drinking water for rehydration like the semiaquatic amphibians do. 
Additional unique features are the presence of adrenocortical and chromaffin tissues as a discrete pair of glands near their kidneys, which are more complex, the presence of an astragalus for better extremity range of motion, and the complete loss o Document 1::: The "Standard Event System" (SES) to Study Vertebrate Embryos was developed in 2009 to establish a common language in comparative embryology. Homologous developmental characters are defined therein and should be recognisable in all vertebrate embryos. The SES includes a protocol on how to describe and depict vertebrate embryonic characters. The SES was initially developed for external developmental characters of organogenesis, particularly for turtle embryos. However, it is expandable both taxonomically and in regard to anatomical or molecular characters. This article should act as an overview on the species staged with SES and document the expansions of this system. New entries need to be validated based on the citation of scientific publications. The guideline on how to establish new SES-characters and to describe species can be found in the original paper of Werneburg (2009). SES-characters are used to reconstruct ancestral developmental sequences in evolution such as that of the last common ancestor of placental mammals. Also the plasticity of developmental characters can be documented and analysed. SES-staged species Overview on the vertebrate species staged with SES. SES-characters New SES-characters are continuously described in new publications. Currently, characters of organogenesis are described for Vertebrata (V), Gnathostomata (G), Tetrapoda (T), Amniota (A), Sauropsida (S), Squamata (SQ), Mammalia (M), and Monotremata (MO). In total, 166 SES-characters are currently defined. Document 2::: Early stages of embryogenesis of tailless amphibians Embryogenesis in living creatures occurs in different ways depending on class and species. 
One of the most basic criteria of such development is independence from a water habitat. Amphibians were the earliest animals to adapt themselves to a mixed environment containing both water and dry land. The embryonic development of tailless amphibians is presented below using the African clawed frog (Xenopus laevis) and the northern leopard frog (Rana pipiens) as examples. The oocyte in these frog species is a polarized cell - it has specified axes and poles. The animal pole of the cell contains pigment cells, whereas the vegetal pole (the yolk) contains most of the nutritive material. The pigment is composed of light-absorbing melanin. The sperm cell enters the oocyte in the region of the animal pole. Two blocks - defensive mechanisms meant to prevent polyspermy - occur: the fast block and the slow block. A relatively short time after fertilization, the cortical cytoplasm (located just beneath the cell membrane) rotates by 30 degrees. This results in the creation of the gray crescent. Its establishment determines the location of the dorsal and ventral (up-down) axis, as well as of the anterior and posterior (front-back) axis and the dextro-sinistral (left-right) axis of the embryo. Embryo cleavage The cleavage (cell division) of a frog’s embryo is complete and uneven, because most of the yolk is gathered in the vegetal region. The first cleavage runs across the animal-vegetal axis, dividing the gray crescent into two parts. The second cleavage also cuts through the gray crescent, although always running perpendicularly to the first one. This results in the creation of four identical blastomeres - separate cells now forming the embryo. The third cleavage runs equatorially and closer to the animal pole, thus creating blastomeres of unequal size (micromeres in the animal region and macromeres in the vegetal region). Document 3::: Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. 
With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology. Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. 
In modern times, the biological classification of animals relies on ad Document 4::: The anamniotes are an informal group of craniates comprising all fishes and amphibians, which lay their eggs in aquatic environments. They are distinguished from the amniotes (reptiles, birds and mammals), which can reproduce on dry land either by laying shelled eggs or by carrying fertilized eggs within the female. Older sources, particularly before the 20th century, may refer to anamniotes as "lower vertebrates" and amniotes as "higher vertebrates", based on the antiquated idea of the evolutionary great chain of being. The name "anamniote" is a back-formation word created by adding the prefix an- to the word amniote, which in turn refers to the amnion, an extraembryonic membrane present during the amniotes' embryonic development which serves as a biochemical barrier that shields the embryo from environmental fluctuations by regulating the oxygen, carbon dioxide and metabolic waste exchanges and secreting a cushioning fluid. As the name suggests, anamniote embryos lack an amnion during embryonic development, and therefore rely on the presence of external water to provide oxygen and help dilute and excrete waste products (particularly ammonia) via diffusion in order for the embryo to complete development without being intoxicated by their own metabolites. This means anamniotes are almost always dependent on an aqueous (or at least very moist) environment for reproduction and are thus restricted to spawning in or near water bodies. They are also highly sensitive to chemical and temperature variation in the surrounding water, and are also more vulnerable to egg predation and parasitism. 
During their life cycle, all anamniote classes pass through a completely aquatic egg stage, as well as an aquatic larval stage during which all hatchlings are gill-dependent and morphologically resemble tiny finless fish (known as a fry or a tadpole for fish and amphibians, respectively), before metamorphosing into juvenile and adult forms (which might be aquatic, semiaquatic or e The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the term for the class of ectothermic, four-legged vertebrates that produce amniotic eggs? A. reptiles B. mammals C. Turtles D. amphibians Answer:
sciq-7210
multiple_choice
How many years can dissolved carbon be stored in the deep ocean?
[ "thousands", "hundreds", "tens", "unknown" ]
A
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. 
A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 2::: The actuarial credentialing and exam process usually requires passing a rigorous series of professional examinations, most often taking several years in total, before one can become recognized as a credentialed actuary. In some countries, such as Denmark, most study takes place in a university setting. In others, such as the U.S., most study takes place during employment through a series of examinations. In the UK, and countries based on its process, there is a hybrid university-exam structure. Australia The education system in Australia is divided into three components: an exam-based curriculum; a professionalism course; and work experience. The system is governed by the Institute of Actuaries of Australia. The exam-based curriculum is in three parts. Part I relies on exemptions from an accredited under-graduate degree from either Bond University, Monash University, Macquarie University, University of New South Wales, University of Melbourne, Australian National University or Curtin University. The courses cover subjects including finance, financial mathematics, economics, contingencies, demography, models, probability and statistics. Students may also gain exemptions by passing the exams of the Institute of Actuaries in London. Part II is the Actuarial control cycle and is also offered by each of the universities above. Part III consists of four half-year courses of which two are compulsory and the other two allow specialization. 
To become an Associate, one needs to complete Part I and Part II of the accreditation process, perform 3 years of recognized work experience, and complete a professionalism course. To become a Fellow, candidates must complete Part I, II, III, and take a professionalism course. Work experience is not required, however, as the Institute deems that those who have successfully completed Part III have shown enough level of professionalism. China Actuarial exams were suspended in 2014 but reintroduced in 2023. Denmark In Denmark it normal Document 3::: Centro de Estudios Científicos (CECs; Center for Scientific Studies) is a private, non-profit corporation based in Valdivia, Chile, devoted to the development, promotion and diffusion of scientific research. CECs research areas include biophysics, molecular physiology, theoretical physics, glaciology and climate change. The centre was created in 1984 as Centro de Estudios Científicos de Santiago, with a grant of 150,000 dollars a year (for three years) from the Tinker Foundation of New York City. In 2004-2005 glaciologists from CECs organized the Chilean South Pole Expedition in collaboration with the Chilean Navy and Instituto Antártico Chileno. CECs was founded in Santiago but is since 2000 housed in the recently modernized, German-style Hotel Schuster located by Valdivia River. Claudio Bunster, a physicist and winner of Chile's National Prize for Exact Sciences, is the director of CECs. In 2014 CECs discovered what would be a subglacial lake in the West Antarctica, They investigated and concluded after a year that it is a lake, which was named Lake CECs in honor of the institution. The conclusion was published in Geophysical Research Letters on May 22, 2015. The authors of the discovery are Andrés Rivera, Jose Uribe, Rodrigo Zamora and Jonathan Oberreuter. 
Document 4::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. 
Other methods include The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How many years can dissolved carbon be stored in the deep ocean? A. thousands B. hundreds C. tens D. unknown Answer:
ai2_arc-845
multiple_choice
Chris left a glass of water on a windowsill. When he looked at the glass a few days later, some of the water had evaporated. Which of the following best describes what happened to the particles of water that evaporated?
[ "They became larger in size.", "They spread out into the air.", "They were absorbed by the glass.", "They passed through the glass into the air." ]
B
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: The Stefan flow, occasionally called Stefan's flow, is a transport phenomenon concerning the movement of a chemical species by a flowing fluid (typically in the gas phase) that is induced to flow by the production or removal of the species at an interface. Any process that adds the species of interest to or removes it from the flowing fluid may cause the Stefan flow, but the most common processes include evaporation, condensation, chemical reaction, sublimation, ablation, adsorption, absorption, and desorption. It was named after the Slovenian physicist, mathematician, and poet Josef Stefan for his early work on calculating evaporation rates. The Stefan flow is distinct from diffusion as described by Fick's law, but diffusion almost always also occurs in multi-species systems that are experiencing the Stefan flow. In systems undergoing one of the species addition or removal processes mentioned previously, the addition or removal generates a mean flow in the flowing fluid as the fluid next to the interface is displaced by the production or removal of additional fluid by the processes occurring at the interface. The transport of the species by this mean flow is the Stefan flow. When concentration gradients of the species are also present, diffusion transports the species relative to the mean flow. The total transport rate of the species is then given by a summation of the Stefan flow and diffusive contributions. An example of the Stefan flow occurs when a droplet of liquid evaporates in air. In this case, the vapor/air mixture surrounding the droplet is the flowing fluid, and liquid/vapor boundary of the droplet is the interface. 
As heat is absorbed by the droplet from the environment, some of the liquid evaporates into vapor at the surface of the droplet, and flows away from the droplet as it is displaced by additional vapor evaporating from the droplet. This process causes the flowing medium to move away from the droplet at some mean speed that is dependent on Document 2::: At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm. For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of sorption process the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product. The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system. BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials high moisture capacity at high relative humidity. 
Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the Document 3::: A breakthrough curve in adsorption is the course of the effluent adsorptive concentration at the outlet of a fixed bed adsorber. Breakthrough curves are important for adsorptive separation technologies and for the characterization of porous materials. Importance Since almost all adsorptive separation processes are dynamic -meaning, that they are running under flow - testing porous materials for those applications for their separation performance has to be tested under flow as well. Since separation processes run with mixtures of different components, measuring several breakthrough curves results in thermodynamic mixture equilibria - mixture sorption isotherms, that are hardly accessible with static manometric sorption characterization. This enables the determination of sorption selectivities in gaseous and liquid phase. The determination of breakthrough curves is the foundation of many other processes, like the pressure swing adsorption. Within this process, the loading of one adsorber is equivalent to a breakthrough experiment. Measurement A fixed bed of porous materials (e.g. activated carbons and zeolites) is pressurized and purged with a carrier gas. After becoming stationary one or more adsorptives are added to the carrier gas, resulting in a step-wise change of the inlet concentration. This is in contrast to chromatographic separation processes, where pulse-wise changes of the inlet concentrations are used. The course of the adsorptive concentrations at the outlet of the fixed bed are monitored. 
Results Integration of the area above the entire breakthrough curve gives the maximum loading of the adsorptive material. Additionally, the duration of the breakthrough experiment until a certain threshold of the adsorptive concentration at the outlet can be measured, which enables the calculation of a technically usable sorption capacity. Up to this time, the quality of the product stream can be maintained. The shape of the breakthrough curves contains informat Document 4::: Moisture expansion is the tendency of matter to change in volume in response to a change in moisture content. The macroscopic effect is similar to that of thermal expansion but the microscopic causes are very different. Moisture expansion is caused by hygroscopy. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Chris left a glass of water on a windowsill. When he looked at the glass a few days later, some of the water had evaporated. Which of the following best describes what happened to the particles of water that evaporated? A. They became larger in size. B. They spread out into the air. C. They were absorbed by the glass. D. They passed through the glass into the air. Answer:
sciq-11247
multiple_choice
Like the stem, what basic plant structure contains vascular bundles composed of xylem and phloem?
[ "flower", "root", "leaf", "bark" ]
C
Relevant Documents: Document 0::: Vascular plants (), also called tracheophytes () or collectively Tracheophyta (), form a large group of land plants ( accepted known species) that have lignified tissues (the xylem) for conducting water and minerals throughout the plant. They also have a specialized non-lignified tissue (the phloem) to conduct products of photosynthesis. Vascular plants include the clubmosses, horsetails, ferns, gymnosperms (including conifers), and angiosperms (flowering plants). Scientific names for the group include Tracheophyta, Tracheobionta and Equisetopsida sensu lato. Some early land plants (the rhyniophytes) had less developed vascular tissue; the term eutracheophyte has been used for all other vascular plants, including all living ones. Historically, vascular plants were known as "higher plants", as it was believed that they were further evolved than other plants due to being more complex organisms. However, this is an antiquated remnant of the obsolete scala naturae, and the term is generally considered to be unscientific. Characteristics Botanists define vascular plants by three primary characteristics: Vascular plants have vascular tissues which distribute resources through the plant. Two kinds of vascular tissue occur in plants: xylem and phloem. Phloem and xylem are closely associated with one another and are typically located immediately adjacent to each other in the plant. The combination of one xylem and one phloem strand adjacent to each other is known as a vascular bundle. The evolution of vascular tissue in plants allowed them to evolve to larger sizes than non-vascular plants, which lack these specialized conducting tissues and are thereby restricted to relatively small sizes. In vascular plants, the principal generation or phase is the sporophyte, which produces spores and is diploid (having two sets of chromosomes per cell). 
(By contrast, the principal generation phase in non-vascular plants is the gametophyte, which produces gametes and is haploid - with Document 1::: A stem is one of two main structural axes of a vascular plant, the other being the root. It supports leaves, flowers and fruits, transports water and dissolved substances between the roots and the shoots in the xylem and phloem, photosynthesis takes place here, stores nutrients, and produces new living tissue. The stem can also be called halm or haulm or culms. The stem is normally divided into nodes and internodes: The nodes are the points of attachment for leaves and can hold one or more leaves. There are sometimes axillary buds between the stem and leaf which can grow into branches (with leaves, conifer cones, or flowers). Adventitious roots may also be produced from the nodes. Vines may produce tendrils from nodes. The internodes distance one node from another. The term "shoots" is often confused with "stems"; "shoots" generally refers to new fresh plant growth, including both stems and other structures like leaves or flowers. In most plants, stems are located above the soil surface, but some plants have underground stems. Stems have several main functions: Support for and the elevation of leaves, flowers, and fruits. The stems keep the leaves in the light and provide a place for the plant to keep its flowers and fruits. Transport of fluids between the roots and the shoots in the xylem and phloem. Storage of nutrients. Production of new living tissue. The normal lifespan of plant cells is one to three years. Stems have cells called meristems that annually generate new living tissue. Photosynthesis. Stems have two pipe-like tissues called xylem and phloem. The xylem tissue arises from the cell facing inside and transports water by the action of transpiration pull, capillary action, and root pressure. The phloem tissue arises from the cell facing outside and consists of sieve tubes and their companion cells. 
The function of phloem tissue is to distribute food from photosynthetic tissue to other tissues. The two tissues are separated by cambium, a tis Document 2::: Xylem is one of the two types of transport tissue in vascular plants, the other being phloem. The basic function of the xylem is to transport water from roots to stems and leaves, but it also transports nutrients. The word xylem is derived from the Ancient Greek word (xylon), meaning "wood"; the best-known xylem tissue is wood, though it is found throughout a plant. The term was introduced by Carl Nägeli in 1858. Structure The most distinctive xylem cells are the long tracheary elements that transport water. Tracheids and vessel elements are distinguished by their shape; vessel elements are shorter, and are connected together into long tubes that are called vessels. Xylem also contains two other type of cells: parenchyma and fibers. Xylem can be found: in vascular bundles, present in non-woody plants and non-woody parts of woody plants in secondary xylem, laid down by a meristem called the vascular cambium in woody plants as part of a stelar arrangement not divided into bundles, as in many ferns. In transitional stages of plants with secondary growth, the first two categories are not mutually exclusive, although usually a vascular bundle will contain primary xylem only. The branching pattern exhibited by xylem follows Murray's law. Primary and secondary xylem Primary xylem is formed during primary growth from procambium. It includes protoxylem and metaxylem. Metaxylem develops after the protoxylem but before secondary xylem. Metaxylem has wider vessels and tracheids than protoxylem. Secondary xylem is formed during secondary growth from vascular cambium. 
Although secondary xylem is also found in members of the gymnosperm groups Gnetophyta and Ginkgophyta and to a lesser extent in members of the Cycadophyta, the two main groups in which secondary xylem can be found are: conifers (Coniferae): there are approximately 600 known species of conifers. All species have secondary xylem, which is relatively uniform in structure throughout this group. Many conife Document 3::: Vascular tissue is a complex conducting tissue, formed of more than one cell type, found in vascular plants. The primary components of vascular tissue are the xylem and phloem. These two tissues transport fluid and nutrients internally. There are also two meristems associated with vascular tissue: the vascular cambium and the cork cambium. All the vascular tissues within a particular plant together constitute the vascular tissue system of that plant. The cells in vascular tissue are typically long and slender. Since the xylem and phloem function in the conduction of water, minerals, and nutrients throughout the plant, it is not surprising that their form should be similar to pipes. The individual cells of phloem are connected end-to-end, just as the sections of a pipe might be. As the plant grows, new vascular tissue differentiates in the growing tips of the plant. The new tissue is aligned with existing vascular tissue, maintaining its connection throughout the plant. The vascular tissue in plants is arranged in long, discrete strands called vascular bundles. These bundles include both xylem and phloem, as well as supporting and protective cells. In stems and roots, the xylem typically lies closer to the interior of the stem with phloem towards the exterior of the stem. In the stems of some Asterales dicots, there may be phloem located inwardly from the xylem as well. Between the xylem and phloem is a meristem called the vascular cambium. This tissue divides off cells that will become additional xylem and phloem. 
This growth increases the girth of the plant, rather than its length. As long as the vascular cambium continues to produce new cells, the plant will continue to grow more stout. In trees and other plants that develop wood, the vascular cambium allows the expansion of vascular tissue that produces woody growth. Because this growth ruptures the epidermis of the stem, woody plants also have a cork cambium that develops among the phloem. The cork cambium g Document 4::: In biology, tissue is a historically derived biological organizational level between cells and a complete organ. A tissue is therefore often thought of as an assembly of similar cells and their extracellular matrix from the same embryonic origin that together carry out a specific function. Organs are then formed by the functional grouping together of multiple tissues. Biological organisms follow this hierarchy: Cells < Tissue < Organ < Organ System < Organism The English word "tissue" derives from the French word "tissu", the past participle of the verb tisser, "to weave". The study of tissues is known as histology or, in connection with disease, as histopathology. Xavier Bichat is considered as the "Father of Histology". Plant histology is studied in both plant anatomy and physiology. The classical tools for studying tissues are the paraffin block in which tissue is embedded and then sectioned, the histological stain, and the optical microscope. Developments in electron microscopy, immunofluorescence, and the use of frozen tissue-sections have enhanced the detail that can be observed in tissues. With these tools, the classical appearances of tissues can be examined in health and disease, enabling considerable refinement of medical diagnosis and prognosis. Plant tissue In plant anatomy, tissues are categorized broadly into three tissue systems: the epidermis, the ground tissue, and the vascular tissue. Epidermis – Cells forming the outer surface of the leaves and of the young plant body. 
Vascular tissue – The primary components of vascular tissue are the xylem and phloem. These transport fluids and nutrients internally. Ground tissue – Ground tissue is less differentiated than other tissues. Ground tissue manufactures nutrients by photosynthesis and stores reserve nutrients. Plant tissues can also be divided differently into two types: Meristematic tissues Permanent tissues. Meristematic tissue Meristematic tissue consists of actively dividing cell The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Like the stem, what basic plant structure contains vascular bundles composed of xylem and phloem? A. flower B. root C. leaf D. bark Answer:
sciq-1553
multiple_choice
What are monosaccharides and disaccharides also called?
[ "complex sugars", "simple sugars", "simple chemicals", "basic sugars" ]
B
Relevant Documents: Document 0::: A diose is a monosaccharide containing two carbon atoms. Because the general chemical formula of an unmodified monosaccharide is (C·H2O)n, where n is three or greater, it does not meet the formal definition of a monosaccharide. However, since it does fit the formula (C·H2O)n, it is sometimes thought of as the most basic sugar. There is only one possible diose, glycolaldehyde (2-hydroxyethanal), which is an aldodiose (a ketodiose is not possible since there are only two carbons). See also Triose Tetrose Pentose Hexose Heptose Document 1::: Structure and nomenclature Carbohydrates are generally divided into monosaccharides, oligosaccharides, and polysaccharides depending on the number of sugar subunits. Maltose, with two sugar units, is a disaccharide, which falls under oligosaccharides. Glucose is a hexose: a monosaccharide containing six carbon atoms. The two glucose units are in the pyranose form and are joined by an O-glycosidic bond, with the first carbon (C1) of the first glucose linked to the fourth carbon (C4) of the second glucose, indicated as (1→4). The link is characterized as α because the glycosidic bond to the anomeric carbon (C1) is in the opposite plane from the substituent in the same ring (C6 of the first glucose). If the glycosidic bond to the anomeric carbon (C1) were in the same plane as the substituent, it would be classified as a β(1→4) bond, and the resulting molecule would be cellobiose. The anomeric carbon (C1) of the second glucose molecule, which is not involved in a glycosidic bond, could be either an α- or β-anomer depending on the bond direction of the attached hydroxyl group relative to the substituent of the same ring, resulting in either α- Document 2::: A reducing sugar is any sugar that is capable of acting as a reducing agent. In an alkaline solution, a reducing sugar forms some aldehyde or ketone, which allows it to act as a reducing agent, for example in Benedict's reagent.
In such a reaction, the sugar becomes a carboxylic acid. All monosaccharides are reducing sugars, along with some disaccharides, some oligosaccharides, and some polysaccharides. The monosaccharides can be divided into two groups: the aldoses, which have an aldehyde group, and the ketoses, which have a ketone group. Ketoses must first tautomerize to aldoses before they can act as reducing sugars. The common dietary monosaccharides galactose, glucose and fructose are all reducing sugars. Disaccharides are formed from two monosaccharides and can be classified as either reducing or nonreducing. Nonreducing disaccharides like sucrose and trehalose have glycosidic bonds between their anomeric carbons and thus cannot convert to an open-chain form with an aldehyde group; they are stuck in the cyclic form. Reducing disaccharides like lactose and maltose have only one of their two anomeric carbons involved in the glycosidic bond, while the other is free and can convert to an open-chain form with an aldehyde group. The aldehyde functional group allows the sugar to act as a reducing agent, for example, in the Tollens' test or Benedict's test. The cyclic hemiacetal forms of aldoses can open to reveal an aldehyde, and certain ketoses can undergo tautomerization to become aldoses. However, acetals, including those found in polysaccharide linkages, cannot easily become free aldehydes. Reducing sugars react with amino acids in the Maillard reaction, a series of reactions that occurs while cooking food at high temperatures and that is important in determining the flavor of food. Also, the levels of reducing sugars in wine, juice, and sugarcane are indicative of the quality of these food products. Terminology Oxidation-reduction A reducing sugar is on Document 3::: In biochemistry, saccharification is a term for denoting any chemical change wherein a monosaccharide molecule remains intact after becoming unbound from another saccharide. 
For example, when a carbohydrate is broken into its component sugar molecules by hydrolysis (e.g., sucrose being broken down into glucose and fructose). Enzymes such as amylases (e.g. in saliva) and glycoside hydrolase (e.g. within the brush border of the small intestine) are able to perform exact saccharification through enzymatic hydrolysis. Through thermolysis, saccharification can also occur as a transient result, among many other possible effects, during caramelization. See also Glycosidic bond Glycoside hydrolase Gelation Document 4::: 2α-Mannobiose is a disaccharide. It is formed by a condensation reaction, when two mannose molecules react together, in the formation of a glycosidic bond. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are monosaccharides and disaccharides also called? A. complex sugars B. simple sugars C. simple chemicals D. basic sugars Answer:
sciq-6628
multiple_choice
What is the transfer of thermal energy by waves that can travel through empty space called?
[ "radiation", "induction", "convection", "vibration" ]
A
Relevant Documents: Document 0::: Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species (mass transfer in the form of advection), either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system. Heat conduction, also called diffusion, is the direct microscopic exchange of kinetic energy of particles (such as molecules) or quasiparticles (such as lattice waves) through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium. Such spontaneous heat transfer always occurs from a region of high temperature to another region of lower temperature, as described in the second law of thermodynamics. Heat convection occurs when the bulk flow of a fluid (gas or liquid) carries its heat through the fluid. All convective processes also move heat partly by diffusion, as well. The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". The former process is often called "forced convection." In this case, the fluid is forced to flow by use of a pump, fan, or other mechanical means. Thermal radiation occurs through a vacuum or any transparent medium (solid or fluid or gas).
It is the transfer of energy by means of photons or electromagnetic waves governed by the same laws. Overview Heat Document 1::: Thermofluids is a branch of science and engineering encompassing four intersecting fields: Heat transfer Thermodynamics Fluid mechanics Combustion The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids". Heat transfer Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer. Sections include: Energy transfer by heat, work and mass Laws of thermodynamics Entropy Refrigeration Techniques Properties and nature of pure substances Applications Engineering: Predicting and analysing the performance of machines Thermodynamics Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems. Fluid mechanics Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion.
Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance. Sections include: Flu Document 2::: Flux describes any effect that appears to pass or travel (whether it actually moves or not) through a surface or substance. Flux is a concept in applied mathematics and vector calculus which has many applications to physics. For transport phenomena, flux is a vector quantity, describing the magnitude and direction of the flow of a substance or property. In vector calculus flux is a scalar quantity, defined as the surface integral of the perpendicular component of a vector field over a surface. Terminology The word flux comes from Latin: fluxus means "flow", and fluere is "to flow". As fluxion, this term was introduced into differential calculus by Isaac Newton. The concept of heat flux was a key contribution of Joseph Fourier, in the analysis of heat transfer phenomena. His seminal treatise Théorie analytique de la chaleur (The Analytical Theory of Heat), defines fluxion as a central quantity and proceeds to derive the now well-known expressions of flux in terms of temperature differences across a slab, and then more generally in terms of temperature gradients or differentials of temperature, across other geometries. One could argue, based on the work of James Clerk Maxwell, that the transport definition precedes the definition of flux used in electromagnetism. The specific quote from Maxwell is: According to the transport definition, flux may be a single vector, or it may be a vector field / function of position. In the latter case flux can readily be integrated over a surface. By contrast, according to the electromagnetism definition, flux is the integral over a surface; it makes no sense to integrate a second-definition flux for one would be integrating over a surface twice. 
Thus, Maxwell's quote only makes sense if "flux" is being used according to the transport definition (and furthermore is a vector field rather than single vector). This is ironic because Maxwell was one of the major developers of what we now call "electric flux" and "magnetic flux" accor Document 3::: Conduction is the process by which heat is transferred from the hotter end to the colder end of an object. The ability of the object to conduct heat is known as its thermal conductivity, and is denoted . Heat spontaneously flows along a temperature gradient (i.e. from a hotter body to a colder body). For example, heat is conducted from the hotplate of an electric stove to the bottom of a saucepan in contact with it. In the absence of an opposing external driving energy source, within a body or between bodies, temperature differences decay over time, and thermal equilibrium is approached, temperature becoming more uniform. In conduction, the heat flow is within and through the body itself. In contrast, in heat transfer by thermal radiation, the transfer is often between bodies, which may be separated spatially. Heat can also be transferred by a combination of conduction and radiation. In solids, conduction is mediated by the combination of vibrations and collisions of molecules, propagation and collisions of phonons, and diffusion and collisions of free electrons. In gases and liquids, conduction is due to the collisions and diffusion of molecules during their random motion. Photons in this context do not collide with one another, and so heat transport by electromagnetic radiation is conceptually distinct from heat conduction by microscopic diffusion and collisions of material particles and phonons. But the distinction is often not easily observed unless the material is semi-transparent. In the engineering sciences, heat transfer includes the processes of thermal radiation, convection, and sometimes mass transfer. 
Usually, more than one of these processes occurs in a given situation. Overview On a microscopic scale, conduction occurs within a body considered as being stationary; this means that the kinetic and potential energies of the bulk motion of the body are separately accounted for. Internal energy diffuses as rapidly moving or vibrating atoms and molecule Document 4::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. 
In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the transfer of thermal energy by waves that can travel through empty space called? A. radiation B. induction C. convection D. vibration Answer:
sciq-11251
multiple_choice
In adaptive radiation, what is the name of the initial species that then subsequently becomes multiple other ones?
[ "pioneer", "father", "founder", "Mother" ]
C
Relevant Documents: Document 0::: An evolutionary radiation is an increase in taxonomic diversity that is caused by elevated rates of speciation, that may or may not be associated with an increase in morphological disparity. A significantly large and diverse radiation within a relatively short geologic time scale (e.g. a period or epoch) is often referred to as an explosion. Radiations may affect one clade or many, and be rapid or gradual; where they are rapid, and driven by a single lineage's adaptation to their environment, they are termed adaptive radiations. Examples Perhaps the most familiar example of an evolutionary radiation is that of placental mammals immediately after the extinction of the non-avian dinosaurs at the end of the Cretaceous, about 66 million years ago. At that time, the placental mammals were mostly small, insect-eating animals similar in size and shape to modern shrews. By the Eocene (58–37 million years ago), they had evolved into such diverse forms as bats, whales, and horses. Other familiar radiations include the Avalon Explosion, the Cambrian Explosion, the Great Ordovician Biodiversification Event, the Carboniferous-Earliest Permian Biodiversification Event, the Mesozoic–Cenozoic Radiation, the radiation of land plants after their colonisation of land, the Cretaceous radiation of angiosperms, and the diversification of insects, a radiation that has continued almost unabated since the Devonian. Types Adaptive radiations involve an increase in a clade's speciation rate coupled with divergence of morphological features that are directly related to ecological habits; these radiations involve speciation not driven by geographic factors and occurring in sympatry; they also may be associated with the acquisition of a key trait.
Nonadaptive radiations arguably encompass every type of evolutionary radiation that is not an adaptive radiation, although when a more precise mechanism is known to drive diversity, it can be useful to refer to the pattern as, e.g., a geographic r Document 1::: Plant evolution is the subset of evolutionary phenomena that concern plants. Evolutionary phenomena are characteristics of populations that are described by averages, medians, distributions, and other statistical methods. This distinguishes plant evolution from plant development, a branch of developmental biology which concerns the changes that individuals go through in their lives. The study of plant evolution attempts to explain how the present diversity of plants arose over geologic time. It includes the study of genetic change and the consequent variation that often results in speciation, one of the most important types of radiation into taxonomic groups called clades. A description of radiation is called a phylogeny and is often represented by type of diagram called a phylogenetic tree. Evolutionary trends Differences between plant and animal physiology and reproduction cause minor differences in how they evolve. One major difference is the totipotent nature of plant cells, allowing them to reproduce asexually much more easily than most animals. They are also capable of polyploidy – where more than two chromosome sets are inherited from the parents. This allows relatively fast bursts of evolution to occur, for example by the effect of gene duplication. The long periods of dormancy that seed plants can employ also makes them less vulnerable to extinction, as they can "sit out" the tough periods and wait until more clement times to leap back to life. The effect of these differences is most profoundly seen during extinction events. These events, which wiped out between 6 and 62% of terrestrial animal families, had "negligible" effect on plant families. 
However, the ecosystem structure is significantly rearranged, with the abundances and distributions of different groups of plants changing profoundly. These effects are perhaps due to the higher diversity within families, as extinction – which was common at the species level – was very selective. For example, win Document 2::: In evolutionary biology, adaptive radiation is a process in which organisms diversify rapidly from an ancestral species into a multitude of new forms, particularly when a change in the environment makes new resources available, alters biotic interactions or opens new environmental niches. Starting with a single ancestor, this process results in the speciation and phenotypic adaptation of an array of species exhibiting different morphological and physiological traits. The prototypical example of adaptive radiation is finch speciation on the Galapagos ("Darwin's finches"), but examples are known from around the world. Characteristics Four features can be used to identify an adaptive radiation: A common ancestry of component species: specifically a recent ancestry. Note that this is not the same as a monophyly in which all descendants of a common ancestor are included. A phenotype-environment correlation: a significant association between environments and the morphological and physiological traits used to exploit those environments. Trait utility: the performance or fitness advantages of trait values in their corresponding environments. Rapid speciation: presence of one or more bursts in the emergence of new species around the time that ecological and phenotypic divergence is underway. Conditions Adaptive radiations are thought to be triggered by an ecological opportunity or a new adaptive zone. Sources of ecological opportunity can be the loss of antagonists (competitors or predators), the evolution of a key innovation or dispersal to a new environment. 
Any one of these ecological opportunities has the potential to result in an increase in population size and relaxed stabilizing (constraining) selection. As genetic diversity is positively correlated with population size the expanded population will have more genetic diversity compared to the ancestral population. With reduced stabilizing selection phenotypic diversity can also increase. In addition, intraspecific Document 3::: This is a list of topics in evolutionary biology. A abiogenesis – adaptation – adaptive mutation – adaptive radiation – allele – allele frequency – allochronic speciation – allopatric speciation – altruism – : anagenesis – anti-predator adaptation – applications of evolution – aposematism – Archaeopteryx – aquatic adaptation – artificial selection – atavism B Henry Walter Bates – biological organisation – Brassica oleracea – breed C Cambrian explosion – camouflage – Sean B. Carroll – catagenesis – gene-centered view of evolution – cephalization – Sergei Chetverikov – chronobiology – chronospecies – clade – cladistics – climatic adaptation – coalescent theory – co-evolution – co-operation – coefficient of relationship – common descent – convergent evolution – creation–evolution controversy – cultivar – conspecific song preference D Darwin (unit) – Charles Darwin – Darwinism – Darwin's finches – Richard Dawkins – directed mutagenesis – Directed evolution – directional selection – Theodosius Dobzhansky – dog breeding – domestication – domestication of the horse E E. 
coli long-term evolution experiment – ecological genetics – ecological selection – ecological speciation – Endless Forms Most Beautiful – endosymbiosis – error threshold (evolution) – evidence of common descent – evolution – evolutionary arms race – evolutionary capacitance Evolution: of ageing – of the brain – of cetaceans – of complexity – of dinosaurs – of the eye – of fish – of the horse – of insects – of human intelligence – of mammalian auditory ossicles – of mammals – of monogamy – of sex – of sirenians – of tetrapods – of the wolf evolutionary developmental biology – evolutionary dynamics – evolutionary game theory – evolutionary history of life – evolutionary history of plants – evolutionary medicine – evolutionary neuroscience – evolutionary psychology – evolutionary radiation – evolutionarily stable strategy – evolutionary taxonomy – evolutionary tree – evolvability – experimental evol Document 4::: Adaptive type – in evolutionary biology – is any population or taxon which have the potential for a particular or total occupation of given free of underutilized home habitats or position in the general economy of nature. In evolutionary sense, the emergence of new adaptive type is usually a result of adaptive radiation certain groups of organisms in which they arise categories that can effectively exploit temporary, or new conditions of the environment. Such evolutive units with its distinctive – morphological and anatomical, physiological and other characteristics, i.e. genetic and adjustments (feature) have a predisposition for an occupation certain home habitats or position in the general nature economy. Simply, the adaptive type is one group organisms whose general biological properties represent a key to open the entrance to the observed adaptive zone in the observed natural ecological complex. Adaptive types are spatially and temporally specific. 
Since the general biological properties of these types are substantially genetically defined, the emergence of a new adaptive type requires a corresponding change in population genetic structure, reflecting the eternal contradiction between the need to be optimally adapted to the conditions of the living environment and the need to maintain genetic variation for survival under possible new circumstances. For example, the specific place in the economy of nature occupied by the human type existed millions of years before humans appeared. However, once the evolution of primates (order Primates) reached a level able to occupy that position, it was opened, followed by an unprecedented and still-accelerating spread across the living world. Culture, in the broadest sense, is the key adaptation by which the adaptive type Homo sapiens occupies its existing adaptive zone through work, also in the broadest sense of the term. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In adaptive radiation, what is the name of the initial species that then subsequently becomes multiple other ones? A. pioneer B. father C. founder D. Mother Answer:
sciq-3951
multiple_choice
Where does much of the blood that enters the atria flow?
[ "muscles", "ventricles", "arteries", "lungs" ]
B
Relevant Documents: Document 0::: Great vessels are the large vessels that bring blood to and from the heart. These are: Superior vena cava Inferior vena cava Pulmonary arteries Pulmonary veins Aorta Transposition of the great vessels is a group of congenital heart defects involving an abnormal spatial arrangement of any of the great vessels. Document 1::: Pulmocutaneous circulation is part of the amphibian circulatory system. It is responsible for directing blood to the skin and lungs. Blood flows from the ventricle into an artery called the conus arteriosus and from there into either the left or right truncus arteriosus. They in turn each split the ventricle's output into the pulmocutaneous circuit and the systemic circuit. See also Double circulatory system Document 2::: Veins are blood vessels in the circulatory system of humans and most other animals that carry blood toward the heart. Most veins carry deoxygenated blood from the tissues back to the heart; exceptions are those of the pulmonary and fetal circulations which carry oxygenated blood to the heart. In the systemic circulation arteries carry oxygenated blood away from the heart, and veins return deoxygenated blood to the heart, in the deep veins. There are three sizes of veins, large, medium, and small. Smaller veins are called venules, and the smallest, the post-capillary venules, are microscopic vessels that make up the veins of the microcirculation. Veins are often closer to the skin than arteries. Veins have less smooth muscle and connective tissue and wider internal diameters than arteries. Because of their thinner walls and wider lumens they are able to expand and hold more blood. This greater capacity gives them the term of capacitance vessels. At any time, nearly 70% of the total volume of blood in the human body is in the veins. In medium and large sized veins the flow of blood is maintained by one-way (unidirectional) venous valves to prevent backflow.
In the lower limbs this is also aided by muscle pumps, also known as venous pumps that exert pressure on intramuscular veins when they contract and drive blood back to the heart. Structure There are three sizes of vein, large, medium, and small. Smaller veins are called venules. The smallest veins are the post-capillary venules. Veins have a similar three-layered structure to arteries. The layers known as tunicae have a concentric arrangement that forms the wall of the vessel. The outer layer, is a thick layer of connective tissue called the tunica externa or adventitia; this layer is absent in the post-capillary venules. The middle layer, consists of bands of smooth muscle and is known as the tunica media. The inner layer, is a thin lining of endothelium known as the tunica intima. The tunica media in the veins is mu Document 3::: The pulmonary veins are the veins that transfer oxygenated blood from the lungs to the heart. The largest pulmonary veins are the four main pulmonary veins, two from each lung that drain into the left atrium of the heart. The pulmonary veins are part of the pulmonary circulation. Structure There are four main pulmonary veins, two from each lung – an inferior and a superior main vein, emerging from each hilum. The main pulmonary veins receive blood from three or four feeding veins in each lung, and drain into the left atrium. The peripheral feeding veins do not follow the bronchial tree. They run between the pulmonary segments from which they drain the blood. At the root of the lung, the right superior pulmonary vein lies in front of and a little below the pulmonary artery; the inferior is situated at the lowest part of the lung hilum. Behind the pulmonary artery is the bronchus. The right main pulmonary veins (contains oxygenated blood) pass behind the right atrium and superior vena cava; the left in front of the descending thoracic aorta. 
Variation
Occasionally the three lobar veins on the right side remain separate, and not infrequently the two left lobar veins end by a common opening into the left atrium. Therefore, the number of pulmonary veins opening into the left atrium can vary between three and five in the healthy population. The two left lobar veins may be united as a single pulmonary vein in about 25% of people; the two right veins may be united in about 3%.
Function
The pulmonary veins play an essential role in respiration, by receiving blood that has been oxygenated in the alveoli and returning it to the left atrium.
Clinical significance
As part of the pulmonary circulation they carry oxygenated blood back to the heart, as opposed to the veins of the systemic circulation, which carry deoxygenated blood. On chest X-ray, the diameters of pulmonary veins increase from upper to lower lobes, from 3 mm at the first intercostal space, to 6 mm jus
Document 4::: In haemodynamics, the body must respond to physical activities, external temperature, and other factors by homeostatically adjusting its blood flow to deliver nutrients such as oxygen and glucose to stressed tissues and allow them to function. Haemodynamic response (HR) allows the rapid delivery of blood to active neuronal tissues. The brain consumes large amounts of energy but does not have a reservoir of stored energy substrates. Since higher processes in the brain occur almost constantly, cerebral blood flow is essential for the maintenance of neurons, astrocytes, and other cells of the brain. This coupling between neuronal activity and blood flow is also referred to as neurovascular coupling.
Vascular anatomy overview
In order to understand how blood is delivered to cranial tissues, it is important to understand the vascular anatomy of the space itself. Large cerebral arteries in the brain split into smaller arterioles, also known as pial arteries.
These consist of endothelial cells and smooth muscle cells, and as these pial arteries further branch and run deeper into the brain, they associate with glial cells, namely astrocytes. The intracerebral arterioles and capillaries are unlike systemic arterioles and capillaries in that they do not readily allow substances to diffuse through them; they are connected by tight junctions in order to form the blood brain barrier (BBB). Endothelial cells, smooth muscle, neurons, astrocytes, and pericytes work together in the brain in order to maintain the BBB while still delivering nutrients to tissues and adjusting blood flow in the intracranial space to maintain homeostasis. As they work as a functional neurovascular unit, alterations in their interactions at the cellular level can impair HR in the brain and lead to deviations in normal nervous function.
Mechanisms
Various cell types play a role in HR, including astrocytes, smooth muscle cells, endothelial cells of blood vessels, and pericytes. These cells control whether th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Where does much of the blood that enters the atria flow? A. muscles B. ventricles C. arteries D. lungs Answer:
sciq-3444
multiple_choice
What is the most important process for the survival of a species?
[ "digestion", "differentiation", "metabolism", "reproduction" ]
D
Relevant Documents:
Document 0::: Biological processes are those processes that are vital for an organism to live, and that shape its capacities for interacting with its environment. Biological processes are made of many chemical reactions or other events that are involved in the persistence and transformation of life forms. Metabolism and homeostasis are examples. Biological processes within an organism can also work as bioindicators. Scientists are able to look at an individual's biological processes to monitor the effects of environmental changes. Regulation of biological processes occurs when any process is modulated in its frequency, rate or extent. Biological processes are regulated by many means; examples include the control of gene expression, protein modification or interaction with a protein or substrate molecule.
Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature.
Organization: being structurally composed of one or more cells – the basic units of life.
Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life.
Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter.
Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
Reproduction: the ability to produce new individual organisms, either asexually from a single parent organism or sexually from two parent organisms.
Interaction between organisms. the processes Document 1::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. 
Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 2::: This glossary of biology terms is a list of definitions of fundamental terms and concepts used in biology, the study of life and of living organisms. It is intended as introductory material for novices; for more specific and technical definitions from sub-disciplines and related fields, see Glossary of cell biology, Glossary of genetics, Glossary of evolutionary biology, Glossary of ecology, Glossary of environmental science and Glossary of scientific naming, or any of the organism-specific glossaries in :Category:Glossaries of biology. A B C D E F G H I J K L M N O P R S T U V W X Y Z Related to this search Index of biology articles Outline of biology Glossaries of sub-disciplines and related fields: Glossary of botany Glossary of ecology Glossary of entomology Glossary of environmental science Glossary of genetics Glossary of ichthyology Glossary of ornithology Glossary of scientific naming Glossary of speciation Glossary of virology Document 3::: Merriam-Webster defines chemotaxonomy as the method of biological classification based on similarities and dissimilarity in the structure of certain compounds among the organisms being classified. Advocates argue that, as proteins are more closely controlled by genes and less subjected to natural selection than the anatomical features, they are more reliable indicators of genetic relationships. The compounds studied most are proteins, amino acids, nucleic acids, peptides etc. Physiology is the study of working of organs in a living being. Since working of the organs involves chemicals of the body, these compounds are called biochemical evidences. The study of morphological change has shown that there are changes in the structure of animals which result in evolution. 
When changes take place in the structure of a living organism, they will naturally be accompanied by changes in the physiological or biochemical processes. John Griffith Vaughan was one of the pioneers of chemotaxonomy. Biochemical products The body of any animal in the animal kingdom is made up of a number of chemicals. Of these, only a few biochemical products have been taken into consideration to derive evidence for evolution. Protoplasm: Every living cell, from a bacterium to an elephant, from grasses to the blue whale, has protoplasm. Though the complexity and constituents of the protoplasm increases from lower to higher living organism, the basic compound is always the protoplasm. Evolutionary significance: From this evidence, it is clear that all living things have a common origin point or a common ancestor, which in turn had protoplasm. Its complexity increased due to changes in the mode of life and habitat. Nucleic acids: DNA and RNA are the two types of nucleic acids present in all living organisms. They are present in the chromosomes. The structure of these acids has been found to be similar in all animals. DNA always has two chains forming a double helix, and each chain is made up of nuc Document 4::: Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. 
In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology. The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis. Subfields Evolution is the central unifying concept in biology. Biology can be divided into various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the most important process for the survival of a species? A. digestion B. differentiation C. metabolism D. reproduction Answer:
sciq-9573
multiple_choice
The anterior muscles of the neck facilitate swallowing and what else?
[ "hearing", "perspiration", "speech", "crying" ]
C
Relevant Documents:
Document 0::: Swallowing, sometimes called deglutition in scientific contexts, is the process in the human or animal body that allows for a substance to pass from the mouth, to the pharynx, and into the esophagus, while shutting the epiglottis. Swallowing is an important part of eating and drinking. If the process fails and the material (such as food, drink, or medicine) goes through the trachea, then choking or pulmonary aspiration can occur. In the human body the automatic temporary closing of the epiglottis is controlled by the swallowing reflex. The portion of food, drink, or other material that will move through the neck in one swallow is called a bolus. In colloquial English, the term "swallowing" is also used to describe the action of taking in a large mouthful of food without any biting, where the word gulping is more appropriate.
In humans
Swallowing comes so easily to most people that the process rarely prompts much thought. However, from the viewpoints of physiology, of speech–language pathology, and of health care for people with difficulty in swallowing (dysphagia), it is an interesting topic with extensive scientific literature.
Coordination and control
Eating and swallowing are complex neuromuscular activities consisting essentially of three phases: an oral, pharyngeal and esophageal phase. Each phase is controlled by a different neurological mechanism. The oral phase, which is entirely voluntary, is mainly controlled by the medial temporal lobes and limbic system of the cerebral cortex with contributions from the motor cortex and other cortical areas. The pharyngeal swallow is started by the oral phase and subsequently is coordinated by the swallowing center in the medulla oblongata and pons. The reflex is initiated by touch receptors in the pharynx as a bolus of food is pushed to the back of the mouth by the tongue, or by stimulation of the palate (palatal reflex).
Swallowing is a complex mechanism using both skeletal muscle (tongue) and smooth muscles of the p Document 1::: A posterior or subscapular group of six or seven glands is placed along the lower margin of the posterior wall of the axilla in the course of the subscapular artery. The afferents of this group drain the skin and muscles of the lower part of the back of the neck and of the posterior thoracic wall; their efferents pass to the central group of axillary glands. Additional images Document 2::: Oral myology (also known as "orofacial myology") is the field of study that involves the evaluation and treatment (known as "orofacial myofunctional therapy") of the oral and facial musculature, including the muscles of the tongue, lips, cheeks, and jaw. Use Orofacial myofunctional therapy treatment is most commonly used to retrain oral rest posture, swallowing patterns in the oral phase, and speech. Tongue thrust and thumb sucking A major focus of the field of oral myology and treatment of orofacial myofunctional disorders include tongue posture and establishing equilibrium between the tongue, lips and the cheek muscles. Tongue exercise proved to be successful in treating tongue thrust. Tongue exercise alone was reported to be successful in cessation of thumb sucking and treatment of anterior open bite malocclusion. When the tongue rests against the palate it begins to expand the maxilla by applying a slow and consistent force to the lingual (tongue side) surfaces of the teeth. This may aid in the treatment of crooked teeth and under-developed face. Sleep apnea and snoring Oral myology plays also an important role in the management of patients with sleep breathing disorders and snoring where oropharyngeal exercises were found to reduce the severity and primary symptoms of obstructive sleep apnea. Poor positioning of the tongue affects breathing and allows a series of events to occur that can affect the orofacial complex. 
Patients with sleep apnea and other breathing difficulties usually have decreased tone and mobility in the cheek, tongue, lip, and soft palate, and sensory alterations due to a tendency to engage in mouth breathing rather than nasal breathing. In treatment of sleep apnea, oral myology therapy involves a series of exercises designed to improve tongue position and tongue function for a better control of the extrinsic tongue muscles and place the tongue in a ‘‘proper posture during function and at rest.’’ Dysphagia Disruption of normal swallowi Document 3::: The frontalis muscle () is a muscle which covers parts of the forehead of the skull. Some sources consider the frontalis muscle to be a distinct muscle. However, Terminologia Anatomica currently classifies it as part of the occipitofrontalis muscle along with the occipitalis muscle. In humans, the frontalis muscle only serves for facial expressions. The frontalis muscle is supplied by the facial nerve and receives blood from the supraorbital and supratrochlear arteries. Structure The frontalis muscle is thin, of a quadrilateral form, and intimately adherent to the superficial fascia. It is broader than the occipitalis and its fibers are longer and paler in color. It is located on the front of the head. The muscle has no bony attachments. Its medial fibers are continuous with those of the procerus; its intermediate fibers blend with the corrugator and orbicularis oculi muscles, thus attached to the skin of the eyebrows; and its lateral fibers are also blended with the latter muscle over the zygomatic process of the frontal bone. From these attachments the fibers are directed upward, and join the galea aponeurotica below the coronal suture. The medial margins of the frontalis muscles are joined together for some distance above the root of the nose; but between the occipitales there is a considerable, though variable, interval, occupied by the galea aponeurotica. 
Function In humans, the frontalis muscle only serves for facial expressions. In the eyebrows, its primary function is to lift them (thus opposing the orbital portion of the orbicularis), especially when looking up. It also acts when a view is too distant or dim. The frontalis muscle also serves to wrinkle the forehead. Additional images See also Occipitofrontalis muscle Document 4::: An apical (or medial or subclavicular) group of six to twelve glands is situated partly posterior to the upper portion of the Pectoralis minor and partly above the upper border of this muscle. Its only direct territorial afferents are those that accompany the cephalic vein, and one that drains the upper peripheral part of the mamma. However, it receives the efferents of all the other axillary glands. The efferent vessels of the subclavicular group unite to form the subclavian trunk, which opens either directly into the junction of the internal jugular and subclavian veins or into the jugular lymphatic trunk; on the left side it may end in the thoracic duct. A few efferents from the subclavicular glands usually pass to the inferior deep cervical glands. Additional images The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The anterior muscles of the neck facilitate swallowing and what else? A. hearing B. perspiration C. speech D. crying Answer:
ai2_arc-457
multiple_choice
Which action best models the motion of an arm at the elbow joint?
[ "opening a drawer", "slicing an apple", "closing a car door", "pushing a wheelbarrow" ]
C
Relevant Documents:
Document 0::: A motor skill is a function that involves specific movements of the body's muscles to perform a certain task. These tasks could include walking, running, or riding a bike. In order to perform this skill, the body's nervous system, muscles, and brain have to all work together. The goal of motor skill is to optimize the ability to perform the skill at a high rate of success and precision, and to reduce the energy consumption required for performance. Performance is the act of executing a motor skill or task. Continuous practice of a specific motor skill will result in a greatly improved performance, which leads to motor learning. Motor learning is a relatively permanent change in the ability to perform a skill as a result of continuous practice or experience.
A fundamental movement skill is a developed ability to move the body in coordinated ways to achieve consistent performance at demanding physical tasks, such as found in sports, combat or personal locomotion, especially those unique to humans, such as ice skating, skateboarding, kayaking, or horseback riding. Movement skills generally emphasize stability, balance, and a coordinated muscular progression from prime movers (legs, hips, lower back) to secondary movers (shoulders, elbow, wrist) when conducting explosive movements, such as throwing a baseball. In most physical training, development of core musculature is a central focus. In the athletic context, fundamental movement skills draw upon human physiology and sport psychology.
Types of motor skills
Motor skills are movements and actions of the muscles. There are two major groups of motor skills:
Gross motor skills – require the use of large muscle groups in our legs, torso, and arms to perform tasks such as walking, balancing, and crawling. The skill required is not extensive and these skills are therefore usually associated with continuous tasks. Much of the development of these skills occurs during early childhood.
We use our gross motor skills on a daily basis without putt Document 1::: An armature is a kinematic chain used in computer animation to simulate the motions of virtual human or animal characters. In the context of animation, the inverse kinematics of the armature is the most relevant computational algorithm. There are two types of digital armatures: Keyframing (stop-motion) armatures and real-time (puppeteering) armatures. Keyframing armatures were initially developed to assist in animating digital characters without basing the movement on a live performance. The animator poses a device manually for each keyframe, while the character in the animation is set up with a mechanical structure equivalent to the armature. The device is connected to the animation software through a driver program and each move is recorded for a particular frame in time. Real-time armatures are similar, but they are puppeteered by one or more people and captured in real time. See also Linkages Skeletal animation Document 2::: In mechanical engineering, a kinematic diagram or kinematic scheme (also called a joint map or skeleton diagram) illustrates the connectivity of links and joints of a mechanism or machine rather than the dimensions or shape of the parts. Often links are presented as geometric objects, such as lines, triangles or squares, that support schematic versions of the joints of the mechanism or machine. For example, the figures show the kinematic diagrams (i) of the slider-crank that forms a piston and crank-shaft in an engine, and (ii) of the first three joints for a PUMA manipulator. |- style="text-align:center;" | || |- style="text-align:center;" | PUMA robot || and its kinematic diagram Linkage graph A kinematic diagram can be formulated as a graph by representing the joints of the mechanism as vertices and the links as edges of the graph. This version of the kinematic diagram has proven effective in enumerating kinematic structures in the process of machine design. 
An important consideration in this design process is the degree of freedom of the system of links and joints, which is determined using the Chebychev–Grübler–Kutzbach criterion. Elements of machines Elements of kinematics diagrams include the frame, which is the frame of reference for all the moving components, as well as links (kinematic pairs), and joints. Primary Joints include pins, sliders and other elements that allow pure rotation or pure linear motion. Higher order joints also exist that allow a combination of rotation or linear motion. Kinematic diagrams also include points of interest, and other important components. See also Free body diagram Kinematic synthesis Left-hand–right-hand activity chart Document 3::: The emulation theory of representation postulates that there are multiple internal modeling circuitries in the brain referred to as emulators. These emulators mimic the input-output patterns of many cognitive operations including action, perception, and imagery. Often running in parallel, these emulators provide resultant feedback in the form of mock sensory signals of a motor command with less delay than sensors. These forward models receive efference copies of input motor commands being sent to the body and the resulting output sensory signals. Emulators are continually updating so as to give the most accurate anticipatory signal following motor inputs. Mechanics and structure Little is known about the overall structure of emulators. It could operate like a search glossary with a very large associative memory of input-output sequences. Under this system, the emulator receives a motor command input, finds the closest matching input from its database, and then sends the associated output in that sequence. The other model is an articulated emulator. This model requires that for each significant sensor of the human musculoskeletal system there is a group of neurons with a parallel firing frequency within the emulator. 
These groups of neurons would receive the same input as that being sent to their corresponding part of the musculoskeletal system. For example, when raising one's hand signals will be sent to neurons responsible for wrist, elbow, and shoulder angles and arm angular inertia. Regardless of this structure both systems will grow and change over time. This is due to constant, fluctuating noise from the environment and the fact that the body changes over time. Growing limbs and muscles result in changes in both required input commands and the resulting output. This requires a degree of plasticity in the emulators. Emulators are thus continually updating, always receiving the resulting output from the musculoskeletal system from an inputted command and compa Document 4::: A Contact Region is a concept in robotics which describes the region between an object and a robot’s end effector. This is used in object manipulation planning, and with the addition of sensors built into the manipulation system, can be used to produce a surface map or contact model of the object being grasped. In Robotics For a robot to autonomously grasp an object, it is necessary for the robot to have an understanding of its own construction and movement capabilities (described through the math of inverse kinematics), and an understanding of the object to be grasped. The relationship between these two is described through a contact model, which is a set of the potential points of contact between the robot and the object being grasped. This, in turn, is used to create a more concrete mathematical representation of the grasp to be attempted, which can then be computed through path planning techniques and executed. In Mathematics Depending on the complexity of the end effector, or through usage of external sensors such as a Lidar or Depth camera, a more complex model of the planes involved in the object being grasped can be produced. 
In particular, sensors embedded in the fingertips of an end effector have been demonstrated to be an effective approach for producing a surface map from a given contact region. Through knowledge of the robot's position of each individual finger, the location of the sensors in each finger, and the amount of force being exerted by the object onto each sensor, points of contact can be calculated. These points of contact can then be turned into a three-dimensional ellipsoid, producing a surface map of the object.
Applications
In-hand manipulation is a typical use case. A robot hand interacts with static and deformable objects, described with soft-body dynamics. Sometimes, additional tools have to be controlled by the robot hand, for example a screwdriver. Such interaction produces a complex situation in which the robot hand has similar c
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which action best models the motion of an arm at the elbow joint? A. opening a drawer B. slicing an apple C. closing a car door D. pushing a wheelbarrow Answer:
sciq-4197
multiple_choice
Changes in the color of the statue of liberty owe to oxidation-reduction reactions, or what simpler term?
[ "copper", "oxygen", "immersion", "corrosion" ]
D
Relevant Documents:
Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
1. increases
2. decreases
3. stays the same
4. Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Colored music notation is a technique used to facilitate enhanced learning in young music students by adding visual color to written musical notation. It is based upon the concept that color can affect the observer in various ways, and combines this with standard learning of basic notation. Basis Viewing color has been widely shown to change an individual's emotional state and stimulate neurons. The Lüscher color test observes from experiments that when individuals are required to contemplate pure red for varying lengths of time, [the experiments] have shown that this color decidedly has a stimulating effect on the nervous system; blood pressure increases, and respiration rate and heart rate both increase. Pure blue, on the other hand, has the reverse effect; observers experience a decline in blood pressure, heart rate, and breathing. Given these findings, it has been suggested that the influence of colored musical notation would be similar. Music education In music education, color is typically used in method books to highlight new material. Stimuli received through several senses excite more neurons in several localized areas of the cortex, thereby reinforcing the learning process and improving retention. This information has been proven by other researchers; Chute (1978) reported that "elementary students who viewed a colored version of an instructional film scored significantly higher on both immediate and delayed tests than did students who viewed a monochrome version". Color studies Effect on achievement A researcher in this field, George L. Rogers is the Director of Music Education at Westfield State College. He is also the author of 25 articles in publications that include the Music Educators Journal, The Instrumentalist, and the Journal of Research in Music Education. 
In 1991, George L. Rogers conducted a study of the effect of color-coded notation on music achievement of elementary instrumental students. Rogers states that the color-co Document 2::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. 
Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include Document 3::: A stain is a discoloration that can be clearly distinguished from the surface, material, or medium it is found upon. Stains are caused by the chemical or physical interaction of two dissimilar materials. Accidental staining may make materials appear used, degraded or permanently unclean. Intentional staining is used in biochemical research and for artistic effect, such as wood staining, rust staining and stained glass. Types There can be intentional stains (such as wood stains or paint), indicative stains (such as food coloring dye, or adding a substance to make bacteria visible under a microscope), natural stains (such as rust on iron or a patina on bronze), and accidental stains such as ketchup and synthetic oil on clothing. Different types of material can be stained by different substances, and stain resistance is an important characteristic in modern textile engineering. Formation The primary method of stain formation is surface stains, where the staining substance is spilled out onto the surface or material and is trapped in the fibers, pores, indentations, or other capillary structures on the surface. The material that is trapped coats the underlying material, and the stain reflects back light according to its own color. Applying paint, spilled food, and wood stains are of this nature. A secondary method of stain involves a chemical or molecular reaction between the material and the staining material. Many types of natural stains fall into this category. Finally, there can also be molecular attraction between the material and the staining material, in which the staining material is held by covalent bonding and shows the color of the bound substance. 
Properties In many cases, stains are affected by heat and may become reactive enough to bond with the underlying material. Applied heat, such as from ironing, dry cleaning or sunlight, can cause a chemical reaction that permanently sets an otherwise removable stain. Removal Various laundry techniques exist to attempt t Document 4::: The Force Concept Inventory is a test, developed by Hestenes, Halloun, Wells, and Swackhamer (1985), that measures mastery of concepts commonly taught in a first semester of physics. It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Changes in the color of the Statue of Liberty are due to oxidation-reduction reactions, known by what simpler term? A. copper B. oxygen C. immersion D. corrosion Answer:
ai2_arc-903
multiple_choice
Which of the following is NOT a description of compounds?
[ "They can exist in the form of atoms or molecules.", "They consist of atoms of two or more elements bonded together.", "They have properties that are different from their component elements.", "They can be broken down into elements by chemical means but not physical means." ]
A
Relevant Documents: Document 0::: In chemical nomenclature, the IUPAC nomenclature of organic chemistry is a method of naming organic chemical compounds as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in the Nomenclature of Organic Chemistry (informally called the Blue Book). Ideally, every possible organic compound should have a name from which an unambiguous structural formula can be created. There is also an IUPAC nomenclature of inorganic chemistry. To avoid long and tedious names in normal communication, the official IUPAC naming recommendations are not always followed in practice, except when it is necessary to give an unambiguous and absolute definition to a compound. IUPAC names can sometimes be simpler than older names, as with ethanol, instead of ethyl alcohol. For relatively simple molecules they can be more easily understood than non-systematic names, which must be learnt or looked up. However, the common or trivial name is often substantially shorter and clearer, and so preferred. These non-systematic names are often derived from an original source of the compound. Also, very long names may be less clear than structural formulas. Basic principles In chemistry, a number of prefixes, suffixes and infixes are used to describe the type and position of the functional groups in the compound. The steps for naming an organic compound are: Identification of the parent hydride (parent hydrocarbon chain). This chain must obey the following rules, in order of precedence: It should have the maximum number of substituents of the suffix functional group. By suffix, it is meant that the parent functional group should have a suffix, unlike halogen substituents. If more than one functional group is present, the one with highest group precedence should be used. It should have the maximum number of multiple bonds. It should have the maximum length. 
It should have the maximum number of substituents or branches cited as prefixes. It should have the ma Document 1::: This is an index of lists of molecules (i.e. by year, number of atoms, etc.). Millions of molecules have existed in the universe since before the formation of Earth. Three of them, carbon dioxide, water, and oxygen, were necessary for the growth of life. Although humanity had always been surrounded by these substances, it has not always known what they were composed of. By century The following is an index of lists of molecules organized by time of discovery of their molecular formula or their specific molecule in the case of isomers: List of compounds By number of carbon atoms in the molecule List of compounds with carbon number 1 List of compounds with carbon number 2 List of compounds with carbon number 3 List of compounds with carbon number 4 List of compounds with carbon number 5 List of compounds with carbon number 6 List of compounds with carbon number 7 List of compounds with carbon number 8 List of compounds with carbon number 9 List of compounds with carbon number 10 List of compounds with carbon number 11 List of compounds with carbon number 12 List of compounds with carbon number 13 List of compounds with carbon number 14 List of compounds with carbon number 15 List of compounds with carbon number 16 List of compounds with carbon number 17 List of compounds with carbon number 18 List of compounds with carbon number 19 List of compounds with carbon number 20 List of compounds with carbon number 21 List of compounds with carbon number 22 List of compounds with carbon number 23 List of compounds with carbon number 24 List of compounds with carbon numbers 25-29 List of compounds with carbon numbers 30-39 List of compounds with carbon numbers 40-49 List of compounds with carbon numbers 50+ Other lists List of interstellar and circumstellar molecules List of gases List of molecules with unusual names See also Molecule Empirical formula Chemical formula 
Chemical structure Chemical compound Chemical bond Coordination complex L Document 2::: A chemical bonding model is a theoretical model used to explain atomic bonding structure, molecular geometry, properties, and reactivity of physical matter. This can refer to: VSEPR theory, a model of molecular geometry. Valence bond theory, which describes molecular electronic structure with localized bonds and lone pairs. Molecular orbital theory, which describes molecular electronic structure with delocalized molecular orbitals. Crystal field theory, an electrostatic model for transition metal complexes. Ligand field theory, the application of molecular orbital theory to transition metal complexes. Chemical bonding Document 3::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
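Stepping back to the parent-chain selection rules in Document 0 of this record: applied in order of precedence, they amount to taking a lexicographic maximum over candidate chains. A minimal Python sketch of that idea; the `Chain` record and its pre-computed counts are illustrative assumptions, not part of any real cheminformatics API:

```python
# Pick the IUPAC parent chain by the precedence rules quoted above:
# most suffix-group substituents, then most multiple bonds, then greatest length.
from dataclasses import dataclass

@dataclass
class Chain:
    suffix_groups: int   # e.g. number of -OH groups when naming an -ol
    multiple_bonds: int  # double plus triple bonds along the chain
    length: int          # number of carbon atoms

def parent_chain(candidates):
    """Return the candidate that wins under the precedence rules, in order."""
    # Python compares tuples lexicographically, which mirrors "in order of precedence".
    return max(candidates, key=lambda c: (c.suffix_groups, c.multiple_bonds, c.length))

# A 5-carbon chain carrying the -OH beats a longer chain that misses it.
chains = [Chain(suffix_groups=0, multiple_bonds=0, length=7),
          Chain(suffix_groups=1, multiple_bonds=0, length=5)]
print(parent_chain(chains).length)  # 5
```

The tuple key is the whole trick: later rules only break ties left by earlier ones, exactly as the text's "in order of precedence" demands.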
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 4::: A carbon–carbon bond is a covalent bond between two carbon atoms. The most common form is the single bond: a bond composed of two electrons, one from each of the two atoms. The carbon–carbon single bond is a sigma bond and is formed between one hybridized orbital from each of the carbon atoms. In ethane, the orbitals are sp3-hybridized orbitals, but single bonds formed between carbon atoms with other hybridizations do occur (e.g. sp2 to sp2). In fact, the carbon atoms in the single bond need not be of the same hybridization. Carbon atoms can also form double bonds in compounds called alkenes or triple bonds in compounds called alkynes. A double bond is formed with an sp2-hybridized orbital and a p-orbital that is not involved in the hybridization. A triple bond is formed with an sp-hybridized orbital and two p-orbitals from each atom. The use of the p-orbitals forms a pi bond. Chains and branching Carbon is one of the few elements that can form long chains of its own atoms, a property called catenation. 
This, coupled with the strength of the carbon–carbon bond, gives rise to an enormous number of molecular forms, many of which are important structural elements of life, so carbon compounds have their own field of study: organic chemistry. Branching is also common in C−C skeletons. Carbon atoms in a molecule are categorized by the number of carbon neighbors they have: A primary carbon has one carbon neighbor. A secondary carbon has two carbon neighbors. A tertiary carbon has three carbon neighbors. A quaternary carbon has four carbon neighbors. In "structurally complex organic molecules", it is the three-dimensional orientation of the carbon–carbon bonds at quaternary loci which dictates the shape of the molecule. Further, quaternary loci are found in many biologically active small molecules, such as cortisone and morphine. Synthesis Carbon–carbon bond-forming reactions are organic reactions in which a new carbon–carbon bond is formed. They are important in th The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which of the following is NOT a description of compounds? A. They can exist in the form of atoms or molecules. B. They consist of atoms of two or more elements bonded together. C. They have properties that are different from their component elements. D. They can be broken down into elements by chemical means but not physical means. Answer:
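The carbon-neighbor classification in Document 4 (primary through quaternary) reduces to counting degrees in the carbon skeleton. A minimal sketch; the adjacency-list encoding and the helper name are illustrative, not drawn from any cheminformatics library:

```python
# Classify each carbon in a skeleton by how many carbon neighbors it has.
# The skeleton is an adjacency list over carbon indices only (hydrogens implicit).
LABELS = {1: "primary", 2: "secondary", 3: "tertiary", 4: "quaternary"}

def classify_carbons(adjacency):
    """Map each carbon index to primary/secondary/tertiary/quaternary."""
    return {atom: LABELS[len(neighbors)] for atom, neighbors in adjacency.items()}

# 2-methylbutane (illustrative indexing): C2 is bonded to C0, C1, C3; C3 to C2, C4.
isopentane = {
    0: [2],
    1: [2],
    2: [0, 1, 3],
    3: [2, 4],
    4: [3],
}
print(classify_carbons(isopentane))
# C2, with three carbon neighbors, comes out tertiary; C3 secondary; the rest primary.
```

The same degree count is what the passage uses to locate quaternary loci in molecules like cortisone and morphine.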