Dataset columns:

| column | type | length / distinct values |
|---|---|---|
| id | string | length 6–15 |
| question_type | string | 1 distinct value (multiple_choice) |
| question | string | length 15–683 |
| choices | list | always 4 items |
| answer | string | 5 distinct values |
| explanation | string | 481 distinct values |
| prompt | string | length 1.75k–10.9k |
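A minimal sketch of loading and inspecting a dataset with the columns summarized above, using the Hugging Face `datasets` library. The repository name, split, and row index are hypothetical placeholders, not the actual dataset identifier.

```python
# Sketch only: the repo id "user/sciq-mc-prompts" is a placeholder.
from datasets import load_dataset

ds = load_dataset("user/sciq-mc-prompts", split="train")  # hypothetical repo id and split

row = ds[0]
print(row["id"], row["question_type"])  # e.g. "sciq-3294", "multiple_choice"
print(row["question"])                  # the bare question text
print(row["choices"])                   # always a list of 4 answer options
print(row["answer"])                    # the answer key, e.g. "A"
print(len(row["prompt"]))               # full prompt with retrieved documents (1.75k to 10.9k chars)
```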
sciq-3294
multiple_choice
What kind of energy constitutes the total kinetic energy of all the atoms that make up an object?
[ "thermal energy", "phenomena energy", "atmospheric energy", "kinetic energy" ]
A
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Specific kinetic energy is the kinetic energy of an object per unit of mass. It is defined as . Where is the specific kinetic energy and is velocity. It has units of J/kg, which is equivalent to m2/s2. Energy (physics) Document 2::: This is a list of topics that are included in high school physics curricula or textbooks. 
Mathematical Background SI Units Scalar (physics) Euclidean vector Motion graphs and derivatives Pythagorean theorem Trigonometry Motion and forces Motion Force Linear motion Linear motion Displacement Speed Velocity Acceleration Center of mass Mass Momentum Newton's laws of motion Work (physics) Free body diagram Rotational motion Angular momentum (Introduction) Angular velocity Centrifugal force Centripetal force Circular motion Tangential velocity Torque Conservation of energy and momentum Energy Conservation of energy Elastic collision Inelastic collision Inertia Moment of inertia Momentum Kinetic energy Potential energy Rotational energy Electricity and magnetism Ampère's circuital law Capacitor Coulomb's law Diode Direct current Electric charge Electric current Alternating current Electric field Electric potential energy Electron Faraday's law of induction Ion Inductor Joule heating Lenz's law Magnetic field Ohm's law Resistor Transistor Transformer Voltage Heat Entropy First law of thermodynamics Heat Heat transfer Second law of thermodynamics Temperature Thermal energy Thermodynamic cycle Volume (thermodynamics) Work (thermodynamics) Waves Wave Longitudinal wave Transverse waves Transverse wave Standing Waves Wavelength Frequency Light Light ray Speed of light Sound Speed of sound Radio waves Harmonic oscillator Hooke's law Reflection Refraction Snell's law Refractive index Total internal reflection Diffraction Interference (wave propagation) Polarization (waves) Vibrating string Doppler effect Gravity Gravitational potential Newton's law of universal gravitation Newtonian constant of gravitation See also Outline of physics Physics education Document 3::: Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams. Course content Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are: Kinematics Newton's laws of motion Work, energy and power Systems of particles and linear momentum Circular motion and rotation Oscillations and gravitation. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class. This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals. This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. 
Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday aftern Document 4::: In physics, energy density is the amount of energy stored in a given system or region of space per unit volume. It is sometimes confused with energy per unit mass which is properly called specific energy or . Often only the useful or extractable energy is measured, which is to say that inaccessible energy (such as rest mass energy) is ignored. In cosmological and other general relativistic contexts, however, the energy densities considered are those that correspond to the elements of the stress-energy tensor and therefore do include mass energy as well as energy densities associated with pressure. Energy per unit volume has the same physical units as pressure and in many situations is synonymous. For example, the energy density of a magnetic field may be expressed as and behaves like a physical pressure. Likewise, the energy required to compress a gas to a certain volume may be determined by multiplying the difference between the gas pressure and the external pressure by the change in volume. A pressure gradient describes the potential to perform work on the surroundings by converting internal energy to work until equilibrium is reached. Overview There are different types of energy stored in materials, and it takes a particular type of reaction to release each type of energy. In order of the typical magnitude of the energy released, these types of reactions are: nuclear, chemical, electrochemical, and electrical. Nuclear reactions take place in stars and nuclear power plants, both of which derive energy from the binding energy of nuclei. Chemical reactions are used by organisms to derive energy from food and by automobiles to derive energy from gasoline. Liquid hydrocarbons (fuels such as gasoline, diesel and kerosene) are today the densest way known to economically store and transport chemical energy at a large scale (1 kg of diesel fuel burns with the oxygen contained in ≈15 kg of air). Electrochemical reactions are used by most mobile devices such as laptop The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What kind of energy constitutes the total kinetic energy of all the atoms that make up an object? A. thermal energy B. phenomena energy C. atmospheric energy D. kinetic energy Answer:
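In the prompt above, Document 1 ("Specific kinetic energy is the kinetic energy of an object per unit of mass. It is defined as . Where is the specific kinetic energy and is velocity.") lost its inline formulas during extraction. The standard definition it quotes, consistent with the stated units of J/kg = m²/s², is

\[ e_k = \tfrac{1}{2} v^2 \]

where \(e_k\) is the kinetic energy per unit mass and \(v\) is the speed.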
sciq-6230
multiple_choice
What type of compound contains atoms of two or more different elements in its ring structure?
[ "polymer", "hydrocarbon", "heterocyclic", "aldehyde" ]
C
Relavent Documents: Document 0::: In chemistry, a ring is an ambiguous term referring either to a simple cycle of atoms and bonds in a molecule or to a connected set of atoms and bonds in which every atom and bond is a member of a cycle (also called a ring system). A ring system that is a simple cycle is called a monocycle or simple ring, and one that is not a simple cycle is called a polycycle or polycyclic ring system. A simple ring contains the same number of sigma bonds as atoms, and a polycyclic ring system contains more sigma bonds than atoms. A molecule containing one or more rings is called a cyclic compound, and a molecule containing two or more rings (either in the same or different ring systems) is termed a polycyclic compound. A molecule containing no rings is called an acyclic or open-chain compound. Homocyclic and heterocyclic rings A homocycle or homocyclic ring is a ring in which all atoms are of the same chemical element. A heterocycle or heterocyclic ring is a ring containing atoms of at least two different elements, i.e. a non-homocyclic ring. A carbocycle or carbocyclic ring is a homocyclic ring in which all of the atoms are carbon. An important class of carbocycles are alicyclic rings, and an important subclass of these are cycloalkanes. Rings and ring systems In common usage the terms "ring" and "ring system" are frequently interchanged, with the appropriate definition depending upon context. Typically a "ring" denotes a simple ring, unless otherwise qualified, as in terms like "polycyclic ring", "fused ring", "spiro ring" and "indole ring", where clearly a polycyclic ring system is intended. Likewise, a "ring system" typically denotes a polycyclic ring system, except in terms like "monocyclic ring system" or "pyridine ring system". To reduce ambiguity, IUPAC's recommendations on organic nomenclature avoid the use of the term "ring" by using phrases such as "monocyclic parent" and "polycyclic ring system". See also Cyclic compound Polycyclic compound Heterocyclic compound Document 1::: In chemical nomenclature, the IUPAC nomenclature of organic chemistry is a method of naming organic chemical compounds as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in the Nomenclature of Organic Chemistry (informally called the Blue Book). Ideally, every possible organic compound should have a name from which an unambiguous structural formula can be created. There is also an IUPAC nomenclature of inorganic chemistry. To avoid long and tedious names in normal communication, the official IUPAC naming recommendations are not always followed in practice, except when it is necessary to give an unambiguous and absolute definition to a compound. IUPAC names can sometimes be simpler than older names, as with ethanol, instead of ethyl alcohol. For relatively simple molecules they can be more easily understood than non-systematic names, which must be learnt or looked over. However, the common or trivial name is often substantially shorter and clearer, and so preferred. These non-systematic names are often derived from an original source of the compound. Also, very long names may be less clear than structural formulas. Basic principles In chemistry, a number of prefixes, suffixes and infixes are used to describe the type and position of the functional groups in the compound. The steps for naming an organic compound are: Identification of the parent hydride parent hydrocarbon chain. 
This chain must obey the following rules, in order of precedence: It should have the maximum number of substituents of the suffix functional group. By suffix, it is meant that the parent functional group should have a suffix, unlike halogen substituents. If more than one functional group is present, the one with highest group precedence should be used. It should have the maximum number of multiple bonds. It should have the maximum length. It should have the maximum number of substituents or branches cited as prefixes It should have the ma Document 2::: A cyclic compound (or ring compound) is a term for a compound in the field of chemistry in which one or more series of atoms in the compound is connected to form a ring. Rings may vary in size from three to many atoms, and include examples where all the atoms are carbon (i.e., are carbocycles), none of the atoms are carbon (inorganic cyclic compounds), or where both carbon and non-carbon atoms are present (heterocyclic compounds with rings containing both carbon and non-carbon). Depending on the ring size, the bond order of the individual links between ring atoms, and their arrangements within the rings, carbocyclic and heterocyclic compounds may be aromatic or non-aromatic; in the latter case, they may vary from being fully saturated to having varying numbers of multiple bonds between the ring atoms. Because of the tremendous diversity allowed, in combination, by the valences of common atoms and their ability to form rings, the number of possible cyclic structures, even of small size (e.g., < 17 total atoms) numbers in the many billions. Adding to their complexity and number, closing of atoms into rings may lock particular atoms with distinct substitution (by functional groups) such that stereochemistry and chirality of the compound results, including some manifestations that are unique to rings (e.g., configurational isomers). As well, depending on ring size, the three-dimensional shapes of particular cyclic structures – typically rings of five atoms and larger – can vary and interconvert such that conformational isomerism is displayed. Indeed, the development of this important chemical concept arose historically in reference to cyclic compounds. Finally, cyclic compounds, because of the unique shapes, reactivities, properties, and bioactivities that they engender, are the majority of all molecules involved in the biochemistry, structure, and function of living organisms, and in man-made molecules such as drugs, pesticides, etc. Structure and classification A cy Document 3::: A bicyclic molecule () is a molecule that features two joined rings. Bicyclic structures occur widely, for example in many biologically important molecules like α-thujene and camphor. A bicyclic compound can be carbocyclic (all of the ring atoms are carbons), or heterocyclic (the rings' atoms consist of at least two elements), like DABCO. Moreover, the two rings can both be aliphatic (e.g. decalin and norbornane), or can be aromatic (e.g. naphthalene), or a combination of aliphatic and aromatic (e.g. tetralin). Three modes of ring junction are possible for a bicyclic compound: In spiro compounds, the two rings share only one single atom, the spiro atom, which is usually a quaternary carbon. An example of a spirocyclic compound is the photochromic switch spiropyran. In fused/condensed bicyclic compounds, two rings share two adjacent atoms. In other words, the rings share one covalent bond, i.e. the bridgehead atoms are directly connected (e.g. α-thujene and decalin). 
In bridged bicyclic compounds, the two rings share three or more atoms, separating the two bridgehead atoms by a bridge containing at least one atom. For example, norbornane, also known as bicyclo[2.2.1]heptane, can be viewed as a pair of cyclopentane rings each sharing three of their five carbon atoms. Camphor is a more elaborate example. Nomenclature Bicyclic molecules are described by IUPAC nomenclature. The root of the compound name depends on the total number of atoms in all rings together, possibly followed by a suffix denoting the functional group with the highest priority. Numbering of the carbon chain always begins at one bridgehead atom (where the rings meet) and follows the carbon chain along the longest path, to the next bridgehead atom. Then numbering is continued along the second longest path and so on. Fused and bridged bicyclic compounds get the prefix bicyclo, whereas spirocyclic compounds get the prefix spiro. In between the prefix and the suffix, a pair of brackets with numerals Document 4::: The prismanes are a class of hydrocarbon compounds consisting of prism-like polyhedra of various numbers of sides on the polygonal base. Chemically, it is a series of fused cyclobutane rings (a ladderane, with all-cis/all-syn geometry) that wraps around to join its ends and form a band, with cycloalkane edges. Their chemical formula is (C2H2)n, where n is the number of cyclobutane sides (the size of the cycloalkane base), and that number also forms the basis for a system of nomenclature within this class. The first few chemicals in this class are: Triprismane, tetraprismane, and pentaprismane have been synthesized and studied experimentally, and many higher members of the series have been studied using computer models. The first several members do indeed have the geometry of a regular prism, with flat n-gon bases. As n becomes increasingly large, however, modeling experiments find that highly symmetric geometry is no longer stable, and the molecule distorts into less-symmetric forms. One series of modelling experiments found that starting with [11]prismane, the regular-prism form is not a stable geometry. For example, the structure of [12]prismane would have the cyclobutane chain twisted, with the dodecagonal bases non-planar and non-parallel. Nonconvex prismanes For large base-sizes, some of the cyclobutanes can be fused anti to each other, giving a non-convex polygon base. These are geometric isomers of the prismanes. Two isomers of [12]prismane that have been studied computationally are named helvetane and israelane, based on the star-like shapes of the rings that form their bases. This was explored computationally after originally being proposed as an April fools joke. Their names refer to the shapes found on the flags of Switzerland and Israel, respectively. Polyprismanes The polyprismanes consist of multiple prismanes stacked base-to-base. The carbons at each intermediate level—the n-gon bases where the prismanes fuse to each other—have no hydrogen atom The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of compound contains atoms of two or more different elements in its ring structure? A. polymer B. hydrocarbon C. heterocyclic D. aldehyde Answer:
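The homocyclic/heterocyclic distinction quoted in the prompt above (a heterocyclic ring contains atoms of at least two different elements) can be checked programmatically. Below is a minimal sketch, assuming the RDKit cheminformatics library is available; the function name and example SMILES strings are illustrative only.

```python
from rdkit import Chem

def classify_rings(smiles: str) -> list[str]:
    """Label each ring of the molecule 'heterocyclic' or 'homocyclic'."""
    mol = Chem.MolFromSmiles(smiles)
    labels = []
    for ring in mol.GetRingInfo().AtomRings():  # tuples of atom indices, one per ring
        elements = {mol.GetAtomWithIdx(i).GetSymbol() for i in ring}
        labels.append("heterocyclic" if len(elements) > 1 else "homocyclic")
    return labels

# Pyridine's ring contains both C and N; benzene's ring is all carbon.
print(classify_rings("c1ccncc1"))  # ['heterocyclic']
print(classify_rings("c1ccccc1"))  # ['homocyclic']
```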
sciq-3842
multiple_choice
What are the three forms of water as found in nature?
[ "solid , mixture , gas", "solid, liquid, gas", "balanced , liquid , gas", "ice, vapor, sleet" ]
B
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Water-use efficiency (WUE) refers to the ratio of water used in plant metabolism to water lost by the plant through transpiration. Two types of water-use efficiency are referred to most frequently: photosynthetic water-use efficiency (also called instantaneous water-use efficiency), which is defined as the ratio of the rate of carbon assimilation (photosynthesis) to the rate of transpiration, and water-use efficiency of productivity (also called integrated water-use efficiency), which is typically defined as the ratio of biomass produced to the rate of transpiration. Increases in water-use efficiency are commonly cited as a response mechanism of plants to moderate to severe soil water deficits and have been the focus of many programs that seek to increase crop tolerance to drought. However, there is some question as to the benefit of increased water-use efficiency of plants in agricultural systems, as the processes of increased yield production and decreased water loss due to transpiration (that is, the main driver of increases in water-use efficiency) are fundamentally opposed. If there existed a situation where water deficit induced lower transpirational rates without simultaneously decreasing photosynthetic rates and biomass production, then water-use efficiency would be both greatly improved and the desired trait in crop production. 
Document 2::: Wet Processing Engineering is one of the major streams in Textile Engineering or Textile manufacturing which refers to the engineering of textile chemical processes and associated applied science. The other three streams in textile engineering are yarn engineering, fabric engineering, and apparel engineering. The processes of this stream are involved or carried out in an aqueous stage. Hence, it is called a wet process which usually covers pre-treatment, dyeing, printing, and finishing. The wet process is usually done in the manufactured assembly of interlacing fibers, filaments and yarns, having a substantial surface (planar) area in relation to its thickness, and adequate mechanical strength to give it a cohesive structure. In other words, the wet process is done on manufactured fiber, yarn and fabric. All of these stages require an aqueous medium which is created by water. A massive amount of water is required in these processes per day. It is estimated that, on an average, almost 50–100 liters of water is used to process only 1 kilogram of textile goods, depending on the process engineering and applications. Water can be of various qualities and attributes. Not all water can be used in the textile processes; it must have some certain properties, quality, color and attributes of being used. This is the reason why water is a prime concern in wet processing engineering. Water Water consumption and discharge of wastewater are the two major concerns. The textile industry uses a large amount of water in its varied processes especially in wet operations such as pre-treatment, dyeing, and printing. Water is required as a solvent of various dyes and chemicals and it is used in washing or rinsing baths in different steps. Water consumption depends upon the application methods, processes, dyestuffs, equipment/machines and technology which may vary mill to mill and material composition. Longer processing sequences, processing of extra dark colors and reprocessing lead Document 3::: Liquid 3 is an urban photobioreactor tank full of water and micro-algae installed as way to produce oxygen in cities that have high in the air. They also have sitting benches and solar energy panel next to them and can be used to fill urban pockets. They may replace function of an adult tree. They have been used in Serbian cities, such as Belgrade. They are developed through UNDP. It contains 600 liters (159 gallons) of water and works by photosynthesis. Document 4::: The International Space Station Environmental Control and Life Support System (ECLSS) is a life support system that provides or controls atmospheric pressure, fire detection and suppression, oxygen levels, waste management and water supply. The highest priority for the ECLSS is the ISS atmosphere, but the system also collects, processes, and stores both waste and water produced and used by the crew—a process that recycles fluid from the sink, shower, toilet, and condensation from the air. The Elektron system aboard Zvezda and a similar system in Destiny generate oxygen aboard the station. The crew has a backup option in the form of bottled oxygen and Solid Fuel Oxygen Generation (SFOG) canisters. Carbon dioxide is removed from the air by the Vozdukh system in Zvezda, one Carbon Dioxide Removal Assembly (CDRA) located in the U.S. Lab module, and one CDRA in the U.S. Node 3 module. 
Other by-products of human metabolism, such as methane from flatulence and ammonia from sweat, are removed by activated charcoal filters or by the Trace Contaminant Control System (TCCS). Water recovery systems The ISS has two water recovery systems. Zvezda contains a water recovery system that processes water vapor from the atmosphere that could be used for drinking in an emergency but is normally fed to the Elektron system to produce oxygen. The American segment has a Water Recovery System installed during STS-126 that can process water vapour collected from the atmosphere and urine into water that is intended for drinking. The Water Recovery System was installed initially in Destiny on a temporary basis in November 2008 and moved into Tranquility (Node 3) in February 2010. The Water Recovery System consists of a Urine Processor Assembly and a Water Processor Assembly, housed in two of the three ECLSS racks. The Urine Processor Assembly uses a low pressure vacuum distillation process that uses a centrifuge to compensate for the lack of gravity and thus aid in separating liquids and g The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are the three forms of water as found in nature? A. solid , mixture , gas B. solid, liquid, gas C. balanced , liquid , gas D. ice, vapor, sleet Answer:
sciq-10568
multiple_choice
Around what percentage of the earth's surface water is contained in the ocean?
[ "86 %", "92 %", "97%", "99%" ]
C
Relavent Documents: Document 0::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 1::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. 
Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 2::: The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas. The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014. The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools. See also Marine Science Ministry of Fisheries and Aquatic Resources Development Document 3::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. 
Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 4::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Around what percentage of the earth's surface water is contained in the ocean? A. 86 % B. 92 % C. 97% D. 99% Answer:
sciq-8396
multiple_choice
Melting points are temperatures that are high enough to separate liquids from which substances?
[ "solids", "oils", "seeds", "gases" ]
A
Relavent Documents: Document 0::: A characteristic property is a chemical or physical property that helps identify and classify substances. The characteristic properties of a substance are always the same whether the sample being observed is large or small. Thus, conversely, if the property of a substance changes as the sample size changes, that property is not a characteristic property. Examples of physical properties that are not characteristic properties are mass and volume. Examples of characteristic properties include melting points, boiling points, density, viscosity, solubility, crystal shape, and color. Substances with characteristic properties can be separated. For example, in fractional distillation, liquids are separated using the boiling point. The water Boiling point is 212 degrees Fahrenheit. Identifying a substance Every characteristic property is unique to one given substance. Scientists use characteristic properties to identify unknown substances. However, characteristic properties are most useful for distinguishing between two or more substances, not identifying a single substance. For example, isopropanol and water can be distinguished by the characteristic property of odor. Characteristic properties are used because the sample size and the shape of the substance does not matter. For example, 1 gram of lead is the same color as 100 tons of lead. See also Intensive and extensive properties Document 1::: While chemically pure materials have a single melting point, chemical mixtures often partially melt at the solidus temperature (TS or Tsol), and fully melt at the higher liquidus temperature (TL or Tliq). The solidus is always less than or equal to the liquidus, but they need not coincide. If a gap exists between the solidus and liquidus it is called the freezing range, and within that gap, the substance consists of a mixture of solid and liquid phases (like a slurry). Such is the case, for example, with the olivine (forsterite-fayalite) system, which is common in earth's mantle. Definitions In chemistry, materials science, and physics, the liquidus temperature specifies the temperature above which a material is completely liquid, and the maximum temperature at which crystals can co-exist with the melt in thermodynamic equilibrium. The solidus is the locus of temperatures (a curve on a phase diagram) below which a given substance is completely solid (crystallized). The solidus temperature, specifies the temperature below which a material is completely solid, and the minimum temperature at which a melt can co-exist with crystals in thermodynamic equilibrium. Liquidus and solidus are mostly used for impure substances (mixtures) such as glasses, metal alloys, ceramics, rocks, and minerals. Lines of liquidus and solidus appear in the phase diagrams of binary solid solutions, as well as in eutectic systems away from the invariant point. When distinction is irrelevant For pure elements or compounds, e.g. pure copper, pure water, etc. the liquidus and solidus are at the same temperature, and the term melting point may be used. There are also some mixtures which melt at a particular temperature, known as congruent melting. One example is eutectic mixture. In a eutectic system, there is particular mixing ratio where the solidus and liquidus temperatures coincide at a point known as the invariant point. 
At the invariant point, the mixture undergoes a eutectic reaction wh Document 2::: Sublimation is the transition of a substance directly from the solid to the gas state, without passing through the liquid state. Sublimation is an endothermic process that occurs at temperatures and pressures below a substance's triple point in its phase diagram, which corresponds to the lowest pressure at which the substance can exist as a liquid. The reverse process of sublimation is deposition or desublimation, in which a substance passes directly from a gas to a solid phase. Sublimation has also been used as a generic term to describe a solid-to-gas transition (sublimation) followed by a gas-to-solid transition (deposition). While vaporization from liquid to gas occurs as evaporation from the surface if it occurs below the boiling point of the liquid, and as boiling with formation of bubbles in the interior of the liquid if it occurs at the boiling point, there is no such distinction for the solid-to-gas transition which always occurs as sublimation from the surface. At normal pressures, most chemical compounds and elements possess three different states at different temperatures. In these cases, the transition from the solid to the gaseous state requires an intermediate liquid state. The pressure referred to is the partial pressure of the substance, not the total (e.g. atmospheric) pressure of the entire system. Thus, any solid can sublimate if its vapour pressure is higher than the surrounding partial pressure of the same substance, and in some cases sublimates at an appreciable rate (e.g. water ice just below 0 °C). For some substances, such as carbon and arsenic, sublimation is much easier than evaporation from the melt, because the pressure of their triple point is very high, and it is difficult to obtain them as liquids. The term sublimation refers to a physical change of state and is not used to describe the transformation of a solid to a gas in a chemical reaction. For example, the dissociation on heating of solid ammonium chloride into hydrogen chlori Document 3::: The Slip melting point (SMP) or "slip point" is one conventional definition of the melting point of a waxy solid. It is determined by casting a 10 mm column of the solid in a glass tube with an internal diameter of about 1 mm and a length of about 80 mm, and then immersing it in a temperature-controlled water bath. The slip point is the temperature at which the column of the solid begins to rise in the tube due to buoyancy, and because the outside surface of the solid is molten. This is a popular method for fats and waxes, because they tend to be mixtures of compounds with a range of molecular masses, without well-defined melting points. Document 4::: This is a list of gases at standard conditions, which means substances that boil or sublime at or below and 1 atm pressure and are reasonably stable. List This list is sorted by boiling point of gases in ascending order, but can be sorted on different values. "sub" and "triple" refer to the sublimation point and the triple point, which are given in the case of a substance that sublimes at 1 atm; "dec" refers to decomposition. "~" means approximately. Known as gas The following list has substances known to be gases, but with an unknown boiling point. Fluoroamine Trifluoromethyl trifluoroethyl trioxide CF3OOOCF2CF3 boils between 10 and 20° Bis-trifluoromethyl carbonate boils between −10 and +10° possibly +12, freezing −60° Difluorodioxirane boils between −80 and −90°. 
Difluoroaminosulfinyl fluoride F2NS(O)F is a gas but decomposes over several hours Trifluoromethylsulfinyl chloride CF3S(O)Cl Nitrosyl cyanide ?−20° blue-green gas 4343-68-4 Thiazyl chloride NSCl greenish yellow gas; trimerises. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Melting points are temperatures that are high enough to separate liquids from which substances? A. solids B. oils C. seeds D. gases Answer:
sciq-5195
multiple_choice
What structure connects the pharynx to the trachea?
[ "larynx", "sternum", "aorta", "thyroid" ]
A
Relavent Documents: Document 0::: The pharyngeal apparatus is an embryological structure. It consists of: pharyngeal grooves (from ectoderm) pharyngeal arches (from mesoderm) pharyngeal pouches (from endoderm) and related membranes. Document 1::: The laryngotracheal groove is a precursor for the larynx and trachea. The rudiment of the respiratory organs appears as a median longitudinal groove in the ventral wall of the pharynx. The groove deepens, and its lips fuse to form a septum, which grows from below upward and converts the groove into a tube, the laryngotracheal tube. The cephalic end opens into the pharynx through a slit-like aperture formed by the persistent anterior part of the groove. Initially, the cephalic end is in open communication with the foregut, but eventually it becomes separated by the indentations of the mesoderm, the tracheoesophageal folds. When the tracheoesophageal folds fuse in the midline to form the tracheoesophageal septum, the foregut is divided into the trachea ventrally and the esophagus dorsally. The tube is lined by an endoderm, from which the epithelial lining of the respiratory tract is developed. The cephalic part of the tube becomes the larynx, and its next succeeding part is the trachea, while from its caudal end, a respiratory diverticulum appears as the lung bud. The lung bud branches into two lateral outgrowths known as the bronchial buds, one on each side of the trachea. The right and left bronchial buds branch into main (primary), lobar (secondary), segmental (tertiary), and subsegmental bronchi and lead to the development of the lungs. The Hox complex, FGF-10 (fibroblast growth factor), BMP-4 (bone morphogenetic protein), N-myc (a proto-oncogene), syndecan (a proteglycan), tenascin (an extracellular matrix protein), and epimorphin (a protein) appear to play a role in the development of the respiratory system. Document 2::: The pharynx (: pharynges) is the part of the throat behind the mouth and nasal cavity, and above the esophagus and trachea (the tubes going down to the stomach and the lungs respectively). It is found in vertebrates and invertebrates, though its structure varies across species. The pharynx carries food to the esophagus and air to the larynx. The flap of cartilage called the epiglottis stops food from entering the larynx. In humans, the pharynx is part of the digestive system and the conducting zone of the respiratory system. (The conducting zone—which also includes the nostrils of the nose, the larynx, trachea, bronchi, and bronchioles—filters, warms and moistens air and conducts it into the lungs). The human pharynx is conventionally divided into three sections: the nasopharynx, oropharynx, and laryngopharynx. In humans, two sets of pharyngeal muscles form the pharynx and determine the shape of its lumen. They are arranged as an inner layer of longitudinal muscles and an outer circular layer. Structure Nasopharynx The upper portion of the pharynx, the nasopharynx, extends from the base of the skull to the upper surface of the soft palate. It includes the space between the internal nares and the soft palate and lies above the oral cavity. The adenoids, also known as the pharyngeal tonsils, are lymphoid tissue structures located in the posterior wall of the nasopharynx. Waldeyer's tonsillar ring is an annular arrangement of lymphoid tissue in both the nasopharynx and oropharynx. The nasopharynx is lined by respiratory epithelium that is pseudostratified, columnar, and ciliated. 
Polyps or mucus can obstruct the nasopharynx, as can congestion due to an upper respiratory infection. The auditory tube, which connects the middle ear to the pharynx, opens into the nasopharynx at the pharyngeal opening of the auditory tube. The opening and closing of the auditory tubes serves to equalize the barometric pressure in the middle ear with that of the ambient atmosphere. Th Document 3::: The epipharyngeal groove is a ciliated groove along the dorsal side of the inside of the pharynx in some plankton-feeding early chordates, such as Amphioxus. It helps to carry a stream of mucus with plankton stuck in it, through the pharynx into the gut to be digested. The subnotochordal rod or hypochord is a transient structure that appears ventral to the notochord in the heads of embryos of some vertebrates. Its appearance is stimulated by a chemical secreted by the notochord. The subnotochordal rod helps to stimulate development of the dorsal aorta. There is an opinion that these two structures are homologous. Document 4::: The vocal tract is the cavity in human bodies and in animals where the sound produced at the sound source (larynx in mammals; syrinx in birds) is filtered. In birds it consists of the trachea, the syrinx, the oral cavity, the upper part of the esophagus, and the beak. In mammals it consists of the laryngeal cavity, the pharynx, the oral cavity, and the nasal cavity. The estimated average length of the vocal tract in men is 16.9 cm and 14.1 cm in women. See also Language Talking birds – species of birds capable of imitating human sounds, but without known comprehension Speech organ Speech synthesis Manner of articulation The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What structure connects the pharynx to the trachea? A. larynx B. sternum C. aorta D. thyroid Answer:
sciq-347
multiple_choice
The number of what subatomic particles can vary between atoms of the same element?
[ "neutrons", "neurons", "electrons", "protons" ]
A
Relavent Documents: Document 0::: The subatomic scale is the domain of physical size that encompasses objects smaller than an atom. It is the scale at which the atomic constituents, such as the nucleus containing protons and neutrons, and the electrons in their orbitals, become apparent. The subatomic scale includes the many thousands of times smaller subnuclear scale, which is the scale of physical size at which constituents of the protons and neutrons - particularly quarks - become apparent. See also Astronomical scale the opposite end of the spectrum Subatomic particles Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: The elementary charge, usually denoted by , is a fundamental physical constant, defined as the electric charge carried by a single proton or, equivalently, the magnitude of the negative electric charge carried by a single electron, which has charge −1 . In the SI system of units, the value of the elementary charge is exactly defined as  =  coulombs, or 160.2176634 zeptocoulombs (zC). Since the 2019 redefinition of SI base units, the seven SI base units are defined by seven fundamental physical constants, of which the elementary charge is one. In the centimetre–gram–second system of units (CGS), the corresponding quantity is . Robert A. Millikan and Harvey Fletcher's oil drop experiment first directly measured the magnitude of the elementary charge in 1909, differing from the modern accepted value by just 0.6%. 
Under assumptions of the then-disputed atomic theory, the elementary charge had also been indirectly inferred to ~3% accuracy from blackbody spectra by Max Planck in 1901 and (through the Faraday constant) at order-of-magnitude accuracy by Johann Loschmidt's measurement of the Avogadro number in 1865. As a unit In some natural unit systems, such as the system of atomic units, e functions as the unit of electric charge. The use of elementary charge as a unit was promoted by George Johnstone Stoney in 1874 for the first system of natural units, called Stoney units. Later, he proposed the name electron for this unit. At the time, the particle we now call the electron was not yet discovered and the difference between the particle electron and the unit of charge electron was still blurred. Later, the name electron was assigned to the particle and the unit of charge e lost its name. However, the unit of energy electronvolt (eV) is a remnant of the fact that the elementary charge was once called electron. In other natural unit systems, the unit of charge is defined as with the result that where is the fine-structure constant, is the speed of light, is Document 3::: Isotopes are distinct nuclear species (or nuclides, as technical term) of the same chemical element. They have the same atomic number (number of protons in their nuclei) and position in the periodic table (and hence belong to the same chemical element), but differ in nucleon numbers (mass numbers) due to different numbers of neutrons in their nuclei. While all isotopes of a given element have almost the same chemical properties, they have different atomic masses and physical properties. The term isotope is formed from the Greek roots isos (ἴσος "equal") and topos (τόπος "place"), meaning "the same place"; thus, the meaning behind the name is that different isotopes of a single element occupy the same position on the periodic table. It was coined by Scottish doctor and writer Margaret Todd in 1913 in a suggestion to the British chemist Frederick Soddy. The number of protons within the atom's nucleus is called its atomic number and is equal to the number of electrons in the neutral (non-ionized) atom. Each atomic number identifies a specific element, but not the isotope; an atom of a given element may have a wide range in its number of neutrons. The number of nucleons (both protons and neutrons) in the nucleus is the atom's mass number, and each isotope of a given element has a different mass number. For example, carbon-12, carbon-13, and carbon-14 are three isotopes of the element carbon with mass numbers 12, 13, and 14, respectively. The atomic number of carbon is 6, which means that every carbon atom has 6 protons so that the neutron numbers of these isotopes are 6, 7, and 8 respectively. Isotope vs. nuclide A nuclide is a species of an atom with a specific number of protons and neutrons in the nucleus, for example, carbon-13 with 6 protons and 7 neutrons. The nuclide concept (referring to individual nuclear species) emphasizes nuclear properties over chemical properties, whereas the isotope concept (grouping all atoms of each element) emphasizes chemical over Document 4::: Secondary electrons are electrons generated as ionization products. They are called 'secondary' because they are generated by other radiation (the primary radiation). This radiation can be in the form of ions, electrons, or photons with sufficiently high energy, i.e. exceeding the ionization potential. 
Photoelectrons can be considered an example of secondary electrons where the primary radiation are photons; in some discussions photoelectrons with higher energy (>50 eV) are still considered "primary" while the electrons freed by the photoelectrons are "secondary". Applications Secondary electrons are also the main means of viewing images in the scanning electron microscope (SEM). The range of secondary electrons depends on the energy. Plotting the inelastic mean free path as a function of energy often shows characteristics of the "universal curve" familiar to electron spectroscopists and surface analysts. This distance is on the order of a few nanometers in metals and tens of nanometers in insulators. This small distance allows such fine resolution to be achieved in the SEM. For SiO2, for a primary electron energy of 100 eV, the secondary electron range is up to 20 nm from the point of incidence. See also Delta ray Everhart-Thornley detector The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The number of what subatomic particles can vary between atoms of the same element? A. neutrons B. neurons C. electrons D. protons Answer:
sciq-4206
multiple_choice
What do you call any substance in food that the body needs?
[ "dietary", "antioxidant", "beneficial", "a nutrient" ]
D
Relavent Documents: Document 0::: Human nutrition deals with the provision of essential nutrients in food that are necessary to support human life and good health. Poor nutrition is a chronic problem often linked to poverty, food security, or a poor understanding of nutritional requirements. Malnutrition and its consequences are large contributors to deaths, physical deformities, and disabilities worldwide. Good nutrition is necessary for children to grow physically and mentally, and for normal human biological development. Overview The human body contains chemical compounds such as water, carbohydrates, amino acids (found in proteins), fatty acids (found in lipids), and nucleic acids (DNA and RNA). These compounds are composed of elements such as carbon, hydrogen, oxygen, nitrogen, and phosphorus. Any study done to determine nutritional status must take into account the state of the body before and after experiments, as well as the chemical composition of the whole diet and of all the materials excreted and eliminated from the body (including urine and feces). Nutrients The seven major classes of nutrients are carbohydrates, fats, fiber, minerals, proteins, vitamins, and water. Nutrients can be grouped as either macronutrients or micronutrients (needed in small quantities). Carbohydrates, fats, and proteins are macronutrients, and provide energy. Water and fiber are macronutrients but do not provide energy. The micronutrients are minerals and vitamins. The macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built), and energy. Some of the structural material can also be used to generate energy internally, and in either case it is measured in Joules or kilocalories (often called "Calories" and written with a capital 'C' to distinguish them from little 'c' calories). Carbohydrates and proteins provide 17 kJ approximately (4 kcal) of energy per gram, while fats prov Document 1::: Animal nutrition focuses on the dietary nutrients needs of animals, primarily those in agriculture and food production, but also in zoos, aquariums, and wildlife management. Constituents of diet Macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built) and energy. Some of the structural material can be used to generate energy internally, though the net energy depends on such factors as absorption and digestive effort, which vary substantially from instance to instance. Vitamins, minerals, fiber, and water do not provide energy, but are required for other reasons. A third class dietary material, fiber (i.e., non-digestible material such as cellulose), seems also to be required, for both mechanical and biochemical reasons, though the exact reasons remain unclear. Molecules of carbohydrates and fats consist of carbon, hydrogen, and oxygen atoms. Carbohydrates range from simple monosaccharides (glucose, fructose, galactose) to complex polysaccharides (starch). Fats are triglycerides, made of assorted fatty acid monomers bound to glycerol backbone. Some fatty acids, but not all, are essential in the diet: they cannot be synthesized in the body. Protein molecules contain nitrogen atoms in addition to carbon, oxygen, and hydrogen. The fundamental components of protein are nitrogen-containing amino acids. Essential amino acids cannot be made by the animal. 
Some of the amino acids are convertible (with the expenditure of energy) to glucose and can be used for energy production just as ordinary glucose. By breaking down existing protein, some glucose can be produced internally; the remaining amino acids are discarded, primarily as urea in urine. This occurs normally only during prolonged starvation. Other dietary substances found in plant foods (phytochemicals, polyphenols) are not identified as essential nutrients but appear to impact healt Document 2::: A bioactive compound is a compound that has an effect on a living organism, tissue or cell, usually demonstrated by basic research in vitro or in vivo in the laboratory. While dietary nutrients are essential to life, bioactive compounds have not been proved to be essential as the body can function without them or because their actions are obscured by nutrients fulfilling the function. Bioactive compounds lack sufficient evidence of effect or safety, and consequently they are usually unregulated and may be sold as dietary supplements. Origin and examples Bioactive compounds are commonly derived from plants, animal products, or can be synthetically produced. Examples of plant bioactive compounds are carotenoids, polyphenols, or phytosterols. Examples in animal products are fatty acids found in milk and fish. Other examples are flavonoids, caffeine, choline, coenzyme Q, creatine, dithiolthiones, polysaccharides, phytoestrogens, glucosinolates, and prebiotics. In the diet The NIH Office of Dietary Supplements proposed a definition of bioactives in the context of human nutrition as "compounds that are constituents in foods and dietary supplements, other than those needed to meet basic human nutritional needs, which are responsible for changes in health status", although a range of other definitions are used. Traditionally, dietary recommendations, such as DRIs used in Canada and the United States, focused on deficiencies causing diseases, and therefore emphasized defined essential nutrients. Bioactive compounds have not been adequately defined for the extent of their bioactivity in humans, indicating that their role in disease prevention and maintenance remains unknown. Dietary fiber, for example, is a non-essential dietary component without a DRI, yet is commonly recommended for the diet to reduce the risk of cardiovascular diseases and cancer. Frameworks for developing DRIs for bioactive compounds have to establish an association with health, safety and non-to Document 3::: Relatively speaking, the brain consumes an immense amount of energy in comparison to the rest of the body. The mechanisms involved in the transfer of energy from foods to neurons are likely to be fundamental to the control of brain function. Human bodily processes, including the brain, all require both macronutrients, as well as micronutrients. Insufficient intake of selected vitamins, or certain metabolic disorders, may affect cognitive processes by disrupting the nutrient-dependent processes within the body that are associated with the management of energy in neurons, which can subsequently affect synaptic plasticity, or the ability to encode new memories. Macronutrients The human brain requires nutrients obtained from the diet to develop and sustain its physical structure and cognitive functions. Additionally, the brain requires caloric energy predominately derived from the primary macronutrients to operate. The three primary macronutrients include carbohydrates, proteins, and fats. 
Each macronutrient can impact cognition through multiple mechanisms, including glucose and insulin metabolism, neurotransmitter actions, oxidative stress and inflammation, and the gut-brain axis. Inadequate macronutrient consumption or proportion could impair optimal cognitive functioning and have long-term health implications. Carbohydrates Through digestion, dietary carbohydrates are broken down and converted into glucose, which is the sole energy source for the brain. Optimal brain function relies on adequate carbohydrate consumption, as carbohydrates provide the quickest source of glucose for the brain. Glucose deficiencies such as hypoglycaemia reduce available energy for the brain and impair all cognitive processes and performance. Additionally, situations with high cognitive demand, such as learning a new task, increase brain glucose utilization, depleting blood glucose stores and initiating the need for supplementation. Complex carbohydrates, especially those with high d Document 4::: Food science is the basic science and applied science of food; its scope starts at overlap with agricultural science and nutritional science and leads through the scientific aspects of food safety and food processing, informing the development of food technology. Food science brings together multiple scientific disciplines. It incorporates concepts from fields such as chemistry, physics, physiology, microbiology, and biochemistry. Food technology incorporates concepts from chemical engineering, for example. Activities of food scientists include the development of new food products, design of processes to produce these foods, choice of packaging materials, shelf-life studies, sensory evaluation of products using survey panels or potential consumers, as well as microbiological and chemical testing. Food scientists may study more fundamental phenomena that are directly linked to the production of food products and its properties. Definition The Institute of Food Technologists defines food science as "the discipline in which the engineering, biological, and physical sciences are used to study the nature of foods, the causes of deterioration, the principles underlying food processing, and the improvement of foods for the consuming public". The textbook Food Science defines food science in simpler terms as "the application of basic sciences and engineering to study the physical, chemical, and biochemical nature of foods and the principles of food processing". Disciplines Some of the subdisciplines of food science are described below. Food chemistry Food chemistry is the study of chemical processes and interactions of all biological and non-biological components of foods. The biological substances include such items as meat, poultry, lettuce, beer, and milk. It is similar to biochemistry in its main components such as carbohydrates, lipids, and protein, but it also includes areas such as water, vitamins, minerals, enzymes, food additives, flavors, and colors. This The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do you call any substance in food that the body needs? A. dietary B. antioxidant C. beneficial D. a nutrient Answer:
sciq-3159
multiple_choice
Are zeroes that show only where the decimal point falls significant or not significant?
[ "neither", "sometimes significant", "significant", "not significant" ]
D
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: The Texas Math and Science Coaches Association or TMSCA is an organization for coaches of academic University Interscholastic League teams in Texas middle schools and high schools, specifically those that compete in mathematics and science-related tests. Events There are four events in the TMSCA at both the middle and high school level: Number Sense, General Mathematics, Calculator Applications, and General Science. Number Sense is an 80-question exam that students are given only 10 minutes to solve. Additionally, no scratch work or paper calculations are allowed. These questions range from simple calculations such as 99+98 to more complicated operations such as 1001×1938. Each calculation is able to be done with a certain trick or shortcut that makes the calculations easier. The high school exam includes calculus and other difficult topics in the questions also with the same rules applied as to the middle school version. It is well known that the grading for this event is particularly stringent as errors such as writing over a line or crossing out potential answers are considered as incorrect answers. General Mathematics is a 50-question exam that students are given only 40 minutes to solve. These problems are usually more challenging than questions on the Number Sense test, and the General Mathematics word problems take more thinking to figure out. Every problem correct is worth 5 points, and for every problem incorrect, 2 points are deducted. 
Tiebreakers are determined by the person that misses the first problem and by percent accuracy. Calculator Applications is an 80-question exam that students are given only 30 minutes to solve. This test requires practice on the calculator, knowledge of a few crucial formulas, and much speed and intensity. Memorizing formulas, tips, and tricks will not be enough. In this event, plenty of practice is necessary in order to master the locations of the keys and develop the speed necessary. All correct questions are worth 5 Document 2::: Mathematics education in the United States varies considerably from one state to the next, and even within a single state. However, with the adoption of the Common Core Standards in most states and the District of Columbia beginning in 2010, mathematics content across the country has moved into closer agreement for each grade level. The SAT, a standardized university entrance exam, has been reformed to better reflect the contents of the Common Core. However, many students take alternatives to the traditional pathways, including accelerated tracks. As of 2023, twenty-seven states require students to pass three math courses before graduation from high school, and seventeen states and the District of Columbia require four. Compared to other developed countries in the Organisation for Economic Co-operation and Development (OECD), the average level of mathematical literacy of American students is mediocre. As in many other countries, math scores dropped even further during the COVID-19 pandemic. Secondary-school algebra proves to be the turning point of difficulty many students struggle to surmount, and as such, many students are ill-prepared for collegiate STEM programs, or future high-skilled careers. Meanwhile, the number of eighth-graders enrolled in Algebra I has fallen between the early 2010s and early 2020s. Across the United States, there is a shortage of qualified mathematics instructors. Despite their best intentions, parents may transmit their mathematical anxiety to their children, who may also have school teachers who fear mathematics. About one in five American adults are functionally innumerate. While an overwhelming majority agree that mathematics is important, many, especially the young, are not confident of their own mathematical ability. Curricular content and standards Each U.S. state sets its own curricular standards, and details are usually set by each local school district. Although there are no federal standards, since 2015 most states have bas Document 3::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. 
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 4::: The Standards for Educational and Psychological Testing is a set of testing standards developed jointly by the American Educational Research Association (AERA), American Psychological Association (APA), and the National Council on Measurement in Education (NCME). The new edition of The Standards for Educational and Psychological Testing was released in July 2014. Five areas received particular attention in the 2014 revision: 1. Examining accountability issues associated with the uses of tests in educational policy 2. Broadening the concept of accessibility of tests for all examinees 3. Representing more comprehensively the role of tests in the workplace 4. Taking into account the expanding role of technology in testing 5. Improving the structure of the book for better communication of the standards Previous versions It was published on 1985, the 1999 Standards for Educational and Psychological Testing has more in-depth background material in each chapter, a greater number of standards, and a significantly expanded glossary and index. The 1999 version Standards reflects changes in United States federal law and measurement trends affecting validity; testing individuals with disabilities or different linguistic backgrounds; and new types of tests as well as new uses of existing tests. The Standards is written for the professional and for the educated layperson and addresses professional and technical issues of test development and use in education, psychology and employment. Overview of organization and content Part I: Test Construction, Evaluation, and Documentation 1. Validity 2. Reliability and Errors of Measurement 3. Test Development and Revision 4. Scales, Norms, and Score Comparability 5. Test Administration, Scoring, and Reporting 6. Supporting Documentation for Tests Part II: Fairness in Testing 7. Fairness in Testing and Test Use 8. The Rights and Responsibilities of Test Takers 9. Testing Individuals of Diverse Linguistic Backgrounds 10. Test The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Are zeroes that show only where the decimal point fall significant or not significant? A. neither B. sometimes significant C. significant D. not significant Answer:
sciq-8648
multiple_choice
What develops into embryos inside seeds, from which the next sporophyte generation grows?
[ "pupa", "zygotes", "secretions", "buds" ]
B
Relavent Documents: Document 0::: Plant embryonic development, also plant embryogenesis is a process that occurs after the fertilization of an ovule to produce a fully developed plant embryo. This is a pertinent stage in the plant life cycle that is followed by dormancy and germination. The zygote produced after fertilization must undergo various cellular divisions and differentiations to become a mature embryo. An end stage embryo has five major components including the shoot apical meristem, hypocotyl, root meristem, root cap, and cotyledons. Unlike the embryonic development in animals, and specifically in humans, plant embryonic development results in an immature form of the plant, lacking most structures like leaves, stems, and reproductive structures. However, both plants and animals including humans, pass through a phylotypic stage that evolved independently and that causes a developmental constraint limiting morphological diversification. Morphogenic events Embryogenesis occurs naturally as a result of single, or double fertilization, of the ovule, giving rise to two distinct structures: the plant embryo and the endosperm which go on to develop into a seed. The zygote goes through various cellular differentiations and divisions in order to produce a mature embryo. These morphogenic events form the basic cellular pattern for the development of the shoot-root body and the primary tissue layers; it also programs the regions of meristematic tissue formation. The following morphogenic events are only particular to eudicots, and not monocots. Plant Following fertilization, the zygote and endosperm are present within the ovule, as seen in stage I of the illustration on this page. Then the zygote undergoes an asymmetric transverse cell division that gives rise to two cells - a small apical cell resting above a large basal cell. These two cells are very different, and give rise to different structures, establishing polarity in the embryo. apical cellThe small apical cell is on the top and contains Document 1::: Important structures in plant development are buds, shoots, roots, leaves, and flowers; plants produce these tissues and structures throughout their life from meristems located at the tips of organs, or between mature tissues. Thus, a living plant always has embryonic tissues. By contrast, an animal embryo will very early produce all of the body parts that it will ever have in its life. When the animal is born (or hatches from its egg), it has all its body parts and from that point will only grow larger and more mature. However, both plants and animals pass through a phylotypic stage that evolved independently and that causes a developmental constraint limiting morphological diversification. According to plant physiologist A. Carl Leopold, the properties of organization seen in a plant are emergent properties which are more than the sum of the individual parts. "The assembly of these tissues and functions into an integrated multicellular organism yields not only the characteristics of the separate parts and processes but also quite a new set of characteristics which would not have been predictable on the basis of examination of the separate parts." Growth A vascular plant begins from a single celled zygote, formed by fertilisation of an egg cell by a sperm cell. From that point, it begins to divide to form a plant embryo through the process of embryogenesis. 
As this happens, the resulting cells will organize so that one end becomes the first root while the other end forms the tip of the shoot. In seed plants, the embryo will develop one or more "seed leaves" (cotyledons). By the end of embryogenesis, the young plant will have all the parts necessary to begin in its life. Once the embryo germinates from its seed or parent plant, it begins to produce additional organs (leaves, stems, and roots) through the process of organogenesis. New roots grow from root meristems located at the tip of the root, and new stems and leaves grow from shoot meristems located at the Document 2::: Germination is the process by which an organism grows from a seed or spore. The term is applied to the sprouting of a seedling from a seed of an angiosperm or gymnosperm, the growth of a sporeling from a spore, such as the spores of fungi, ferns, bacteria, and the growth of the pollen tube from the pollen grain of a seed plant. Seed plants Germination is usually the growth of a plant contained within a seed; it results in the formation of the seedling. It is also the process of reactivation of metabolic machinery of the seed resulting in the emergence of radicle and plumule. The seed of a vascular plant is a small package produced in a fruit or cone after the union of male and female reproductive cells. All fully developed seeds contain an embryo and, in most plant species some store of food reserves, wrapped in a seed coat. Some plants produce varying numbers of seeds that lack embryos; these are empty seeds which never germinate. Dormant seeds are viable seeds that do not germinate because they require specific internal or environmental stimuli to resume growth. Under proper conditions, the seed begins to germinate and the embryo resumes growth, developing into a seedling. Disturbance of soil can result in vigorous plant growth by exposing seeds already in the soil to changes in environmental factors where germination may have previously been inhibited by depth of the seeds or soil that was too compact. This is often observed at gravesites after a burial. Seed germination depends on both internal and external conditions. The most important external factors include right temperature, water, oxygen or air and sometimes light or darkness. Various plants require different variables for successful seed germination. Often this depends on the individual seed variety and is closely linked to the ecological conditions of a plant's natural habitat. For some seeds, their future germination response is affected by environmental conditions during seed formation; most ofte Document 3::: A seedling is a young sporophyte developing out of a plant embryo from a seed. Seedling development starts with germination of the seed. A typical young seedling consists of three main parts: the radicle (embryonic root), the hypocotyl (embryonic shoot), and the cotyledons (seed leaves). The two classes of flowering plants (angiosperms) are distinguished by their numbers of seed leaves: monocotyledons (monocots) have one blade-shaped cotyledon, whereas dicotyledons (dicots) possess two round cotyledons. Gymnosperms are more varied. For example, pine seedlings have up to eight cotyledons. The seedlings of some flowering plants have no cotyledons at all. These are said to be acotyledons. The plumule is the part of a seed embryo that develops into the shoot bearing the first true leaves of a plant. 
In most seeds, for example the sunflower, the plumule is a small conical structure without any leaf structure. Growth of the plumule does not occur until the cotyledons have grown above ground. This is epigeal germination. However, in seeds such as the broad bean, a leaf structure is visible on the plumule in the seed. These seeds develop by the plumule growing up through the soil with the cotyledons remaining below the surface. This is known as hypogeal germination. Photomorphogenesis and etiolation Dicot seedlings grown in the light develop short hypocotyls and open cotyledons exposing the epicotyl. This is also referred to as photomorphogenesis. In contrast, seedlings grown in the dark develop long hypocotyls and their cotyledons remain closed around the epicotyl in an apical hook. This is referred to as skotomorphogenesis or etiolation. Etiolated seedlings are yellowish in color as chlorophyll synthesis and chloroplast development depend on light. They will open their cotyledons and turn green when treated with light. In a natural situation, seedling development starts with skotomorphogenesis while the seedling is growing through the soil and attempting to reach the Document 4::: In plant science, the spermosphere is the zone in the soil surrounding a germinating seed. This is a small volume with radius perhaps 1 cm but varying with seed type, the variety of soil microorganisms, the level of soil moisture, and other factors. Within the spermosphere a range of complex interactions take place among the germinating seed, the soil, and the microbiome. Because germination is a brief process, the spermosphere is transient, but the impact of the microbial activity within the spermosphere can have strong and long-lasting effects on the developing plant. Seeds exude various molecules that influence their surrounding microbial communities, either inhibiting or stimulating their growth. The composition of the exudates varies according to the plant type and such properties of the soil as its pH and moisture content. With these biochemical effects, the spermosphere develops both downward—to form the rhizosphere (upon the emergence of the plant's radicle)—and upward to form the laimosphere, which is the soil surrounding the growing plant stem. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What develops into embryos inside seeds, from which the next sporophyte generation grows? A. pupa B. zygotes C. secretions D. buds Answer:
sciq-1908
multiple_choice
If humans were to artificially intervene and fertilize the egg of a bald eagle with the sperm of an African fish eagle and a chick did hatch, that offspring, called a hybrid (a cross between two species), would probably be this?
[ "mutated", "fertile", "infertile", "nonviable" ]
C
Relavent Documents: Document 0::: Hybrid inviability is a post-zygotic barrier, which reduces a hybrid's capacity to mature into a healthy, fit adult. The relatively low health of these hybrids relative to pure-breed individuals prevents gene flow between species. Thus, hybrid inviability acts as an isolating mechanism, limiting hybridization and allowing for the differentiation of species. The barrier of hybrid inviability occurs after mating species overcome pre-zygotic barriers (behavioral, mechanical, etc.) to produce a zygote. The barrier emerges from the cumulative effect of parental genes; these conflicting genes interfere with the embryo's development and prevents its maturation. Most often, the hybrid embryo dies before birth. However, sometimes, the offspring develops fully with mixed traits, forming a frail, often infertile adult. This hybrid displays reduced fitness, marked by decreased rates of survival and reproduction relative to the parent species. The offspring fails to compete with purebred individuals, limiting genes flow between species. Evolution of Hybrid Inviability in Tetrapods In the 1970s, Allan C. Wilson and his colleagues first investigated the evolution of hybrid inviability in tetrapods, specifically mammals, birds, and frogs. Recognizing that hybrid viability decreases with time, the researchers used molecular clocks to quantify divergence time. They identified how long ago the common ancestor of hybridizing species diverged into two lines, and found that bird and frog species can produce viable hybrids up to twenty million years after speciation. In addition, the researchers showed that mammal species can only produce viable hybrids up to two or three million years after speciation. Wilson et al. (1974) proposes two hypotheses to explain the relatively faster evolution of hybrid inviability in mammals: the Regulatory and the Immunological Hypotheses. Subsequent research finds support for these hypotheses. The Regulatory Hypothesis accounts for two Document 1::: In biology, offspring are the young creation of living organisms, produced either by a single organism or, in the case of sexual reproduction, two organisms. Collective offspring may be known as a brood or progeny in a more general way. This can refer to a set of simultaneous offspring, such as the chicks hatched from one clutch of eggs, or to all the offspring, as with the honeybee. Human offspring (descendants) are referred to as children (without reference to age, thus one can refer to a parent's "minor children" or "adult children" or "infant children" or "teenage children" depending on their age); male children are sons and female children are daughters (see kinship). Offspring can occur after mating or after artificial insemination. Overview Offspring contains many parts and properties that are precise and accurate in what they consist of, and what they define. As the offspring of a new species, also known as a child or f1 generation, consist of genes of the father and the mother, which is also known as the parent generation. Each of these offspring contains numerous genes which have coding for specific tasks and properties. Males and females both contribute equally to the genotypes of their offspring, in which gametes fuse and form. An important aspect of the formation of the parent offspring is the chromosome, which is a structure of DNA which contains many genes. 
To focus more on the offspring and how it results in the formation of the f1 generation, is an inheritance called sex linkage, which is a gene located on the sex chromosome, and patterns of this inheritance differ in both male and female. The explanation that proves the theory of the offspring having genes from both parent generations is proven through a process called crossing over, which consists of taking genes from the male chromosomes and genes from the female chromosome, resulting in a process of meiosis occurring, and leading to the splitting of the chromosomes evenly. Depending on which Document 2::: In biology, a hybrid is the offspring resulting from combining the qualities of two organisms of different varieties, species or genera through sexual reproduction. Generally, it means that each cell has genetic material from two different organisms, whereas an individual where some cells are derived from a different organism is called a chimera. Hybrids are not always intermediates between their parents (such as in blending inheritance), but can show hybrid vigor, sometimes growing larger or taller than either parent. The concept of a hybrid is interpreted differently in animal and plant breeding, where there is interest in the individual parentage. In genetics, attention is focused on the numbers of chromosomes. In taxonomy, a key question is how closely related the parent species are. Species are reproductively isolated by strong barriers to hybridization, which include genetic and morphological differences, differing times of fertility, mating behaviors and cues, and physiological rejection of sperm cells or the developing embryo. Some act before fertilization and others after it. Similar barriers exist in plants, with differences in flowering times, pollen vectors, inhibition of pollen tube growth, somatoplastic sterility, cytoplasmic-genic male sterility and the structure of the chromosomes. A few animal species and many plant species, however, are the result of hybrid speciation, including important crop plants such as wheat, where the number of chromosomes has been doubled. Human impact on the environment has resulted in an increase in the interbreeding between regional species, and the proliferation of introduced species worldwide has also resulted in an increase in hybridization. This genetic mixing may threaten many species with extinction, while genetic erosion from monoculture in crop plants may be damaging the gene pools of many species for future breeding. A form of often intentional human-mediated hybridization is the crossing of wild and domestic Document 3::: Hybrid incompatibility is a phenomenon in plants and animals, wherein offspring produced by the mating of two different species or populations have reduced viability and/or are less able to reproduce. Examples of hybrids include mules and ligers from the animal world, and subspecies of the Asian rice crop Oryza sativa from the plant world. Multiple models have been developed to explain this phenomenon. Recent research suggests that the source of this incompatibility is largely genetic, as combinations of genes and alleles prove lethal to the hybrid organism. Incompatibility is not solely influenced by genetics, however, and can be affected by environmental factors such as temperature. The genetic underpinnings of hybrid incompatibility may provide insight into factors responsible for evolutionary divergence between species. 
Background Hybrid incompatibility occurs when the offspring of two closely related species are not viable or suffer from infertility. Charles Darwin posited that hybrid incompatibility is not a product of natural selection, stating that the phenomenon is an outcome of the hybridizing species diverging, rather than something that is directly acted upon by selective pressures. The underlying causes of the incompatibility can be varied: earlier research focused on things like changes in ploidy in plants. More recent research has taken advantage of improved molecular techniques and has focused on the effects of genes and alleles in the hybrid and its parents. Dobzhansky-Muller model The first major breakthrough in the genetic basis of hybrid incompatibility is the Dobzhansky-Muller model, a combination of findings by Theodosius Dobzhansky and Joseph Muller between 1937 and 1942. The model provides an explanation as to why a negative fitness effect like hybrid incompatibility is not selected against. By hypothesizing that the incompatibility arose from alterations at two or more loci, rather than one, the incompatible alleles are in one hybrid in Document 4::: 46,XX/46,XY is a chimeric genetic condition characterized by the presence of some cells that express a 46,XX karyotype and some cells that express a 46,XY karyotype in a single human being. The cause of the condition lies in utero with the aggregation of two distinct blastocysts or zygotes (one of which expresses 46,XX and the other of which expresses 46,XY) into a single embryo, which subsequently leads to the development of a single individual with two distinct cell lines, instead of a pair of fraternal twins. 46,XX/46,XY chimeras are the result of the merging of two non-identical twins. This is not to be confused with mosaicism or hybridism, neither of which are chimeric conditions. Since individuals with the condition have two cell lines of the opposite sex, it can also be considered an intersex condition. In humans, sexual dimorphism is a consequence of the XY sex-determination system. In normal prenatal sex differentiation, the male and female embryo is anatomically identical until week 7 of the pregnancy, when the presence or the absence of the SRY gene on the Y chromosome causes the undetermined gonadal tissue to undergo differentiation and eventually become a pair of testes or ovaries respectively. The cells of the developing testes produce anti-müllerian hormone (AMH) and androgens, causing the reproductive tract and the genitals of the fetus to differentiate. As individuals with 46,XX/46,XY partially express the SRY gene, the normal process by which an embryo normally develops into a phenotypic male or phenotypic female may be significantly affected causing variation in the gonads, the reproductive tract and the genitals. Despite this, there have been cases of completely normal sex differentiation occurring in 46,XX/46,XY individuals reported in the medical literature. 46,XX/46,XY chimerism can be identified during pregnancy by prenatal screening or in early childhood through genetic testing and direct observation. The rate of incidence is difficult to The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. If humans were to artificially intervene and fertilize the egg of a bald eagle with the sperm of an african fish eagle and a chick did hatch, that offspring, called a hybrid (a cross between two species), would probably be this? A. mutated B. fertile C. 
infertile D. nonviable Answer:
sciq-9760
multiple_choice
Some living things on the ocean floor are sources of what human necessity?
[ "hormones", "medicines", "steroids", "pesticides" ]
B
Relavent Documents: Document 0::: Marine technology is defined by WEGEMT (a European association of 40 universities in 17 countries) as "technologies for the safe use, exploitation, protection of, and intervention in, the marine environment." In this regard, according to WEGEMT, the technologies involved in marine technology are the following: naval architecture, marine engineering, ship design, ship building and ship operations; oil and gas exploration, exploitation, and production; hydrodynamics, navigation, sea surface and sub-surface support, underwater technology and engineering; marine resources (including both renewable and non-renewable marine resources); transport logistics and economics; inland, coastal, short sea and deep sea shipping; protection of the marine environment; leisure and safety. Education and training According to the Cape Fear Community College of Wilmington, North Carolina, the curriculum for a marine technology program provides practical skills and academic background that are essential in succeeding in the area of marine scientific support. Through a marine technology program, students aspiring to become marine technologists will become proficient in the knowledge and skills required of scientific support technicians. The educational preparation includes classroom instructions and practical training aboard ships, such as how to use and maintain electronic navigation devices, physical and chemical measuring instruments, sampling devices, and data acquisition and reduction systems aboard ocean-going and smaller vessels, among other advanced equipment. As far as marine technician programs are concerned, students learn hands-on to trouble shoot, service and repair four- and two-stroke outboards, stern drive, rigging, fuel & lube systems, electrical including diesel engines. Relationship to commerce Marine technology is related to the marine science and technology industry, also known as maritime commerce. The Executive Office of Housing and Economic Development (EOHED Document 1::: Aquatic science is the study of the various bodies of water that make up our planet including oceanic and freshwater environments. Aquatic scientists study the movement of water, the chemistry of water, aquatic organisms, aquatic ecosystems, the movement of materials in and out of aquatic ecosystems, and the use of water by humans, among other things. Aquatic scientists examine current processes as well as historic processes, and the water bodies that they study can range from tiny areas measured in millimeters to full oceans. Moreover, aquatic scientists work in Interdisciplinary groups. For example, a physical oceanographer might work with a biological oceanographer to understand how physical processes, such as tropical cyclones or rip currents, affect organisms in the Atlantic Ocean. Chemists and biologists, on the other hand, might work together to see how the chemical makeup of a certain body of water affects the plants and animals that reside there. Aquatic scientists can work to tackle global problems such as global oceanic change and local problems, such as trying to understand why a drinking water supply in a certain area is polluted. There are two main fields of study that fall within the field of aquatic science. These fields of study include oceanography and limnology. Oceanography Oceanography refers to the study of the physical, chemical, and biological characteristics of oceanic environments. 
Oceanographers study the history, current condition, and future of the planet's oceans. They also study marine life and ecosystems, ocean circulation, plate tectonics, the geology of the seafloor, and the chemical and physical properties of the ocean. Oceanography is interdisciplinary. For example, there are biological oceanographers and marine biologists. These scientists specialize in marine organisms. They study how these organisms develop, their relationship with one another, and how they interact and adapt to their environment. Biological oceanographers Document 2::: The Malaspina circumnavigation expedition was an interdisciplinary research project to assess the impact of global change on the oceans and explore their biodiversity. The 250 scientists on board the Hespérides and Sarmiento de Gamboa embarked on an eight-month expedition (starting in December 2010) scientific research with training for young researchers - advancing marine science and fostering the public understanding of science. The project was under the umbrella of the Spanish Ministry of Science and Innovation's Consolider – Ingenio 2010 programme and was led by the Spanish National Research Council (CSIC) with the support of the Spanish Navy. It is named after the original scientific Malaspina Expedition between 1789 and 1794, that was commanded by Alejandro Malaspina. Due to Malaspina's involvement in a conspiracy to overthrow the Spanish government, he was jailed upon his return and a large part of the expedition's reports and collections were put away unpublished, not to see the light again until late in the 20th century. Objectives Assessing the impact of global change on the oceans Global change relates to the impact of human activities on the functioning of the biosphere. These include activities which, although performed locally, have effects on the functioning of the earth's system as a whole. The ocean plays a central role in regulating the planet's climate and is its biggest sink of and other substances produced by human activity. The project will put together Colección Malaspina 2010, a collection of environmental and biological data and samples which will be available to the scientific community for it to evaluate the impacts of future global changes. This will be particularly valuable, for example, when new technologies allow levels of pollutants below current thresholds of detection to be evaluated. Exploring the biodiversity of the deep ocean Half the Earth's surface is covered by oceans over 3,000 metres deep, making them the biggest Document 3::: The Oyster Question: Scientists, Watermen, and the Maryland Chesapeake Bay since 1880 is a 2009 book by Christine Keiner. It examines the conflict between oystermen and scientists in the Chesapeake Bay from the end of the nineteenth century to the present, which includes the period of the so-called "Oyster Wars" and the precipitous decline of the oyster industry at the end of the twentieth century. The book engages the myth of the "Tragedy of the Commons" by examining the often fraught relationship between local politics and conservation science, arguing that for most of the period Maryland's state political system gave rural oystermen more political clout than politicians and the scientists they appointed and allowing oystermen to effectively manage the oyster bed commons. 
Only towards the end of the twentieth century did reapportionment bring suburban and urban interests more political power, by which time they had latched on to oystermen as elements of the area's heritage and incorporated them and the oysters into broader conservation efforts. An important theme is the "intersection[] of scientific knowledge with experiential knowledge in the context of use," in that Keiner "treats the knowledge of the Chesapeake Bay’s oystermen alongside that of biologists." "Through her analysis, Keiner effectively reframes how environmental historians have analyzed histories of common resources and provides a working model for integrating historical and ecological information to bridge the histories of science and environmental history." Awards The book won the 2010 Forum for the History of Science in America Prize. It shared the 2010 Maryland Historical Trust's Heritage Book Award, and received an Honorable Mention for the Frederick Jackson Turner Award from the Organization of American Historians in 2010. Document 4::: Ethnoichthyology is an area in anthropology that examines human knowledge of fish, the uses of fish, and importance of fish in different human societies. It draws on knowledge from many different areas including ichthyology, economics, oceanography, and marine botany. This area of study seeks to understand the details of the interactions of humans with fish, including both cognitive and behavioural aspects. A knowledge of fish and their life strategies is extremely important to fishermen. In order to conserve fish species, it is also important to be aware of other cultures' knowledge of fish. Ignorance of the effects of human activity on fish populations may endanger fish species. Knowledge of fish can be gained through experience, scientific research, or information passed down through generations. Some factors that affect the amount of knowledge acquired include the value and abundance of the various types of fish, their usefulness in fisheries, and the amount of time one spends observing the fishes' life history patterns. Etymology The term was first used in the scientific literature by W.T. Morrill. He justified the origin and use of this term by stating that it arose from the model of "ethnobotany". Importance in conservation Ethnoichthyology can be very useful to the study and investigation of environmental changes caused by anthropogenic factors, such as the decline of fish stocks, the disappearance of fish species, and the introduction of non-native species of fish in certain environments. Ethnoichthyological knowledge can be used to create environmental conservation strategies. With a sound knowledge of fish ecology, informed decisions with respect to fishing practices can be made, and destructive environmental practices can be avoided. Ethnoichthyological knowledge can be the difference between conserving a species of fish, or placing a moratorium on fishing. Newfoundland's cod fishery collapse The collapse of the cod fishery in Newfoundland and Lab The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Some living things on the ocean floor are sources of what human necessity? A. hormones B. medicines C. steroids D. pesticides Answer:
sciq-7819
multiple_choice
What is the only marsupial in North America?
[ "Raccoon", "oppossum", "Marmosa", "Didelphinae" ]
B
Relavent Documents: Document 0::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 1::: Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals. Education Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered. Bachelor degree At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. 
Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs. Pre-veterinary emphasis Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th Document 2::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 3::: Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices". This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions. Topic outline The exam covers the following 8 units. 
The percentage indicates the portion of the multiple-choice section of the exam focused on each content area: The course is based on and tests six skills, called scientific practices which include: In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions. Exam Students are allowed to use a four-function, scientific, or graphing calculator. The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score. Score distribution Commonly used textbooks Biology, AP Edition by Sylvia Mader (2012, hardcover ) Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, ) Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson ) See also Glossary of biology A.P Bio (TV Show) Document 4::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the only marsupial in north america? A. Raccoon B. oppossum C. Marmosa D. Didelphinae Answer:
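For concreteness, here is a minimal sketch of the raw-score arithmetic described in the SAT Subject Test excerpt above (one point per correct answer, minus a quarter point per wrong answer, zero for blanks). The particular answer counts are assumed for illustration only; the 80-question total and the 200-800 scaled range come from the excerpt, but the scaling table itself is not given there, so only the raw score is computed.

# Raw-score arithmetic for the scoring rule quoted above; the answer counts are
# assumed example numbers, not taken from the text.
correct, wrong, blank = 60, 12, 8              # assumed breakdown of the 80 questions
raw_score = correct * 1.0 - wrong * 0.25 + blank * 0.0
print(f"Raw score: {raw_score}")               # 57.0, later converted to the 200-800 scale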
sciq-10896
multiple_choice
Composed largely of the polysaccharide chitin, the exoskeleton provides an effective barrier defense against most what?
[ "parasites", "vaccines", "white blood cells", "pathogens" ]
D
Relavent Documents: Document 0::: An exoskeleton (from Greek éxō "outer" and skeletós "skeleton") is an external skeleton that both supports the body shape and protects the internal organs of an animal, in contrast to an internal endoskeleton (e.g. that of a human) which is enclosed under other soft tissues. Some large, hard protective exoskeletons are known as "shells". Examples of exoskeletons in animals include the arthropod exoskeleton shared by arthropods (insects, chelicerates, myriapods and crustaceans) and tardigrades, as well as the outer shell of certain sponges and the mollusc shell shared by snails, clams, tusk shells, chitons and nautilus. Some vertebrate animals, such as the turtle, have both an endoskeleton and a protective exoskeleton. Role Exoskeletons contain rigid and resistant components that fulfil a set of functional roles in many animals including protection, excretion, sensing, support, feeding, and acting as a barrier against desiccation in terrestrial organisms. Exoskeletons have roles in defence from pests and predators and in providing an attachment framework for musculature. Arthropod exoskeletons contain chitin; the addition of calcium carbonate makes them harder and stronger, at the price of increased weight. Ingrowths of the arthropod exoskeleton known as apodemes serve as attachment sites for muscles. These structures are composed of chitin and are approximately six times stronger and twice the stiffness of vertebrate tendons. Similar to tendons, apodemes can stretch to store elastic energy for jumping, notably in locusts. Calcium carbonates constitute the shells of molluscs, brachiopods, and some tube-building polychaete worms. Silica forms the exoskeleton in the microscopic diatoms and radiolaria. One mollusc species, the scaly-foot gastropod, even uses the iron sulfides greigite and pyrite. Some organisms, such as some foraminifera, agglutinate exoskeletons by sticking grains of sand and shell to their exterior. Contrary to a common misconception, echinoder Document 1::: Arthropods are covered with a tough, resilient integument or exoskeleton of chitin. Generally the exoskeleton will have thickened areas in which the chitin is reinforced or stiffened by materials such as minerals or hardened proteins. This happens in parts of the body where there is a need for rigidity or elasticity. Typically the mineral crystals, mainly calcium carbonate, are deposited among the chitin and protein molecules in a process called biomineralization. The crystals and fibres interpenetrate and reinforce each other, the minerals supplying the hardness and resistance to compression, while the chitin supplies the tensile strength. Biomineralization occurs mainly in crustaceans. In insects and arachnids, the main reinforcing materials are various proteins hardened by linking the fibres in processes called sclerotisation and the hardened proteins are called sclerotin. The dorsal tergum, ventral sternum, and the lateral pleura form the hardened plates or sclerites of a typical body segment. In either case, in contrast to the carapace of a tortoise or the cranium of a vertebrate, the exoskeleton has little ability to grow or change its form once it has matured. Except in special cases, whenever the animal needs to grow, it moults, shedding the old skin after growing a new skin from beneath. Microscopic structure A typical arthropod exoskeleton is a multi-layered structure with four functional regions: epicuticle, procuticle, epidermis and basement membrane. 
Of these, the epicuticle is a multi-layered external barrier that, especially in terrestrial arthropods, acts as a barrier against desiccation. The strength of the exoskeleton is provided by the underlying procuticle, which is in turn secreted by the epidermis. Arthropod cuticle is a biological composite material, consisting of two main portions: fibrous chains of alpha-chitin within a matrix of silk-like and globular proteins, of which the best-known is the rubbery protein called resilin. The rel Document 2::: Pleuran is an insoluble polysaccharide (β-(1,3/1,6)-D-glucan), isolated from Pleurotus ostreatus. Pleuran belongs to a group of glucose polymers commonly called beta-glucans demonstrating biological response modifier properties. These immunomodulating properties render the host more resistant to infections and neoplasms. In a study published in December 2010, pleuran demonstrated to have a protective effect against exercise-induced suppression of immune cell activity (NK cells) in subjects taking 100 mg per day. In another study published in 2011, pleuran reduced the incidence of upper respiratory tract infections and increased the number of circulating NK cells. Pleuran is also being studied as a potential immunologic adjuvant. Document 3::: Chitin (C8H13O5N)n ( ) is a long-chain polymer of N-acetylglucosamine, an amide derivative of glucose. Chitin is probably the second most abundant polysaccharide in nature (behind only cellulose); an estimated 1 billion tons of chitin are produced each year in the biosphere. It is a primary component of cell walls in fungi (especially filamentous and mushroom forming fungi), the exoskeletons of arthropods such as crustaceans and insects, the radulae, cephalopod beaks and gladii of molluscs and in some nematodes and diatoms. It is also synthesised by at least some fish and lissamphibians. Commercially, chitin is extracted from the shells of crabs, shrimps, shellfish and lobsters, which are major by-products of the seafood industry. The structure of chitin is comparable to cellulose, forming crystalline nanofibrils or whiskers. It is functionally comparable to the protein keratin. Chitin has proved useful for several medicinal, industrial and biotechnological purposes. Etymology The English word "chitin" comes from the French word chitine, which was derived in 1821 from the Greek word χιτών (khitōn) meaning covering. A similar word, "chiton", refers to a marine animal with a protective shell. Chemistry, physical properties and biological function The structure of chitin was determined by Albert Hofmann in 1929. Hofmann hydrolyzed chitin using a crude preparation of the enzyme chitinase, which he obtained from the snail Helix pomatia. Chitin is a modified polysaccharide that contains nitrogen; it is synthesized from units of N-acetyl-D-glucosamine (to be precise, 2-(acetylamino)-2-deoxy-D-glucose). These units form covalent β-(1→4)-linkages (like the linkages between glucose units forming cellulose). Therefore, chitin may be described as cellulose with one hydroxyl group on each monomer replaced with an acetyl amine group. This allows for increased hydrogen bonding between adjacent polymers, giving the chitin-polymer matrix increased strength. In its pure, unmod Document 4::: In biology, the extracellular matrix (ECM), is a network consisting of extracellular macromolecules and minerals, such as collagen, enzymes, glycoproteins and hydroxyapatite that provide structural and biochemical support to surrounding cells. 
Because multicellularity evolved independently in different multicellular lineages, the composition of ECM varies between multicellular structures; however, cell adhesion, cell-to-cell communication and differentiation are common functions of the ECM. The animal extracellular matrix includes the interstitial matrix and the basement membrane. Interstitial matrix is present between various animal cells (i.e., in the intercellular spaces). Gels of polysaccharides and fibrous proteins fill the interstitial space and act as a compression buffer against the stress placed on the ECM. Basement membranes are sheet-like depositions of ECM on which various epithelial cells rest. Each type of connective tissue in animals has a type of ECM: collagen fibers and bone mineral comprise the ECM of bone tissue; reticular fibers and ground substance comprise the ECM of loose connective tissue; and blood plasma is the ECM of blood. The plant ECM includes cell wall components, like cellulose, in addition to more complex signaling molecules. Some single-celled organisms adopt multicellular biofilms in which the cells are embedded in an ECM composed primarily of extracellular polymeric substances (EPS). Structure Components of the ECM are produced intracellularly by resident cells and secreted into the ECM via exocytosis. Once secreted, they then aggregate with the existing matrix. The ECM is composed of an interlocking mesh of fibrous proteins and glycosaminoglycans (GAGs). Proteoglycans Glycosaminoglycans (GAGs) are carbohydrate polymers and mostly attached to extracellular matrix proteins to form proteoglycans (hyaluronic acid is a notable exception; see below). Proteoglycans have a net negative charge that attracts positively charged sod The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Composed largely of the polysaccharide chitin, the exoskeleton provides an effective barrier defense against most what? A. parasites B. vaccines C. white blood cells D. pathogens Answer:
sciq-4752
multiple_choice
What is the series of changes in the reproductive system of a mature female that happens monthly?
[ "urinary cycle", "menstrual cycle", "pregnancy", "fetal cycle" ]
B
Relavent Documents: Document 0::: Menarche ( ; ) is the first menstrual cycle, or first menstrual bleeding, in female humans. From both social and medical perspectives, it is often considered the central event of female puberty, as it signals the possibility of fertility. Girls experience menarche at different ages. Having menarche occur between the ages of 9–14 in the West is considered normal. Canadian psychological researcher Niva Piran claims that menarche or the perceived average age of puberty is used in many cultures to separate girls from activity with boys, and to begin transition into womanhood. The timing of menarche is influenced by female biology, as well as genetic and environmental factors, especially nutritional factors. The mean age of menarche has declined over the last century, but the magnitude of the decline and the factors responsible remain subjects of contention. The worldwide average age of menarche is very difficult to estimate accurately, and it varies significantly by geographical region, race, ethnicity and other characteristics, and occurs mostly during a span of ages from 8 to 16, with a small percentage of girls having menarche by age 10, and the vast majority having it by the time they were 14. There is a later age of onset in Asian populations compared to the West, but it too is changing with time. For example a Korean study in 2011 showed an overall average age of 12.7, with around 20% before age 12, and more than 90% by age 14. A Chinese study from 2014 published in Acta Paediatrica showed similar results (overall average of age 12.8 in 2005 down to age 12.3 in 2014) and a similar trend in time, but also similar findings about ethnic, cultural, and environmental effects. The average age of menarche was about 12.7 years in Canada in 2001, and 12.9 in the United Kingdom. A study of girls in Istanbul, Turkey, in 2011 found the median age at menarche to be 12.7 years. In the United States, an analysis of 10,590 women aged 15–44 taken from the 2013–2017 round of th Document 1::: Seed cycling is the rotation of different edible seeds into the diet at different times in the menstrual cycle. Practitioners believe that since some seeds promote estrogen production, and others promote progesterone production, that eating these seeds in the correct parts of the menstrual cycle will balance the hormonal rhythm. There is no scientific evidence to support the belief that cycling the seeds actually regulates the hormonal rhythm, but the practice is probably harmless. Overview Seed cycling advocates note that the menstrual cycle is broken up into four interconnected phases. The first phase is menstruation, followed by the follicular phase, then ovulation, then the luteal phase. Assuming a 28-day cycle, the first 13 days represent the menstrual and follicular phases, in which day 1 is when menstruation begins. During day-13, the seed cycling diet suggests consuming either flax or pumpkin seeds daily to boost estrogen, which helps support these phases and the move towards ovulation. Days 14-28 represent the ovulatory phase and luteal phase, with ovulation around day 14. The seed cycling diet suggests sesame or sunflower seeds to boost progesterone at this time, ground up to increase the surface area for absorption of the essential fatty acids, minerals, and other nutrients. The seed cycling diet relies on the belief that most women have a 28-day cycle. However, only 10-15% of women have 28-30 day cycles; most women's cycles vary, or run longer or shorter. 
For women with irregular or absent cycle, menopause, or post-menopause, the seed cycling diet suggests starting the seed cycle with any two weeks, and then rotating. However, many women who track their cycles through symptothermal methods (e.g. Basal Body Temperature and cervical mucus) are able to adapt the seed cycling protocol to their individual cycle and therefore do not need to rely on the belief that women have 28-day cycles. Research There is currently a lack of solid scientific eviden Document 2::: Involution is the shrinking or return of an organ to a former size. At a cellular level, involution is characterized by the process of proteolysis of the basement membrane (basal lamina), leading to epithelial regression and apoptosis, with accompanying stromal fibrosis. The consequent reduction in cell number and reorganization of stromal tissue leads to the reduction in the size of the organ. Examples Thymus The thymus continues to grow between birth and puberty and then begins to atrophy, a process directed by the high levels of circulating sex hormones. Proportional to thymic size, thymic activity (T cell output) is most active before puberty. Upon atrophy, the size and activity are dramatically reduced, and the organ is primarily replaced with fat. The atrophy is due to the increased circulating level of sex hormones, and chemical or physical castration of an adult results in the thymus increasing in size and activity. Uterus Involution is the process by which the uterus is transformed from pregnant to non-pregnant state. This period is characterized by the restoration of ovarian function in order to prepare the body for a new pregnancy. It is a physiological process occurring after parturition; the hypertrophy of the uterus has to be undone since it does not need to house the fetus anymore. This process is primarily due to the hormone oxytocin. The completion of this period is defined as when the diameter of the uterus returns to the size it is normally during a woman's menstrual cycle. Mammary gland During pregnancy until after birth, mammary glands grow steadily to a size required for optimal milk production. At the end of breastfeeding, the number of cells in the mammary gland becomes reduced until approximately the same number is reached as before the start of pregnancy. See also Subinvolution Document 3::: The corpus albicans (Latin for "whitening body"; also known as atretic corpus luteum, corpus candicans, or simply as albicans) is the regressed form of the corpus luteum. As the corpus luteum is being broken down by macrophages, fibroblasts lay down type I collagen, forming the corpus albicans. This process is called "luteolysis". The remains of the corpus albicans may persist as a scar on the surface of the ovary. Background During the first few hours after expulsion of the ovum from the follicle, the remaining granulosa and theca interna cells change rapidly into lutein cells. They enlarge in diameter two or more times and become filled with lipid inclusions that give them a yellowish appearance. This process is called luteinization, and the total mass of cells together is called the corpus luteum. A well-developed vascular supply also grows into the corpus luteum. The granulosa cells in the corpus luteum develop extensive intracellular smooth endoplasmic reticula that form large amounts of the female sex hormones progesterone and estrogen (more progesterone than estrogen during the luteal phase). The theca cells form mainly the androgens androstenedione and testosterone. 
These hormones may then be converted by aromatase in the granulosa cells into estrogens, including estradiol. The corpus luteum normally grows to about 1.5 centimeters in diameter, reaching this stage of development 7 to 8 days after ovulation. Then it begins to involute and eventually loses its secretory function and its yellowish, lipid characteristic about 12 days after ovulation, becoming the corpus albicans. In the ensuing weeks, this is replaced by connective tissue and over months is reabsorbed. Document 4::: Menstruation is the shedding of the uterine lining (endometrium). It occurs on a regular basis in uninseminated sexually reproductive-age females of certain mammal species. Although there is some disagreement in definitions between sources, menstruation is generally considered to be limited to primates. Overt menstruation (where there is bleeding from the uterus through the vagina) is found primarily in humans and close relatives such as chimpanzees. It is common in simians (Old World monkeys, New World monkeys, and apes), but completely lacking in strepsirrhine primates and possibly weakly present in tarsiers. Beyond primates, it is known only in bats, the elephant shrew, and the spiny mouse species Acomys cahirinus. Females of other species of placental mammal undergo estrous cycles, in which the endometrium is completely reabsorbed by the animal (covert menstruation) at the end of its reproductive cycle. Many zoologists regard this as different from a "true" menstrual cycle. Female domestic animals used for breeding—for example dogs, pigs, cattle, or horses—are monitored for physical signs of an estrous cycle period, which indicates that the animal is ready for insemination. Estrus and menstruation Females of most mammal species advertise fertility to males with visual behavioral cues, pheromones, or both. This period of advertised fertility is known as oestrus, "estrus" or heat. In species that experience estrus, females are generally only receptive to copulation while they are in heat (dolphins are an exception). In the estrous cycles of most placental mammals, if no fertilization takes place, the uterus reabsorbs the endometrium. This breakdown of the endometrium without vaginal discharge is sometimes called covert menstruation. Overt menstruation (where there is blood flow from the vagina) occurs primarily in humans and close evolutionary relatives such as chimpanzees. Some species, such as domestic dogs, experience small amounts of vaginal bleeding The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the series of changes in the reproductive system of a mature female that happens monthly? A. urinary cycle B. menstrual cycle C. pregnancy D. fetal cycle Answer:
sciq-126
multiple_choice
The difference between the theoretical half-reaction reduction potential and the actual voltage required is called what?
[ "overpotential", "resistance", "excess", "overcharge" ]
A
Relavent Documents: Document 0::: The values below are standard apparent reduction potentials for electro-biochemical half-reactions measured at 25 °C, 1 atmosphere and a pH of 7 in aqueous solution. The actual physiological potential depends on the ratio of the reduced (Red) and oxidized (Ox) forms according to the Nernst equation and the thermal voltage. When an oxidizer (Ox) accepts a number z of electrons (e−) to be converted into its reduced form (Red), the half-reaction is expressed as: Ox + z e− → Red. The reaction quotient (Qr) is the ratio of the chemical activity (ai) of the reduced form (the reductant, aRed) to the activity of the oxidized form (the oxidant, aOx). It is equal to the ratio of their concentrations (Ci) only if the system is sufficiently diluted and the activity coefficients (γi) are close to unity (ai = γi Ci): Qr = aRed/aOx ≈ CRed/COx. The Nernst equation is a function of Qr and can be written as follows: E = E° − (RT/zF) ln(Qr). At chemical equilibrium, the reaction quotient of the product activity (aRed) by the reagent activity (aOx) is equal to the equilibrium constant (K) of the half-reaction, and in the absence of driving force (ΔG = 0) the potential (E) also becomes null. The numerically simplified form of the Nernst equation is expressed as: E = E° − (0.059 V/z) log10(Qr). Where E° is the standard reduction potential of the half-reaction expressed versus the standard reduction potential of hydrogen. For standard conditions in electrochemistry (T = 25 °C, P = 1 atm and all concentrations being fixed at 1 mol/L, or 1 M) the standard reduction potential of hydrogen is fixed at zero by convention, as it serves as the reference. The standard hydrogen electrode (SHE), with [H+] = 1 M, thus works at pH = 0. At pH = 7, when [H+] = 10−7 M, the reduction potential of H+ differs from zero because it depends on pH. Solving the Nernst equation for the half-reaction of reduction of two protons into hydrogen gas (2 H+ + 2 e− → H2) gives: E = −(0.059 V) × pH ≈ −0.41 V at pH = 7. In biochemistry and in biological fluids, at pH = 7, it is thus important to note that the reduction potential of the protons (H+) into hydrogen gas is no longer zero Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response.
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: The Tafel equation is an equation in electrochemical kinetics relating the rate of an electrochemical reaction to the overpotential. The Tafel equation was first deduced experimentally and was later shown to have a theoretical justification. The equation is named after Swiss chemist Julius Tafel. "It describes how the electrical current through an electrode depends on the voltage difference between the electrode and the bulk electrolyte for a simple, unimolecular redox reaction". Where an electrochemical reaction occurs in two half reactions on separate electrodes, the Tafel equation is applied to each electrode separately. On a single electrode the Tafel equation can be stated as: i = i0 exp(±η/A), where the plus sign under the exponent refers to an anodic reaction, and a minus sign to a cathodic reaction; η: overpotential, V; A: "Tafel slope", V; i: current density, A/m2; i0: "exchange current density", A/m2. The Tafel equation is an approximation of the Butler-Volmer equation in the case of a sufficiently large overpotential, where the reverse-reaction term becomes negligible. "[The Tafel equation] assumes that the concentrations at the electrode are practically equal to the concentrations in the bulk electrolyte, allowing the current to be expressed as a function of potential only. In other words, it assumes that the electrode mass transfer rate is much greater than the reaction rate, and that the reaction is dominated by the slower chemical reaction rate". Also, at a given electrode the Tafel equation assumes that the reverse half reaction rate is negligible compared to the forward reaction rate. Overview of the terms The exchange current is the current at equilibrium, i.e. the rate at which oxidized and reduced species transfer electrons with the electrode. In other words, the exchange current density is the rate of reaction at the reversible potential (when the overpotential is zero by definition). At the reversible potential, the reaction is in equilibrium meaning that the Document 3::: In electrochemistry, the Nernst equation is a chemical thermodynamical relationship that permits the calculation of the reduction potential of a reaction (half-cell or full cell reaction) from the standard electrode potential, absolute temperature, the number of electrons involved in the redox reaction, and activities (often approximated by concentrations) of the chemical species undergoing reduction and oxidation respectively. It was named after Walther Nernst, a German physical chemist who formulated the equation.
Expression General form with chemical activities When an oxidizer (Ox) accepts a number z of electrons (e−) to be converted into its reduced form (Red), the half-reaction is expressed as: Ox + z e− → Red. The reaction quotient (Qr), also often called the ion activity product (IAP), is the ratio between the chemical activities (a) of the reduced form (the reductant, aRed) and the oxidized form (the oxidant, aOx). The chemical activity of a dissolved species corresponds to its true thermodynamic concentration taking into account the electrical interactions between all ions present in solution at elevated concentrations. For a given dissolved species, its chemical activity (a) is the product of its activity coefficient (γ) by its molar (mol/L solution), or molal (mol/kg water), concentration (C): a = γ C. So, if the concentrations (C, also denoted here below with square brackets [ ]) of all the dissolved species of interest are sufficiently low and their activity coefficients are close to unity, their chemical activities can be approximated by their concentrations as commonly done when simplifying, or idealizing, a reaction for didactic purposes: Qr = [Red]/[Ox]. At chemical equilibrium, the ratio of the activity of the reaction product (aRed) by the reagent activity (aOx) is equal to the equilibrium constant of the half-reaction: K = aRed/aOx. The standard thermodynamics also says that the actual Gibbs free energy is related to the free energy change under standard state by the relati Document 4::: In electrochemistry, the electrochemical potential (ECP) is a thermodynamic measure of chemical potential that does not omit the energy contribution of electrostatics. Electrochemical potential is expressed in the unit of J/mol. Introduction Each chemical species (for example, "water molecules", "sodium ions", "electrons", etc.) has an electrochemical potential (a quantity with units of energy) at any given point in space, which represents how easy or difficult it is to add more of that species to that location. If possible, a species will move from areas with higher electrochemical potential to areas with lower electrochemical potential; in equilibrium, the electrochemical potential will be constant everywhere for each species (it may have a different value for different species). For example, if a glass of water has sodium ions (Na+) dissolved uniformly in it, and an electric field is applied across the water, then the sodium ions will tend to get pulled by the electric field towards one side. We say the ions have electric potential energy, and are moving to lower their potential energy. Likewise, if a glass of water has a lot of dissolved sugar on one side and none on the other side, each sugar molecule will randomly diffuse around the water, until there is equal concentration of sugar everywhere. We say that the sugar molecules have a "chemical potential", which is higher in the high-concentration areas, and the molecules move to lower their chemical potential. These two examples show that an electrical potential and a chemical potential can both give the same result: A redistribution of the chemical species. Therefore, it makes sense to combine them into a single "potential", the electrochemical potential, which can directly give the net redistribution taking both into account.
It is (in principle) easy to measure whether or not two regions (for example, two glasses of water) have the same electrochemical potential for a certain chemical species (for examp The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The difference between the theoretical half-reaction reduction potential and the actual voltage required is called what? A. overpotential B. resistance C. excess D. overcharge Answer:
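To put illustrative numbers to the Nernst and Tafel relations quoted in the excerpts above, here is a minimal sketch. Only the physical constants and the standard forms of the two equations are taken as given; the applied electrode potential, the exchange current density i0, and the Tafel slope A are assumed example values, not taken from the excerpts.

import math

R = 8.314      # J/(mol K), gas constant
F = 96485.0    # C/mol, Faraday constant
T = 298.15     # K (25 degrees C)

# Nernst equation for the hydrogen half-reaction 2 H+ + 2 e- -> H2 (E0 = 0 V).
# At pH 7, [H+] = 1e-7 M and, with p_H2 = 1 atm, Qr = 1/[H+]**2.
z = 2
pH = 7.0
h_conc = 10.0 ** (-pH)
Qr = 1.0 / h_conc ** 2
E_eq = 0.0 - (R * T / (z * F)) * math.log(Qr)
print(f"Theoretical (equilibrium) potential at pH 7: {E_eq:.3f} V")   # about -0.414 V

# Overpotential: the extra voltage actually required beyond the theoretical value.
E_applied = -0.60                  # V, assumed applied electrode potential
eta = E_applied - E_eq             # negative for this cathodic example
print(f"Overpotential: {eta:.3f} V")

# Tafel equation, cathodic branch: current density driven by that overpotential,
# for an assumed exchange current density i0 and Tafel slope A.
i0 = 1e-3                          # A/m^2, assumed
A = 0.050                          # V, assumed (natural-log form of the slope)
i = i0 * math.exp(abs(eta) / A)
print(f"Tafel current density: {i:.3g} A/m^2")

With these assumed numbers the overpotential comes out to about -0.19 V; the pattern (actual voltage minus the theoretical, Nernst-derived value) is exactly the relationship the question above is probing.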
sciq-11626
multiple_choice
Processing of filtrate in the proximal tubule helps maintain what level in body fluid?
[ "ph", "temperature", "homeostasis", "metabolic level" ]
A
Relavent Documents: Document 0::: Body fluids, bodily fluids, or biofluids, sometimes body liquids, are liquids within the human body. In lean healthy adult men, the total body water is about 60% (60–67%) of the total body weight; it is usually slightly lower in women (52–55%). The exact percentage of fluid relative to body weight is inversely proportional to the percentage of body fat. A lean man, for example, has about 42 (42–47) liters of water in his body. The total body of water is divided into fluid compartments, between the intracellular fluid compartment (also called space, or volume) and the extracellular fluid (ECF) compartment (space, volume) in a two-to-one ratio: 28 (28–32) liters are inside cells and 14 (14–15) liters are outside cells. The ECF compartment is divided into the interstitial fluid volume – the fluid outside both the cells and the blood vessels – and the intravascular volume (also called the vascular volume and blood plasma volume) – the fluid inside the blood vessels – in a three-to-one ratio: the interstitial fluid volume is about 12 liters; the vascular volume is about 4 liters. The interstitial fluid compartment is divided into the lymphatic fluid compartment – about 2/3, or 8 (6–10) liters, and the transcellular fluid compartment (the remaining 1/3, or about 4 liters). The vascular volume is divided into the venous volume and the arterial volume; and the arterial volume has a conceptually useful but unmeasurable subcompartment called the effective arterial blood volume. Compartments by location intracellular fluid (ICF), which consist of cytosol and fluids in the cell nucleus Extracellular fluid Intravascular fluid (blood plasma) Interstitial fluid Lymphatic fluid (sometimes included in interstitial fluid) Transcellular fluid Health Body fluid is the term most often used in medical and health contexts. Modern medical, public health, and personal hygiene practices treat body fluids as potentially unclean. This is because they can be vectors for infectious Document 1::: The human body and even its individual body fluids may be conceptually divided into various fluid compartments, which, although not literally anatomic compartments, do represent a real division in terms of how portions of the body's water, solutes, and suspended elements are segregated. The two main fluid compartments are the intracellular and extracellular compartments. The intracellular compartment is the space within the organism's cells; it is separated from the extracellular compartment by cell membranes. About two-thirds of the total body water of humans is held in the cells, mostly in the cytosol, and the remainder is found in the extracellular compartment. The extracellular fluids may be divided into three types: interstitial fluid in the "interstitial compartment" (surrounding tissue cells and bathing them in a solution of nutrients and other chemicals), blood plasma and lymph in the "intravascular compartment" (inside the blood vessels and lymphatic vessels), and small amounts of transcellular fluid such as ocular and cerebrospinal fluids in the "transcellular compartment". The normal processes by which life self-regulates its biochemistry (homeostasis) produce fluid balance across the fluid compartments. Water and electrolytes are continuously moving across barriers (eg, cell membranes, vessel walls), albeit often in small amounts, to maintain this healthy balance. The movement of these molecules is controlled and restricted by various mechanisms. 
When illnesses upset the balance, electrolyte imbalances can result. The interstitial and intravascular compartments readily exchange water and solutes, but the third extracellular compartment, the transcellular, is thought of as separate from the other two and not in dynamic equilibrium with them. The science of fluid balance across fluid compartments has practical application in intravenous therapy, where doctors and nurses must predict fluid shifts and decide which IV fluids to give (for example, isot Document 2::: The Starling principle holds that extracellular fluid movements between blood and tissues are determined by differences in hydrostatic pressure and colloid osmotic (oncotic) pressure between plasma inside microvessels and interstitial fluid outside them. The Starling Equation, proposed many years after the death of Starling, describes that relationship in mathematical form and can be applied to many biological and non-biological semipermeable membranes. The classic Starling principle and the equation that describes it have in recent years been revised and extended. Every day around 8 litres of water (solvent) containing a variety of small molecules (solutes) leaves the blood stream of an adult human and perfuses the cells of the various body tissues. Interstitial fluid drains by afferent lymph vessels to one of the regional lymph node groups, where around 4 litres per day is reabsorbed to the blood stream. The remainder of the lymphatic fluid is rich in proteins and other large molecules and rejoins the blood stream via the thoracic duct which empties into the great veins close to the heart. Filtration from plasma to interstitial (or tissue) fluid occurs in microvascular capillaries and post-capillary venules. In most tissues the micro vessels are invested with a continuous internal surface layer that includes a fibre matrix now known as the endothelial glycocalyx whose interpolymer spaces function as a system of small pores, radius circa 5 nm. Where the endothelial glycocalyx overlies a gap in the junction molecules that bind endothelial cells together (inter endothelial cell cleft), the plasma ultrafiltrate may pass to the interstitial space, leaving larger molecules reflected back into the plasma. A small number of continuous capillaries are specialised to absorb solvent and solutes from interstitial fluid back into the blood stream through fenestrations in endothelial cells, but the volume of solvent absorbed every day is small. Discontinuous capillaries as Document 3::: In cell biology, extracellular fluid (ECF) denotes all body fluid outside the cells of any multicellular organism. Total body water in healthy adults is about 50–60% (range 45 to 75%) of total body weight; women and the obese typically have a lower percentage than lean men. Extracellular fluid makes up about one-third of body fluid, the remaining two-thirds is intracellular fluid within cells. The main component of the extracellular fluid is the interstitial fluid that surrounds cells. Extracellular fluid is the internal environment of all multicellular animals, and in those animals with a blood circulatory system, a proportion of this fluid is blood plasma. Plasma and interstitial fluid are the two components that make up at least 97% of the ECF. Lymph makes up a small percentage of the interstitial fluid. The remaining small portion of the ECF includes the transcellular fluid (about 2.5%). 
The ECF can also be seen as having two components – plasma and lymph as a delivery system, and interstitial fluid for water and solute exchange with the cells. The extracellular fluid, in particular the interstitial fluid, constitutes the body's internal environment that bathes all of the cells in the body. The ECF composition is therefore crucial for their normal functions, and is maintained by a number of homeostatic mechanisms involving negative feedback. Homeostasis regulates, among others, the pH, sodium, potassium, and calcium concentrations in the ECF. The volume of body fluid, blood glucose, oxygen, and carbon dioxide levels are also tightly homeostatically maintained. The volume of extracellular fluid in a young adult male of 70 kg (154 lbs) is 20% of body weight – about fourteen liters. Eleven liters are interstitial fluid and the remaining three liters are plasma. Components The main component of the extracellular fluid (ECF) is the interstitial fluid, or tissue fluid, which surrounds the cells in the body. The other major component of the ECF is the intravascula Document 4::: In renal physiology, the filtration fraction is the ratio of the glomerular filtration rate (GFR) over the renal plasma flow (RPF). Filtration Fraction, FF = GFR/RPF. The filtration fraction, therefore, represents the proportion of the fluid reaching the kidneys that passes into the renal tubules. It is normally about 20%. GFR on its own is the most common and important measure of renal function. However, in conditions such as renal artery stenosis, blood flow to the kidneys is reduced. Filtration fraction must therefore be increased in order to perform the normal functions of the kidney. Loop diuretics and thiazide diuretics decrease filtration fraction. Catecholamines (norepinephrine and epinephrine) increase filtration fraction by vasoconstriction of afferent and efferent arterioles, possibly through activation of alpha-1 adrenergic receptors. Severe hemorrhage will also result in an increased filtration fraction. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Processing of filtrate in the proximal tubule helps maintain what level in body fluid? A. pH B. temperature C. homeostasis D. metabolic level Answer:
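As a quick sanity check on the filtration-fraction formula in the excerpt above (FF = GFR/RPF, normally about 20%), here is the arithmetic with assumed textbook-typical values; the specific numbers below are illustrative and not taken from the excerpt.

# Filtration fraction with assumed typical values (illustrative only).
GFR = 125.0    # mL/min, assumed glomerular filtration rate
RPF = 625.0    # mL/min, assumed renal plasma flow
FF = GFR / RPF
print(f"Filtration fraction: {FF:.2f} ({FF:.0%})")   # 0.20, i.e. about 20%, matching the excerpt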
sciq-1664
multiple_choice
An antigen is a macromolecule that reacts with components of what?
[ "pulmonary system", "digestion system", "circulatory system", "immune system" ]
D
Relavent Documents: Document 0::: In immunology, an antigen (Ag) is a molecule, moiety, foreign particulate matter, or an allergen, such as pollen, that can bind to a specific antibody or T-cell receptor. The presence of antigens in the body may trigger an immune response. Antigens can be proteins, peptides (amino acid chains), polysaccharides (chains of simple sugars), lipids, or nucleic acids. Antigens exist on normal cells, cancer cells, parasites, viruses, fungi, and bacteria. Antigens are recognized by antigen receptors, including antibodies and T-cell receptors. Diverse antigen receptors are made by cells of the immune system so that each cell has a specificity for a single antigen. Upon exposure to an antigen, only the lymphocytes that recognize that antigen are activated and expanded, a process known as clonal selection. In most cases, antibodies are antigen-specific, meaning that an antibody can only react to and bind one specific antigen; in some instances, however, antibodies may cross-react to bind more than one antigen. The reaction between an antigen and an antibody is called the antigen-antibody reaction. Antigen can originate either from within the body ("self-protein" or "self antigens") or from the external environment ("non-self"). The immune system identifies and attacks "non-self" external antigens. Antibodies usually do not react with self-antigens due to negative selection of T cells in the thymus and B cells in the bone marrow. The diseases in which antibodies react with self antigens and damage the body's own cells are called autoimmune diseases. Vaccines are examples of antigens in an immunogenic form, which are intentionally administered to a recipient to induce the memory function of the adaptive immune system towards antigens of the pathogen invading that recipient. The vaccine for seasonal influenza is a common example. Etymology Paul Ehrlich coined the term antibody () in his side-chain theory at the end of the 19th century. In 1899, Ladislas Deutsch (László Detr Document 1::: An antibody (Ab), also known as an immunoglobulin (Ig), is a large, Y-shaped protein used by the immune system to identify and neutralize foreign objects such as pathogenic bacteria and viruses. The antibody recognizes a unique molecule of the pathogen, called an antigen. Each tip of the "Y" of an antibody contains a paratope (analogous to a lock) that is specific for one particular epitope (analogous to a key) on an antigen, allowing these two structures to bind together with precision. Using this binding mechanism, an antibody can tag a microbe or an infected cell for attack by other parts of the immune system, or can neutralize it directly (for example, by blocking a part of a virus that is essential for its invasion). To allow the immune system to recognize millions of different antigens, the antigen-binding sites at both tips of the antibody come in an equally wide variety. In contrast, the remainder of the antibody is relatively constant. In mammals, antibodies occur in a few variants, which define the antibody's class or isotype: IgA, IgD, IgE, IgG, and IgM. The constant region at the trunk of the antibody includes sites involved in interactions with other components of the immune system. The class hence determines the function triggered by an antibody after binding to an antigen, in addition to some structural features. Antibodies from different classes also differ in where they are released in the body and at what stage of an immune response. 
Together with B and T cells, antibodies comprise the most important part of the adaptive immune system. They occur in two forms: one that is attached to a B cell, and the other, a soluble form, that is unattached and found in extracellular fluids such as blood plasma. Initially, all antibodies are of the first form, attached to the surface of a B cell – these are then referred to as B-cell receptors (BCR). After an antigen binds to a BCR, the B cell activates to proliferate and differentiate into either plasma cells, Document 2::: Immunogenicity is the ability of a foreign substance, such as an antigen, to provoke an immune response in the body of a human or other animal. It may be wanted or unwanted: Wanted immunogenicity typically relates to vaccines, where the injection of an antigen (the vaccine) provokes an immune response against the pathogen, protecting the organism from future exposure. Immunogenicity is a central aspect of vaccine development. Unwanted immunogenicity is an immune response by an organism against a therapeutic antigen. This reaction leads to production of anti-drug-antibodies (ADAs), inactivating the therapeutic effects of the treatment and potentially inducing adverse effects. A challenge in biotherapy is predicting the immunogenic potential of novel protein therapeutics. For example, immunogenicity data from high-income countries are not always transferable to low-income and middle-income countries. Another challenge is considering how the immunogenicity of vaccines changes with age. Therefore, as stated by the World Health Organization, immunogenicity should be investigated in a target population since animal testing and in vitro models cannot precisely predict immune response in humans. Antigenicity is the capacity of a chemical structure (either an antigen or hapten) to bind specifically with a group of certain products that have adaptive immunity: T cell receptors or antibodies (a.k.a. B cell receptors). Antigenicity was more commonly used in the past to refer to what is now known as immunogenicity, and the two are still often used interchangeably. However, strictly speaking, immunogenicity refers to the ability of an antigen to induce an adaptive immune response. Thus an antigen might bind specifically to a T or B cell receptor, but not induce an adaptive immune response. If the antigen does induce a response, it is an 'immunogenic antigen', which is referred to as an immunogen. Antigenic immunogenic potency Many lipids and nucleic acids are relatively s Document 3::: Polyclonal B cell response is a natural mode of immune response exhibited by the adaptive immune system of mammals. It ensures that a single antigen is recognized and attacked through its overlapping parts, called epitopes, by multiple clones of B cell. In the course of normal immune response, parts of pathogens (e.g. bacteria) are recognized by the immune system as foreign (non-self), and eliminated or effectively neutralized to reduce their potential damage. Such a recognizable substance is called an antigen. The immune system may respond in multiple ways to an antigen; a key feature of this response is the production of antibodies by B cells (or B lymphocytes) involving an arm of the immune system known as humoral immunity. The antibodies are soluble and do not require direct cell-to-cell contact between the pathogen and the B-cell to function. Antigens can be large and complex substances, and any single antibody can only bind to a small, specific area on the antigen. 
Consequently, an effective immune response often involves the production of many different antibodies by many different B cells against the same antigen. Hence the term "polyclonal", which derives from the words poly, meaning many, and clones from Greek klōn, meaning sprout or twig; a clone is a group of cells arising from a common "mother" cell. The antibodies thus produced in a polyclonal response are known as polyclonal antibodies. The heterogeneous polyclonal antibodies are distinct from monoclonal antibody molecules, which are identical and react against a single epitope only, i.e., are more specific. Although the polyclonal response confers advantages on the immune system, in particular, greater probability of reacting against pathogens, it also increases chances of developing certain autoimmune diseases resulting from the reaction of the immune system against native molecules produced within the host. Humoral response to infection Diseases which can be transmitted from one organism to Document 4::: The adaptive immune system, also known as the acquired immune system, or specific immune system is a subsystem of the immune system that is composed of specialized, systemic cells and processes that eliminate pathogens or prevent their growth. The acquired immune system is one of the two main immunity strategies found in vertebrates (the other being the innate immune system). Like the innate system, the adaptive immune system includes both humoral immunity components and cell-mediated immunity components and destroys invading pathogens. Unlike the innate immune system, which is pre-programmed to react to common broad categories of pathogen, the adaptive immune system is highly specific to each particular pathogen the body has encountered. Adaptive immunity creates immunological memory after an initial response to a specific pathogen, and leads to an enhanced response to future encounters with that pathogen. Antibodies are a critical part of the adaptive immune system. Adaptive immunity can provide long-lasting protection, sometimes for the person's entire lifetime. For example, someone who recovers from measles is now protected against measles for their lifetime; in other cases it does not provide lifetime protection, as with chickenpox. This process of adaptive immunity is the basis of vaccination. The cells that carry out the adaptive immune response are white blood cells known as lymphocytes. B cells and T cells, two different types of lymphocytes, carry out the main activities: antibody responses, and cell-mediated immune response. In antibody responses, B cells are activated to secrete antibodies, which are proteins also known as immunoglobulins. Antibodies travel through the bloodstream and bind to the foreign antigen causing it to inactivate, which does not allow the antigen to bind to the host. Antigens are any substances that elicit the adaptive immune response. Sometimes the adaptive system is unable to distinguish harmful from harmless foreign molecule The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. An antigen is a macromolecule that reacts with components of what? A. pulmonary system B. digestion system C. circulatory system D. immune system Answer:
sciq-7040
multiple_choice
What is the method of evolution by which advantageous heritable traits become more common over generations?
[ "flow selection", "artificial selection", "natural selection", "same selection" ]
C
Relavent Documents: Document 0::: This is a list of topics in evolutionary biology. A abiogenesis – adaptation – adaptive mutation – adaptive radiation – allele – allele frequency – allochronic speciation – allopatric speciation – altruism – : anagenesis – anti-predator adaptation – applications of evolution – aposematism – Archaeopteryx – aquatic adaptation – artificial selection – atavism B Henry Walter Bates – biological organisation – Brassica oleracea – breed C Cambrian explosion – camouflage – Sean B. Carroll – catagenesis – gene-centered view of evolution – cephalization – Sergei Chetverikov – chronobiology – chronospecies – clade – cladistics – climatic adaptation – coalescent theory – co-evolution – co-operation – coefficient of relationship – common descent – convergent evolution – creation–evolution controversy – cultivar – conspecific song preference D Darwin (unit) – Charles Darwin – Darwinism – Darwin's finches – Richard Dawkins – directed mutagenesis – Directed evolution – directional selection – Theodosius Dobzhansky – dog breeding – domestication – domestication of the horse E E. coli long-term evolution experiment – ecological genetics – ecological selection – ecological speciation – Endless Forms Most Beautiful – endosymbiosis – error threshold (evolution) – evidence of common descent – evolution – evolutionary arms race – evolutionary capacitance Evolution: of ageing – of the brain – of cetaceans – of complexity – of dinosaurs – of the eye – of fish – of the horse – of insects – of human intelligence – of mammalian auditory ossicles – of mammals – of monogamy – of sex – of sirenians – of tetrapods – of the wolf evolutionary developmental biology – evolutionary dynamics – evolutionary game theory – evolutionary history of life – evolutionary history of plants – evolutionary medicine – evolutionary neuroscience – evolutionary psychology – evolutionary radiation – evolutionarily stable strategy – evolutionary taxonomy – evolutionary tree – evolvability – experimental evol Document 1::: Adaptive type – in evolutionary biology – is any population or taxon which have the potential for a particular or total occupation of given free of underutilized home habitats or position in the general economy of nature. In evolutionary sense, the emergence of new adaptive type is usually a result of adaptive radiation certain groups of organisms in which they arise categories that can effectively exploit temporary, or new conditions of the environment. Such evolutive units with its distinctive – morphological and anatomical, physiological and other characteristics, i.e. genetic and adjustments (feature) have a predisposition for an occupation certain home habitats or position in the general nature economy. Simply, the adaptive type is one group organisms whose general biological properties represent a key to open the entrance to the observed adaptive zone in the observed natural ecological complex. Adaptive types are spatially and temporally specific. Since the frames of general biological properties these types of substantially genetic are defined between, in effect the emergence of new adaptive types of the corresponding change in population genetic structure and eternal contradiction between the need for optimal adapted well the conditions of living environment, while maintaining genetic variation for survival in a possible new circumstances. For example, the specific place in the economy of nature existed millions of years before the appearance of human type. 
However, only when the evolution of primates (order Primates) reached a level capable of occupying that position did it open, and it has since spread through the living world with unprecedented acceleration. Culture, in the broadest sense, is the key adaptation by which the adaptive type Homo sapiens occupies its existing adaptive zone through work, also in the broadest sense of the term. Document 2::: Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed on to their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology. The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis. Subfields Evolution is the central unifying concept in biology. Biology can be divided in various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution Document 3::: Ecological inheritance occurs when organisms inhabit a modified environment that a previous generation created; it was first described in Odling-Smee (1988) and Odling-Smee et al. (1996) as a consequence of niche construction. Standard evolutionary theory focuses on the influence that natural selection and genetic inheritance have on biological evolution, when individuals that survive and reproduce also transmit genes to their offspring. If offspring do not live in a modified environment created by their parents, then niche construction activities of parents do not affect the selective pressures of their offspring (see orb-web spiders in Genetic inheritance vs. ecological inheritance below). However, when niche construction affects multiple generations (i.e., parents and offspring), ecological inheritance acts as an inheritance system different from genetic inheritance. Since ecological inheritance is a result of ecosystem engineering and niche construction, the fitness of several species and their subsequent generations experiences a selective pressure dependent on the modified environment they inherit. 
Organisms in subsequent generations will encounter ecological inheritance because they are affected by a new selective environment created by prior niche construction. On a macroevolutionary scale, ecological inheritance has been defined as, "the persistence of environmental modifications by a species over multiple generations to influence the evolution of that or other species." Ecological inheritance has also been defined as, "... the accumulation of environmental changes, such as altered soil, atmosphere or ocean states that previous generations have brought about through their niche-constructing activity, and that influence the development of descendant organisms." Related to niche construction and ecological inheritance are factors and features of an organism and environment, respectively, where the feature of an organism is synonymous with adaptation if natural se Document 4::: An acquired characteristic is a non-heritable change in a function or structure of a living organism caused after birth by disease, injury, accident, deliberate modification, variation, repeated use, disuse, misuse, or other environmental influence. Acquired traits are synonymous with acquired characteristics. They are not passed on to offspring through reproduction. The changes that constitute acquired characteristics can have many manifestations and degrees of visibility, but they all have one thing in common. They change a facet of a living organism's function or structure after birth. For example: The muscles acquired by a bodybuilder through physical training and diet. The loss of a limb due to an injury. The miniaturization of bonsai plants through careful cultivation techniques. Acquired characteristics can be minor and temporary like bruises, blisters, or shaving body hair. Permanent but inconspicuous or invisible ones are corrective eye surgery and organ transplant or removal. Semi-permanent but inconspicuous or invisible traits are vaccination and laser hair removal. Perms, tattoos, scars, and amputations are semi-permanent and highly visible. Applying makeup, nail polish, dying one's hair, applying henna to the skin, and tooth whitening are not examples of acquired traits. They change the appearance of a facet of an organism, but do not change the structure or functionality. Inheritance of acquired characteristics was historically proposed by renowned theorists such as Hippocrates, Aristotle, and French naturalist Jean-Baptiste Lamarck. Conversely, this hypothesis was denounced by other renowned theorists such as Charles Darwin. Today, although Lamarckism is generally discredited, there is still debate on whether some acquired characteristics in organisms are actually inheritable. Disputes Acquired characteristics, by definition, are characteristics that are gained by an organism after birth as a result of external influences or the organism's ow The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the method of evolution by which advantageous heritable traits become more common over generations? A. flow selection B. artificial selection C. natural selection D. same selection Answer:
sciq-4166
multiple_choice
One of the most advanced uses of what in medicine is the positron emission tomography scanner, which detects the activity in the body of a very small injection of radioactive glucose?
[ "quarks", "radioisotopes", "radionuclides", "membranes" ]
B
Relavent Documents: Document 0::: Medical physics deals with the application of the concepts and methods of physics to the prevention, diagnosis and treatment of human diseases with a specific goal of improving human health and well-being. Since 2008, medical physics has been included as a health profession according to International Standard Classification of Occupation of the International Labour Organization. Although medical physics may sometimes also be referred to as biomedical physics, medical biophysics, applied physics in medicine, physics applications in medical science, radiological physics or hospital radio-physics, a "medical physicist" is specifically a health professional with specialist education and training in the concepts and techniques of applying physics in medicine and competent to practice independently in one or more of the subfields of medical physics. Traditionally, medical physicists are found in the following healthcare specialties: radiation oncology (also known as radiotherapy or radiation therapy), diagnostic and interventional radiology (also known as medical imaging), nuclear medicine, and radiation protection. Medical physics of radiation therapy can involve work such as dosimetry, linac quality assurance, and brachytherapy. Medical physics of diagnostic and interventional radiology involves medical imaging techniques such as magnetic resonance imaging, ultrasound, computed tomography and x-ray. Nuclear medicine will include positron emission tomography and radionuclide therapy. However one can find Medical Physicists in many other areas such as physiological monitoring, audiology, neurology, neurophysiology, cardiology and others. Medical physics departments may be found in institutions such as universities, hospitals, and laboratories. University departments are of two types. The first type are mainly concerned with preparing students for a career as a hospital Medical Physicist and research focuses on improving the practice of the profession. A second type (in Document 1::: Brain positron emission tomography is a form of positron emission tomography (PET) that is used to measure brain metabolism and the distribution of exogenous radiolabeled chemical agents throughout the brain. PET measures emissions from radioactively labeled metabolically active chemicals that have been injected into the bloodstream. The emission data from brain PET are computer-processed to produce multi-dimensional images of the distribution of the chemicals throughout the brain. Process The positron emitting radioisotopes used are usually produced by a cyclotron, and chemicals are labeled with these radioactive atoms. The radioisotopes used in clinics are normally 18F (fluoride), 11C (carbon) and 15O (oxygen). The labeled compound, called a radiotracer or radioligand, is injected into the bloodstream and eventually makes its way to the brain through blood circulation. Detectors in the PET scanner detect the radioactivity as the compound charges in various regions of the brain. A computer uses the data gathered by the detectors to create multi-dimensional (normally 3-dimensional volumetric or 4-dimensional time-varying) images that show the distribution of the radiotracer in the brain following the time. Especially useful are a wide array of ligands used to map different aspects of neurotransmitter activity, with by far the most commonly used PET tracer being a labeled form of glucose, such as fluorodeoxyglucose (18F). 
Advantages and disadvantages The greatest benefit of PET scanning is that different compounds can show flow and oxygen, and glucose metabolism in the tissues of the working brain. These measurements reflect the amount of brain activity in the various regions of the brain and allow to learn more about how the brain works. PET scans were superior to all other metabolic imaging methods in terms of resolution and speed of completion (as little as 30 seconds), when they first became available. The improved resolution permitted better study to be Document 2::: TIME-ITEM is an ontology of Topics that describes the content of undergraduate medical education. TIME is an acronym for "Topics for Indexing Medical Education"; ITEM is an acronym for "Index de thèmes pour l’éducation médicale." Version 1.0 of the taxonomy has been released and the web application that allows users to work with it is still under development. Its developers are seeking more collaborators to expand and validate the taxonomy and to guide future development of the web application. History The development of TIME-ITEM began at the University of Ottawa in 2006. It was initially developed to act as a content index for a curriculum map being constructed there. After its initial presentation at the 2006 conference of the Canadian Association for Medical Education, early collaborators included the University of British Columbia, McMaster University and Queen's University. Features The TIME-ITEM ontology is unique in that it is designed specifically for undergraduate medical education. As such, it includes fewer strictly biomedical entries than other common medical vocabularies (such as MeSH or SNOMED CT) but more entries relating to the medico-social concepts of communication, collaboration, professionalism, etc. Topics within TIME-ITEM are arranged poly-hierarchically, meaning any Topic can have more than one parent. Relationships are established based on the logic that learning about a Topic contributes to the learning of all its parent Topics. In addition to housing the ontology of Topics, the TIME-ITEM web application can house multiple Outcome frameworks. All Outcomes, whether private Outcomes entered by single institutions or publicly available medical education Outcomes (such as CanMeds 2005) are hierarchically linked to one or more Topics in the ontology. In this way, the contribution of each Topic to multiple Outcomes is made explicit. The structure of the XML documents exported from TIME-ITEM (which contain the hierarchy of Outco Document 3::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. 
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 4::: Medical education is education related to the practice of being a medical practitioner, including the initial training to become a physician (i.e., medical school and internship) and additional training thereafter (e.g., residency, fellowship, and continuing medical education). Medical education and training varies considerably across the world. Various teaching methodologies have been used in medical education, which is an active area of educational research. Medical education is also the subject-didactic academic field of educating medical doctors at all levels, including entry-level, post-graduate, and continuing medical education. Specific requirements such as entrustable professional activities must be met before moving on in stages of medical education. Common techniques and evidence base Medical education applies theories of pedagogy specifically in the context of medical education. Medical education has been a leader in the field of evidence-based education, through the development of evidence syntheses such as the Best Evidence Medical Education collection, formed in 1999, which aimed to "move from opinion-based education to evidence-based education". Common evidence-based techniques include the Objective structured clinical examination (commonly known as the 'OSCE) to assess clinical skills, and reliable checklist-based assessments to determine the development of soft skills such as professionalism. However, there is a persistence of ineffective instructional methods in medical education, such as the matching of teaching to learning styles and Edgar Dales' "Cone of Learning". Entry-level education Entry-level medical education programs are tertiary-level courses undertaken at a medical school. Depending on jurisdiction and university, these may be either undergraduate-entry (most of Europe, Asia, South America and Oceania), or graduate-entry programs (mainly Australia, Philippines and North America). Some jurisdictions and universities provide both u The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. One of the most advanced uses of what in medicine is the positron emission tomography scanner, which detects the activity in the body of a very small injection of radioactive glucose? A. quarks B. radioisotopes C. radionuclides D. membranes Answer:
sciq-11658
multiple_choice
What is the study of macroscopic properties, atomic properties, and phenomena in chemical systems?
[ "differential chemistry", "physical chemistry", "thermal chemistry", "molecular chemistry" ]
B
Relavent Documents: Document 0::: Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered in both undergraduate as well postgraduate with specializations. Academic courses Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism. Example universities with CSE majors and departments APJ Abdul Kalam Technological University American International University-B Document 1::: A master's degree in quantitative finance concerns the application of mathematical methods to the solution of problems in financial economics. There are several like-titled degrees which may further focus on financial engineering, computational finance, mathematical finance, and/or financial risk management. In general, these degrees aim to prepare students for roles as "quants" (quantitative analysts), including analysis, structuring, trading, and investing; in particular, these degrees emphasize derivatives and fixed income, and the hedging and management of the resultant market and credit risk. Formal master's-level training in quantitative finance has existed since 1990. Structure The program is usually one to one and a half years in duration, and may include a thesis component. Entrance requirements are generally multivariable calculus, linear algebra, differential equations and some exposure to computer programming (usually C++); programs emphasizing financial mathematics may require some background in measure theory. Initially, the curriculum builds quantitative skills, and simultaneously develops the underlying finance theory: The quantitative component draws on applied mathematics, computer science and statistical modelling, and emphasizes stochastic calculus, numerical methods and simulation techniques; see . Some programs also focus on econometrics / time series analysis. 
The theory component usually includes a formal study of financial economics, addressing asset pricing and financial markets; some programs may also include general coverage of economics, accounting, corporate finance and portfolio management. The components are then integrated, addressing the modelling, valuation and hedging of equity derivatives, commodity derivatives, foreign exchange derivatives, and fixed income instruments and their related credit- and interest rate derivatives; see . Programs often include dedicated modules in market risk and credit risk, with some degree Document 2::: Physical biochemistry is a branch of biochemistry that deals with the theory, techniques, and methodology used to study the physical chemistry of biomolecules. It also deals with the mathematical approaches for the analysis of biochemical reaction and the modelling of biological systems. It provides insight into the structure of macromolecules, and how chemical structure influences the physical properties of a biological substance. It involves the use of physics, physical chemistry principles, and methodology to study biological systems. It employs various physical chemistry techniques such as chromatography, spectroscopy, Electrophoresis, X-ray crystallography, electron microscopy, and hydrodynamics. See also Physical chemistry Document 3::: The mathematical sciences are a group of areas of study that includes, in addition to mathematics, those academic disciplines that are primarily mathematical in nature but may not be universally considered subfields of mathematics proper. Statistics, for example, is mathematical in its methods but grew out of bureaucratic and scientific observations, which merged with inverse probability and then grew through applications in some areas of physics, biometrics, and the social sciences to become its own separate, though closely allied, field. Theoretical astronomy, theoretical physics, theoretical and applied mechanics, continuum mechanics, mathematical chemistry, actuarial science, computer science, computational science, data science, operations research, quantitative biology, control theory, econometrics, geophysics and mathematical geosciences are likewise other fields often considered part of the mathematical sciences. Some institutions offer degrees in mathematical sciences (e.g. the United States Military Academy, Stanford University, and University of Khartoum) or applied mathematical sciences (for example, the University of Rhode Island). See also Document 4::: Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work. Description Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. 
These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments. The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics. Specialized branches include engineering optimization and engineering statistics. Engineering mathematics in tertiary educ The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the study of macroscopic properties, atomic properties, and phenomena in chemical systems? A. differential chemistry B. physical chemistry C. thermal chemistry D. molecular chemistry Answer:
sciq-3775
multiple_choice
What type of chloride is a non-volatile material, but does not dissolve in water?
[ "silver chloride", "yellow chloride", "pink chloride", "lead chloride" ]
A
Relavent Documents: Document 0::: An ionic liquid (IL) is a salt in the liquid state. In some contexts, the term has been restricted to salts whose melting point is below a specific temperature, such as . While ordinary liquids such as water and gasoline are predominantly made of electrically neutral molecules, ionic liquids are largely made of ions. These substances are variously called liquid electrolytes, ionic melts, ionic fluids, fused salts, liquid salts, or ionic glasses. Ionic liquids have many potential applications. They are powerful solvents and can be used as electrolytes. Salts that are liquid at near-ambient temperature are important for electric battery applications, and have been considered as sealants due to their very low vapor pressure. Any salt that melts without decomposing or vaporizing usually yields an ionic liquid. Sodium chloride (NaCl), for example, melts at into a liquid that consists largely of sodium cations () and chloride anions (). Conversely, when an ionic liquid is cooled, it often forms an ionic solid—which may be either crystalline or glassy. The ionic bond is usually stronger than the Van der Waals forces between the molecules of ordinary liquids. Because of these strong interactions, salts tend to have high lattice energies, manifested in high melting points. Some salts, especially those with organic cations, have low lattice energies and thus are liquid at or below room temperature. Examples include compounds based on the 1-ethyl-3-methylimidazolium (EMIM) cation and include: EMIM:Cl, EMIMAc (acetate anion), EMIM dicyanamide, ()()·, that melts at ; and 1-butyl-3,5-dimethylpyridinium bromide which becomes a glass below . Low-temperature ionic liquids can be compared to ionic solutions, liquids that contain both ions and neutral molecules, and in particular to the so-called deep eutectic solvents, mixtures of ionic and non-ionic solid substances which have much lower melting points than the pure compounds. Certain mixtures of nitrate salts can have melt Document 1::: Binary liquid is a type of chemical combination, which creates a special reaction or feature as a result of mixing two liquid chemicals, that are normally inert or have no function by themselves. A number of chemical products are produced as a result of mixing two chemicals as a binary liquid, such as plastic foams and some explosives. See also Binary chemical weapon Thermophoresis Percus-Yevick equation Document 2::: Tin(II) chloride, also known as stannous chloride, is a white crystalline solid with the formula . It forms a stable dihydrate, but aqueous solutions tend to undergo hydrolysis, particularly if hot. SnCl2 is widely used as a reducing agent (in acid solution), and in electrolytic baths for tin-plating. Tin(II) chloride should not be confused with the other chloride of tin; tin(IV) chloride or stannic chloride (SnCl4). Chemical structure SnCl2 has a lone pair of electrons, such that the molecule in the gas phase is bent. In the solid state, crystalline SnCl2 forms chains linked via chloride bridges as shown. The dihydrate has three coordinates as well, with one water on the tin and another water on the first. The main part of the molecule stacks into double layers in the crystal lattice, with the "second" water sandwiched between the layers. 
Chemical properties Tin(II) chloride can dissolve in less than its own mass of water without apparent decomposition, but as the solution is diluted, hydrolysis occurs to form an insoluble basic salt: SnCl2 (aq) + H2O (l) ⇌ Sn(OH)Cl (s) + HCl (aq) Therefore, if clear solutions of tin(II) chloride are to be used, it must be dissolved in hydrochloric acid (typically of the same or greater molarity as the stannous chloride) to maintain the equilibrium towards the left-hand side (using Le Chatelier's principle). Solutions of SnCl2 are also unstable towards oxidation by the air: 6 SnCl2 (aq) + O2 (g) + 2 H2O (l) → 2 SnCl4 (aq) + 4 Sn(OH)Cl (s) This can be prevented by storing the solution over lumps of tin metal. There are many such cases where tin(II) chloride acts as a reducing agent, reducing silver and gold salts to the metal, and iron(III) salts to iron(II), for example: SnCl2 (aq) + 2 FeCl3 (aq) → SnCl4 (aq) + 2 FeCl2 (aq) It also reduces copper(II) to copper(I). Document 3::: Flake salt refers to a category of salt characterized by dry, plate-like ("lamellose") crystals. Their structure is a result of differing growth rates between the faces and edges of the crystal, an effect that can be achieved in various ways. Flake salt may occur naturally but can also be produced by a variety of methods, including boiling brine over metal salt pans or evaporating it in greenhouse solar evaporators. The technologies used as well as atmospheric conditions can yield varying crystal structures. Flake salts can form as irregular shavings, pyramidal shapes, boxes, or potato chip-like laminated crystals. These salts tend to have lower trace mineral content than other salts, giving them a stronger salty taste. Most form as thin, flattened-out crystals with a large surface area and low mass that give them a crunchy texture and relatively fast dissolution rate. Because of the salts' delicate structures, selmeliers tend to use them as finishing salts. See also Alberger process List of edible salts Fleur de sel Document 4::: Bitumen is an immensely viscous constituent of petroleum. Depending on its exact composition it can be a sticky, black liquid or an apparently solid mass that behaves as a liquid over very large time scales. In the U.S., the material is commonly referred to as asphalt. Whether found in natural deposits or refined from petroleum, the substance is classed as a pitch. Prior to the 20th century the term asphaltum was in general use. The word derives from the ancient Greek ἄσφαλτος ásphaltos, which referred to natural bitumen or pitch. The largest natural deposit of bitumen in the world, estimated to contain 10 million tons, is the Pitch Lake of southwest Trinidad. 70% of annual bitumen production is destined for road construction, its primary use. In this application bitumen is used to bind aggregate particles like gravel and forms a substance referred to as asphalt concrete, which is colloquially termed asphalt. Its other main uses lie in bituminous waterproofing products, such as roofing felt and roof sealant. In material sciences and engineering the terms "asphalt" and "bitumen" are often used interchangeably and refer both to natural and manufactured forms of the substance, although there is regional variation as to which term is most common. Worldwide, geologists tend to favor the term "bitumen" for the naturally occurring material. 
For the manufactured material, which is a refined residue from the distillation process of selected crude oils, "bitumen" is the prevalent term in much of the world; however, in American English, "asphalt" is more commonly used. To help avoid confusion, the phrases "liquid asphalt", "asphalt binder", or "asphalt cement" are used in the U.S. Colloquially, various forms of asphalt are sometimes referred to as "tar", as in the name of the La Brea Tar Pits. Naturally occurring bitumen is sometimes specified by the term "crude bitumen". Its viscosity is similar to that of cold molasses while the material obtained from the fractional di The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of chloride is a non-volatile material, but does not dissolve in water? A. silver chloride B. yellow chloride C. pink chloride D. lead chloride Answer:
sciq-7267
multiple_choice
What types of insect behaviors are controlled by genes?
[ "byproduct behaviors like flying and mating", "physical behaviors like flying and mating", "instinctive behaviors like flying and mating", "psychological behaviors like flying and mating" ]
C
Relavent Documents: Document 0::: Kin recognition, also called kin detection, is an organism's ability to distinguish between close genetic kin and non-kin. In evolutionary biology and psychology, such an ability is presumed to have evolved for inbreeding avoidance, though animals do not typically avoid inbreeding. An additional adaptive function sometimes posited for kin recognition is a role in kin selection. There is debate over this, since in strict theoretical terms kin recognition is not necessary for kin selection or the cooperation associated with it. Rather, social behaviour can emerge by kin selection in the demographic conditions of 'viscous populations' with organisms interacting in their natal context, without active kin discrimination, since social participants by default typically share recent common origin. Since kin selection theory emerged, much research has been produced investigating the possible role of kin recognition mechanisms in mediating altruism. Taken as a whole, this research suggests that active powers of recognition play a negligible role in mediating social cooperation relative to less elaborate cue-based and context-based mechanisms, such as familiarity, imprinting and phenotype matching. Because cue-based 'recognition' predominates in social mammals, outcomes are non-deterministic in relation to actual genetic kinship, instead outcomes simply reliably correlate with genetic kinship in an organism's typical conditions. A well-known human example of an inbreeding avoidance mechanism is the Westermarck effect, in which unrelated individuals who happen to spend their childhood in the same household find each other sexually unattractive. Similarly, due to the cue-based mechanisms that mediate social bonding and cooperation, unrelated individuals who grow up together in this way are also likely to demonstrate strong social and emotional ties, and enduring altruism. Theoretical background The English evolutionary biologist W. D. Hamilton's theory of inclusive fitness, a Document 1::: An incest taboo is any cultural rule or norm that prohibits sexual relations between certain members of the same family, mainly between individuals related by blood. All human cultures have norms that exclude certain close relatives from those considered suitable or permissible sexual or marriage partners, making such relationships taboo. However, different norms exist among cultures as to which blood relations are permissible as sexual partners and which are not. Sexual relations between related persons which are subject to the taboo are called incestuous relationships. Some cultures proscribe sexual relations between clan-members, even when no traceable biological relationship exists, while members of other clans are permissible irrespective of the existence of a biological relationship. In many cultures, certain types of cousin relations are preferred as sexual and marital partners, whereas in others these are taboo. Some cultures permit sexual and marital relations between aunts/uncles and nephews/nieces. In some instances, brother–sister marriages have been practised by the elites with some regularity. Parent–child and sibling–sibling unions are almost universally taboo. Origin Debate about the origin of the incest taboo has often been framed as a question of whether it is based in nature or nurture. 
One explanation sees the incest taboo as a cultural implementation of a biologically evolved preference for sexual partners with whom one is unlikely to share genes, since inbreeding may have detrimental outcomes. The most widely held hypothesis proposes that the so-called Westermarck effect discourages adults from engaging in sexual relations with individuals with whom they grew up. The existence of the Westermarck effect has achieved some empirical support. Another school argues that the incest prohibition is a cultural construct which arises as a side effect of a general human preference for group exogamy, which arises because intermarriage between groups c Document 2::: The ACE model is a statistical model commonly used to analyze the results of twin and adoption studies. This classic behaviour genetic model aims to partition the phenotypic variance into three categories: additive genetic variance (A), common (or shared) environmental factors (C), and specific (or nonshared) environmental factors plus measurement error (E). It is widely used in genetic epidemiology and behavioural genetics. The basic ACE model relies on several assumptions, including the absence of assortative mating, that there is no genetic dominance or epistasis, that all genetic effects are additive, and the absence of gene-environment interactions. In order to address these limitations, several variants of the ACE model have been developed, including an ACE-β model, which emphasizes the identification of causal effects, and the ACDE model, which accounts for the effects of genetic dominance. See also ADE model Document 3::: Family resemblance refers to physical similarities shared between close relatives, especially between parents and children and between siblings. In psychology, the similarities of personality are also observed. Genetics Heritability, defined as a measure of family resemblance, causes traits to be genetically passed from parents to offspring (heredity), allowing evolutionarily advantageous traits to persist through generations. Despite sharing parents, siblings do not inherit identical genes, making studies on identical twins (who have identical DNA) especially effective at analyzing the role genetics play in phenotypic similarity. Studies have found that generational resemblance of many phenotypic traits results from the inheritance of multiples genes that collectively influence a trait (additive genetic variance). There is evidence of heritability in personality traits. For example, one study found that approximately half of personality differences in high-school aged fraternal and identical twins were due to genetic variation - and another study suggests that no one personality trait is more heritable than another. Environment Family resemblance is also shaped by environmental factors, temperature, light, nutrition, exposure to drugs, the time that different family members spend in shared and non-shared environments, are examples of factors found to influence phenotype. Phenotypes found to be largely environmentally determined in humans include personality, height, and weight. Twin studies have shown that more than half of the variation in a few major aspects of personality are environmentally determined, and that environmental factors even affect traits like immune response and how children handle stress. 
Additionally, anomalous findings, such as second-degree relatives of alcoholics showing surprising similarities to them, have led to some researchers' attempts at generating better models that account for the environmental impacts on influences like cultural in Document 4::: Exogamy is the social norm of mating or marrying outside one's social group. The group defines the scope and extent of exogamy, and the rules and enforcement mechanisms that ensure its continuity. One form of exogamy is dual exogamy, in which two groups continually intermarry with each other. In social science, exogamy is viewed as a combination of two related aspects: biological and cultural. Biological exogamy is the marriage of people who are not blood relatives. This is regulated by incest taboos and laws against incest. Cultural exogamy is marrying outside a specific cultural group; the opposite being endogamy, marriage within a social group. Biology of exogamy Exogamy often results in two individuals that are not closely genetically related marrying each other; that is, outbreeding as opposed to inbreeding. In moderation, this benefits the offspring as it reduces the risk of the offspring inheriting two copies of a defective gene. Increasing the genetic diversity of the offspring improves the chances of offspring reproducing, up until the fourth-cousin level of relatedness; however, reproduction between individuals on the fourth-cousin level of relatedness decreases evolutionary fitness. In native populations, exogamy might be detrimental if "the benefits of local adaptation are greater than the cost of inbreeding." However, non-native, "invasive" populations that have "not yet established a pattern of local adaptation" may derive some adaptive benefit from admixture. Nancy Wilmsen Thornhill states that the drive in humans to not reproduce or be attracted to one's immediate family is evolutionarily adaptive, as it reduces the risk of children having genetic defects caused by inbreeding, as a result of inheriting two copies of a deleterious recessive gene. In one Old Order Amish society, inbreeding increases the risk of "neonatal and postneonatal mortality." In French populations, people who reproduce with their first cousin develop cystinosis at a greate The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What types of insect behaviors are controlled by genes? A. byproduct behaviors like flying and mating B. physical behaviors like flying and mating C. instinctive behaviors like flying and mating D. psychological behaviors like flying and mating Answer:
sciq-3398
multiple_choice
What substances that primarily comprise plasma membranes form a bilayer?
[ "phospholipids", "enzymes", "amino acids", "steroids" ]
A
Relavent Documents: Document 0::: A bilayer is a double layer of closely packed atoms or molecules. The properties of bilayers are often studied in condensed matter physics, particularly in the context of semiconductor devices, where two distinct materials are united to form junctions, such as p–n junctions, Schottky junctions, etc. Layered materials, such as graphene, boron nitride, or transition metal dichalcogenides, have unique electronic properties as a bilayer system and are an active area of current research. In biology a common example is the lipid bilayer, which describes the structure of multiple organic structures, such as the membrane of a cell. See also Monolayer Non-carbon nanotube Semiconductor Thin film Document 1::: The lipid bilayer (or phospholipid bilayer) is a thin polar membrane made of two layers of lipid molecules. These membranes are flat sheets that form a continuous barrier around all cells. The cell membranes of almost all organisms and many viruses are made of a lipid bilayer, as are the nuclear membrane surrounding the cell nucleus, and membranes of the membrane-bound organelles in the cell. The lipid bilayer is the barrier that keeps ions, proteins and other molecules where they are needed and prevents them from diffusing into areas where they should not be. Lipid bilayers are ideally suited to this role, even though they are only a few nanometers in width, because they are impermeable to most water-soluble (hydrophilic) molecules. Bilayers are particularly impermeable to ions, which allows cells to regulate salt concentrations and pH by transporting ions across their membranes using proteins called ion pumps. Biological bilayers are usually composed of amphiphilic phospholipids that have a hydrophilic phosphate head and a hydrophobic tail consisting of two fatty acid chains. Phospholipids with certain head groups can alter the surface chemistry of a bilayer and can, for example, serve as signals as well as "anchors" for other molecules in the membranes of cells. Just like the heads, the tails of lipids can also affect membrane properties, for instance by determining the phase of the bilayer. The bilayer can adopt a solid gel phase state at lower temperatures but undergo phase transition to a fluid state at higher temperatures, and the chemical properties of the lipids' tails influence at which temperature this happens. The packing of lipids within the bilayer also affects its mechanical properties, including its resistance to stretching and bending. Many of these properties have been studied with the use of artificial "model" bilayers produced in a lab. Vesicles made by model bilayers have also been used clinically to deliver drugs. The structure of biological Document 2::: Membrane proteins are common proteins that are part of, or interact with, biological membranes. Membrane proteins fall into several broad categories depending on their location. Integral membrane proteins are a permanent part of a cell membrane and can either penetrate the membrane (transmembrane) or associate with one or the other side of a membrane (integral monotopic). Peripheral membrane proteins are transiently associated with the cell membrane. Membrane proteins are common, and medically important—about a third of all human proteins are membrane proteins, and these are targets for more than half of all drugs. 
Nonetheless, compared to other classes of proteins, determining membrane protein structures remains a challenge in large part due to the difficulty in establishing experimental conditions that can preserve the correct conformation of the protein in isolation from its native environment. Function Membrane proteins perform a variety of functions vital to the survival of organisms: Membrane receptor proteins relay signals between the cell's internal and external environments. Transport proteins move molecules and ions across the membrane. They can be categorized according to the Transporter Classification database. Membrane enzymes may have many activities, such as oxidoreductase, transferase or hydrolase. Cell adhesion molecules allow cells to identify each other and interact. For example, proteins involved in immune response The localization of proteins in membranes can be predicted reliably using hydrophobicity analyses of protein sequences, i.e. the localization of hydrophobic amino acid sequences. Integral membrane proteins Integral membrane proteins are permanently attached to the membrane. Such proteins can be separated from the biological membranes only using detergents, nonpolar solvents, or sometimes denaturing agents. They can be classified according to their relationship with the bilayer: Integral polytopic proteins are transmembran Document 3::: Glycerophospholipids or phosphoglycerides are glycerol-based phospholipids. They are the main component of biological membranes. Two major classes are known: those for bacteria and eukaryotes and a separate family for archaea. Structures The term glycerophospholipid signifies any derivative of glycerophosphoric acid that contains at least one O-acyl, or O-alkyl, or O-alk-1'-enyl residue attached to the glycerol moiety. The phosphate group forms an ester linkage to the glycerol. The long-chained hydrocarbons are typically attached through ester linkages in bacteria/eukaryotes and by ether linkages in archaea. In bacteria and procaryotes, the lipids consist of diesters commonly of C16 or C18 fatty acids. These acids are straight-chained and, especially for the C18 members, can be unsaturated. For archaea, the hydrocarbon chains have chain lengths of C10, C15, C20 etc. since they are derived from isoprene units. These chains are branched, with one methyl substituent per C5 subunit. These chains are linked to the glycerol phosphate by ether linkages. The two hydrocarbon chains attached to the glycerol are hydrophobic while the polar head, which mainly consists of the phosphate group attached to the third carbon of the glycerol backbone, is hydrophilic. This dual characteristic leads to the amphipathic nature of glycerophospholipids. They are usually organized into a bilayer in membranes with the polar hydrophilic heads sticking outwards to the aqueous environment and the non-polar hydrophobic tails pointing inwards. Glycerophospholipids consist of various diverse species which usually differ slightly in structure. The most basic structure is a phosphatidate. This species is an important intermediate in the synthesis of many phosphoglycerides. The presence of an additional group attached to the phosphate allows for many different phosphoglycerides. By convention, structures of these compounds show the 3 glycerol carbon atoms vertically with the phosphate att Document 4::: A model lipid bilayer is any bilayer assembled in vitro, as opposed to the bilayer of natural cell membranes or covering various sub-cellular structures like the nucleus. 
They are used to study the fundamental properties of biological membranes in a simplified and well-controlled environment, and increasingly in bottom-up synthetic biology for the construction of artificial cells. A model bilayer can be made with either synthetic or natural lipids. The simplest model systems contain only a single pure synthetic lipid. More physiologically relevant model bilayers can be made with mixtures of several synthetic or natural lipids. There are many different types of model bilayers, each having experimental advantages and disadvantages. The first system developed was the black lipid membrane or “painted” bilayer, which allows simple electrical characterization of bilayers but is short-lived and can be difficult to work with. Supported bilayers are anchored to a solid substrate, increasing stability and allowing the use of characterization tools not possible in bulk solution. These advantages come at the cost of unwanted substrate interactions which can denature membrane proteins. Black lipid membranes (BLM) The earliest model bilayer system developed was the “painted” bilayer, also known as a “black lipid membrane.” The term “painted” refers to the process by which these bilayers are made. First, a small aperture is created in a thin layer of a hydrophobic material such as Teflon. Typically the diameter of this hole is a few tens of micrometers up to hundreds of micrometers. To form a BLM, the area around the aperture is first "pre-painted" with a solution of lipids dissolved in a hydrophobic solvent by applying this solution across the aperture with a brush, syringe, or glass applicator. The solvent used must have a very high partition coefficient and must be relatively viscous to prevent immediate rupture. The most common solvent used is a mixture of decane and squ The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What substances that primarily comprise plasma membranes form a bilayer? A. phospholipids B. enzymes C. amino acids D. steroids Answer:
sciq-2774
multiple_choice
Prostaglandins also help regulate the aggregation of platelets, one step in the formation of what?
[ "cysts", "acne", "bloats", "blood clots" ]
D
Relavent Documents: Document 0::: Platelets or thrombocytes (from Greek θρόμβος, "clot" and κύτος, "cell") are a component of blood whose function (along with the coagulation factors) is to react to bleeding from blood vessel injury by clumping, thereby initiating a blood clot. Platelets have no cell nucleus; they are fragments of cytoplasm derived from the megakaryocytes of the bone marrow or lung, which then enter the circulation. Platelets are found only in mammals, whereas in other vertebrates (e.g. birds, amphibians), thrombocytes circulate as intact mononuclear cells. One major function of platelets is to contribute to hemostasis: the process of stopping bleeding at the site of interrupted endothelium. They gather at the site and, unless the interruption is physically too large, they plug the hole. First, platelets attach to substances outside the interrupted endothelium: adhesion. Second, they change shape, turn on receptors and secrete chemical messengers: activation. Third, they connect to each other through receptor bridges: aggregation. Formation of this platelet plug (primary hemostasis) is associated with activation of the coagulation cascade, with resultant fibrin deposition and linking (secondary hemostasis). These processes may overlap: the spectrum is from a predominantly platelet plug, or "white clot" to a predominantly fibrin, or "red clot" or the more typical mixture. Some would add the subsequent retraction and platelet inhibition as fourth and fifth steps to the completion of the process and still others would add a sixth step, wound repair. Platelets also participate in both innate and adaptive intravascular immune responses. Structure Structure Structurally the platelet can be divided into four zones, from peripheral to innermost: Peripheral zone – is rich in glycoproteins required for platelet adhesion, activation and aggregation. For example, GPIb/IX/V; GPVI; GPIIb/IIIa. Sol-gel zone – is rich in microtubules and microfilaments, allowing the platelets to maintain their Document 1::: Thrombopoiesis is the formation of thrombocytes (blood platelets) in the bone marrow. Thrombopoietin is the main regulator of thrombopoiesis. Thrombopoietin affects most aspects of the production of platelets. This includes self-renewal and expansion of hematopoietic stem cells, stimulating the increase of megakaryocyte progenitor cells, and supporting these cells so they mature to become platelet-producing cells. The process of Thrombopoiesis is caused by the breakdown of proplatelets (mature megakaryocyte membrane pseudopodial projections). During the process almost all of the membranes, organelles, granules, and soluble macromolecules in the cytoplasm are being consumed. Apoptosis also plays a role in the final stages of thrombopoiesis by letting proplatelet processes to occur from the cytoskeleton of actin. Platelets Platelets are formed by megakaryocytes and are present in the bloodstream for 5–7 days. Platelets are regulators of hemostasis and thrombosis. Platelets become active in the blood following vascular injury. Vascular injury causes platelets to stick to the cellular matrix that is exposed under the endothelium, form a platelet plug, and then form a thrombus. Platelets are essential in the formation of an occlusive thrombus and are the main target of preventing the formation of an arterial thrombus. Platelets are also important in innate immunity and regulating tumor growth and vessel leakage. 
Megakaryocytes The megakaryoblast is a platelet precursor that undergoes endomitosis to form megakaryocytes that have 8 to 64 nuclei. Megakaryocytes shed platelets into the bloodstream. β1-tubulin microtubules, which are found in megakaryocytes, facilitate this process of shedding platelets into the bloodstream. Megakaryocytes are precursor cells that are highly specialized. Megakaryocytes give rise to 1,000 to 3,000 platelets. Megakaryocytes function in the process of Thrombopoiesis by producing platelets and releasing platelets into the bloodstream. Megakar Document 2::: Ecarin clotting time (ECT) is a laboratory test used to monitor anticoagulation during treatment with hirudin, an anticoagulant medication which was originally isolated from leech saliva. Ecarin, the primary reagent in this assay, is derived from the venom of the saw-scaled viper, Echis carinatus. In the clinical assay, a known quantity of ecarin is added to the plasma of a patient treated with hirudin. Ecarin activates prothrombin through a specific proteolytic cleavage, which produces meizothrombin, a prothrombin-thrombin intermediate which retains the full molecular weight of prothrombin, but possesses a low level of procoagulant enzymatic activity. Crucially, this activity is inhibited by hirudin and other direct thrombin inhibitors, but not by heparin. The ECT is also unaffected by prior treatment with warfarin or the presence of phospholipid-dependent anticoagulants, such as lupus anticoagulant. Thus, the ECT is prolonged in a specific and linear fashion with increasing concentrations of hirudin. An enhancement of the ECT is the ecarin chromogenic assay (ECA) in which diluted sample is mixed with an excess of purified prothrombin and the generated meizothrombin is measured with a specific chromogenic substrate. This assay shows no interference from prothrombin or fibrinogen in the sample and is suitable for the measurement of all direct thrombin inhibitors. Document 3::: A promegakaryocyte is a precursor cell for a megakaryocyte. It arises from a megakaryoblast, into a promegakaryocyte and then into a megakaryocyte, which will eventually break off and become a platelet. The developmental stages of the megakaryocyte are: CFU-Me (pluripotential hemopoietic stem cell or hemocytoblast) → megakaryoblast → promegakaryocyte → megakaryocyte. When the megakaryoblast matures into the promegakaryocyte, it undergoes endoreduplication and forms a promegakaryocyte which has multiple nuclei, azurophilic granules, and a basophilic cytoplasm. The promegakaryocyte has rotary motion, but no forward migration. Promegakaryocytes and other precursor cells to megakaryocytes arise from pluripotential hematopoietic progenitors. The megakaryoblast is then produced, followed by the promegakaryocyte, the granular megakaryocyte, and then the mature megakaryocyte. When it is in its promegakaryocyte stage, it is considered an undifferentiated cell. Megakaryocyte pieces will eventually break off and begin circulating the body as platelets. Platelets are very important because of their role in blood clotting, immune response, and the formation of new blood vessels. Document 4::: Thromboplastin (TPL) is derived from cell membranes and is a mixture of both phospholipids and tissue factor, neither of which are enzymes. Thromboplastin acts on and accelerates the activity of Factor Xa, also known as thrombokinase, aiding blood coagulation through catalyzing the conversion of prothrombin to thrombin. 
Thromboplastin is found in brain, lung, and other tissues and especially in blood platelets. Thromboplastin is sometimes used as a synonym for the protein tissue factor (with its official name "Coagulation factor III [thromboplastin, tissue factor]"). Historically, thromboplastin was a lab reagent, usually derived from placental sources, used to assay prothrombin times (PT). When manipulated in the laboratory, a derivative could be created called partial thromboplastin. Partial thromboplastin was used to measure the intrinsic pathway. This test is called the aPTT, or activated partial thromboplastin time. It was not until much later that the subcomponents of thromboplastin and partial thromboplastin were identified. Thromboplastin is the combination of both phospholipids and tissue factor, both of which are needed in the activation of the extrinsic pathway. However, partial thromboplastin is just phospholipids, and not tissue factor. Therefore, although the coagulation cascade can be triggered in vitro through the intrinsic pathway only, in vivo coagulation is triggered by the extrinsic pathway. However, the model better describing how coagulation works is the so-called cell-based model, a more integrated picture of the whole process, in which phospholipid surfaces, such as those provided by platelets, are a key component. Currently, recombinant tissue factor is available and used in some PT assays. Placental derivatives are still available and are used in some laboratories. Phospholipid is available as an independent reagent or in combination with tissue factor as thromboplastin. Complete thromboplastin consists of tissue factor, phospholipid The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Prostaglandins also help regulate the aggregation of platelets, one step in the formation of what? A. cysts B. acne C. bloats D. blood clots Answer:
sciq-5788
multiple_choice
What unit of the nervous system consists of a cell body, dendrites, and axon?
[ "neuron", "mitochondria", "ganglion", "Transmitter" ]
A
Relavent Documents: Document 0::: The following diagram is provided as an overview of and topical guide to the human nervous system: Human nervous system – the part of the human body that coordinates a person's voluntary and involuntary actions and transmits signals between different parts of the body. The human nervous system consists of two main parts: the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS contains the brain and spinal cord. The PNS consists mainly of nerves, which are long fibers that connect the CNS to every other part of the body. The PNS includes motor neurons, mediating voluntary movement; the autonomic nervous system, comprising the sympathetic nervous system and the parasympathetic nervous system and regulating involuntary functions; and the enteric nervous system, a semi-independent part of the nervous system whose function is to control the gastrointestinal system. Evolution of the human nervous system Evolution of nervous systems Evolution of human intelligence Evolution of the human brain Paleoneurology Some branches of science that study the human nervous system Neuroscience Neurology Paleoneurology Central nervous system The central nervous system (CNS) is the largest part of the nervous system and includes the brain and spinal cord. Spinal cord Brain Brain – center of the nervous system. Outline of the human brain List of regions of the human brain Principal regions of the vertebrate brain: Peripheral nervous system Peripheral nervous system (PNS) – nervous system structures that do not lie within the CNS. Sensory system A sensory system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory receptors, neural pathways, and parts of the brain involved in sensory perception. List of sensory systems Sensory neuron Perception Visual system Auditory system Somatosensory system Vestibular system Olfactory system Taste Pain Components of the nervous system Neuron I Document 1::: The following are two lists of animals ordered by the size of their nervous system. The first list shows number of neurons in their entire nervous system, indicating their overall neural complexity. The second list shows the number of neurons in the structure that has been found to be representative of animal intelligence. The human brain contains 86 billion neurons, with 16 billion neurons in the cerebral cortex. Scientists are engaged in counting, quantification, in order to find answers to the question in the strategy of neuroscience and intelligence of "self-knowledge": how the evolution of a set of components and parameters (~1011 neurons, ~1014 synapses) of a complex system could lead to the phenomenon of the appearance of intelligence in the biological species "sapiens". Overview Neurons are the cells that transmit information in an animal's nervous system so that it can sense stimuli from its environment and behave accordingly. Not all animals have neurons; Trichoplax and sponges lack nerve cells altogether. Neurons may be packed to form structures such as the brain of vertebrates or the neural ganglions of insects. The number of neurons and their relative abundance in different parts of the brain is a determinant of neural function and, consequently, of behavior. Whole nervous system All numbers for neurons (except Caenorhabditis and Ciona), and all numbers for synapses (except Ciona) are estimations. 
List of animal species by forebrain (cerebrum or pallium) neuron number The question of what physical characteristic of an animal makes an animal intelligent has varied over the centuries. One early speculation was brain size (or weight, which provides the same ordering.) A second proposal was brain-to-body-mass ratio, and a third was encephalization quotient, sometimes referred to as EQ. The current best predictor is number of neurons in the forebrain, based on Herculano-Houzel's improved neuron counts. It accounts most accurately for variations Document 2::: Nervous tissue, also called neural tissue, is the main tissue component of the nervous system. The nervous system regulates and controls body functions and activity. It consists of two parts: the central nervous system (CNS) comprising the brain and spinal cord, and the peripheral nervous system (PNS) comprising the branching peripheral nerves. It is composed of neurons, also known as nerve cells, which receive and transmit impulses, and neuroglia, also known as glial cells or glia, which assist the propagation of the nerve impulse as well as provide nutrients to the neurons. Nervous tissue is made up of different types of neurons, all of which have an axon. An axon is the long stem-like part of the cell that sends action potentials to the next cell. Bundles of axons make up the nerves in the PNS and tracts in the CNS. Functions of the nervous system are sensory input, integration, control of muscles and glands, homeostasis, and mental activity. Structure Nervous tissue is composed of neurons, also called nerve cells, and neuroglial cells. Four types of neuroglia found in the CNS are astrocytes, microglial cells, ependymal cells, and oligodendrocytes. Two types of neuroglia found in the PNS are satellite glial cells and Schwann cells. In the central nervous system (CNS), the tissue types found are grey matter and white matter. The tissue is categorized by its neuronal and neuroglial components. Components Neurons are cells with specialized features that allow them to receive and facilitate nerve impulses, or action potentials, across their membrane to the next neuron. They possess a large cell body (soma), with cell projections called dendrites and an axon. Dendrites are thin, branching projections that receive electrochemical signaling (neurotransmitters) to create a change in voltage in the cell. Axons are long projections that carry the action potential away from the cell body toward the next neuron. The bulb-like end of the axon, called the axon terminal, i Document 3::: The Cognition and Brain Sciences Unit is a branch of the UK Medical Research Council, based in Cambridge, England. The CBSU is a centre for cognitive neuroscience, with a mission to improve human health by understanding and enhancing cognition and behaviour in health, disease and disorder. It is one of the largest and most long-lasting contributors to the development of psychological theory and practice. The CBSU has its own magnetic resonance imaging (MRI, 3T) scanner on-site, as well as a 306-channel magnetoencephalography (MEG) system and a 128-channel electroencephalography (EEG) laboratory. The CBSU has close links to clinical neuroscience research in the University of Cambridge Medical School. Over 140 scientists, students, and support staff work in research areas such as Memory, Attention, Emotion, Speech and Language, Development and Aging, Computational Modelling and Neuroscience Methods. 
With dedicated facilities available on site, the Unit has particular strengths in the application of neuroimaging techniques in the context of well-developed neuro-cognitive theory. History The unit was established in 1944 as the MRC Applied Psychology Unit. In June 2001, the History of Modern Biomedicine Research Group held a witness seminar to gather information on the unit's history. On 1 July 2017, the CBU was merged with the University of Cambridge. Coming under the Clinical School, the unit is still funded by the British government through Research Councils UK but is managed and maintained by Cambridge University. List of directors Kenneth Craik, 1944–1945 Frederic Bartlett, 1945–1951 Norman Mackworth, 1951–1958 Donald Broadbent, 1958–1974 Alan Baddeley, 1974–1997 William Marslen-Wilson, 1997–2010 Susan Gathercole, 2011–2018 Matthew Lambon Ralph, 2018– Document 4::: There are yet unsolved problems in neuroscience, although some of these problems have evidence supporting a hypothesized solution, and the field is rapidly evolving. One major problem is even enumerating what would belong on a list such as this. However, these problems include: Consciousness Consciousness: How can consciousness be defined? What is the neural basis of subjective experience, cognition, wakefulness, alertness, arousal, and attention? Quantum mind: Does quantum mechanical phenomena, such as entanglement and superposition, play an important part in the brain's function and can it explain critical aspects of consciousness? Is there a "hard problem of consciousness"? If so, how is it solved? What, if any, is the function of consciousness? What is the nature and mechanism behind near-death experiences? How can death be defined? Can consciousness exist after death? If consciousness is generated by brain activity, then how do some patients with physically deteriorated brains suddenly gain a brief moment of restored consciousness prior to death, a phenomenon known as terminal lucidity? Problem of representation: How exactly does the mind function (or how does the brain interpret and represent information about the world)? Bayesian mind: Does the mind make sense of the world by constantly trying to make predictions according to the rules of Bayesian probability? Computational theory of mind: Is the mind a symbol manipulation system, operating on a model of computation, similar to a computer? Connectionism: Can the mind be explained by mathematical models known as artificial neural networks? Embodied cognition: Is the cognition of an organism affected by the organism's entire body (rather than just simply its brain), including its interactions with the environment? Extended mind thesis: Does the mind not only exist in the brain, but also functions in the outside world by using physical objects as mental processes? Or just as prosthetic limbs can becom The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What unit of the nervous system consists of a cell body, dendrites, and axon? A. neuron B. mitochondria C. ganglion D. Transmitter Answer:
sciq-4110
multiple_choice
What develops in depressions where water flow is low or nonexistent?
[ "swamps", "ponds", "sinkholes", "bogs" ]
D
Relavent Documents: Document 0::: In fluid dynamics, pipe network analysis is the analysis of the fluid flow through a hydraulics network, containing several or many interconnected branches. The aim is to determine the flow rates and pressure drops in the individual sections of the network. This is a common problem in hydraulic design. Description To direct water to many users, municipal water supplies often route it through a water supply network. A major part of this network will consist of interconnected pipes. This network creates a special class of problems in hydraulic design, with solution methods typically referred to as pipe network analysis. Water utilities generally make use of specialized software to automatically solve these problems. However, many such problems can also be addressed with simpler methods, like a spreadsheet equipped with a solver, or a modern graphing calculator. Deterministic network analysis Once the friction factors of the pipes are obtained (or calculated from pipe friction laws such as the Darcy-Weisbach equation), we can consider how to calculate the flow rates and head losses on the network. Generally the head losses (potential differences) at each node are neglected, and a solution is sought for the steady-state flows on the network, taking into account the pipe specifications (lengths and diameters), pipe friction properties and known flow rates or head losses. The steady-state flows on the network must satisfy two conditions: At any junction, the total flow into a junction equals the total flow out of that junction (law of conservation of mass, or continuity law, or Kirchhoff's first law) Between any two junctions, the head loss is independent of the path taken (law of conservation of energy, or Kirchhoff's second law). This is equivalent mathematically to the statement that on any closed loop in the network, the head loss around the loop must vanish. If there are sufficient known flow rates, so that the system of equations given by (1) and (2) abov Document 1::: In hydrology, pipeflow is a type of subterranean water flow where water travels along cracks in the soil or old root systems found in above ground vegetation. In such soils which have a high vegetation content water is able to travel along the 'pipes', allowing water to travel faster than throughflow. Here, water can move at speeds between 50 and 500 m/h. Hydrology Aquatic ecology Document 2::: Large woody debris (LWD) are the logs, sticks, branches, and other wood that falls into streams and rivers. This debris can influence the flow and the shape of the stream channel. Large woody debris, grains, and the shape of the bed of the stream are the three main providers of flow resistance, and are thus, a major influence on the shape of the stream channel. Some stream channels have less LWD than they would naturally because of removal by watershed managers for flood control and aesthetic reasons. The study of woody debris is important for its forestry management implications. Plantation thinning can reduce the potential for recruitment of LWD into proximal streams. The presence of large woody debris is important in the formation of pools which serve as salmon habitat in the Pacific Northwest. Entrainment of the large woody debris in a stream can also cause erosion and scouring around and under the LWD. The amount of scouring and erosion is determined by the ratio of the diameter of the piece, to the depth of the stream, and the embedding and orientation of the piece. 
Influence on stream flow around bends Large woody debris slow the flow through a bend in the stream, while accelerating flow in the constricted area downstream of the obstruction. See also Beaver dam Coarse woody debris Driftwood Log jam Stream restoration Document 3::: The University of Michigan Biological Station (UMBS) is a research and teaching facility operated by the University of Michigan. It is located on the south shore of Douglas Lake in Cheboygan County, Michigan. The station consists of 10,000 acres (40 km2) of land near Pellston, Michigan in the northern Lower Peninsula of Michigan and 3,200 acres (13 km2) on Sugar Island in the St. Mary's River near Sault Ste. Marie, in the Upper Peninsula. It is one of only 28 Biosphere Reserves in the United States. Overview Founded in 1909, it has grown to include approximately 150 buildings, including classrooms, student cabins, dormitories, a dining hall, and research facilities. Undergraduate and graduate courses are available in the spring and summer terms. It has a full-time staff of 15. In the 2000s, UMBS is increasingly focusing on the measurement of climate change. Its field researchers are gauging the impact of global warming and increased levels of atmospheric carbon dioxide on the ecosystem of the upper Great Lakes region, and are using field data to improve the computer models used to forecast further change. Several archaeological digs have been conducted at the station as well. UMBS field researchers sometimes call the station "bug camp" amongst themselves. This is believed to be due to the number of mosquitoes and other insects present. It is also known as "The Bio-Station". The UMBS is also home to Michigan's most endangered species and one of the most endangered species in the world: the Hungerford's Crawling Water Beetle. The species lives in only five locations in the world, two of which are in Emmet County. One of these, a two and a half mile stretch downstream from the Douglas Road crossing of the East Branch of the Maple River supports the only stable population of the Hungerford's Crawling Water Beetle, with roughly 1000 specimens. This area, though technically not part of the UMBS is largely within and along the boundary of the University of Michigan Document 4::: A Directory of Important Wetlands in Australia (DIWA) is a list of wetlands of national importance to Australia published by the Department of Climate Change, Energy, the Environment and Water. Intended to augment the list of wetlands of international importance under the Ramsar Convention, it was formerly published in report form, but is now essentially an online publication. Wetlands that appear in the Directory are commonly referred to as "DIWA wetlands" or "Directory wetlands". Criteria for determining wetland importance Using criteria agreed in 1994, a wetland can be considered “nationally important” if it satisfies at least one of the following criteria: It is a good example of a wetland type occurring within a biogeographic region in Australia. It is a wetland which plays an important ecological or hydrological role in the natural functioning of a major wetland system/complex. It is a wetland which is important as the habitat for animal taxa at a vulnerable stage in their life cycles, or provides a refuge when adverse conditions such as drought prevail. The wetland supports 1% or more of the national populations of any native plant or animal taxa. 
The wetland supports native plant or animal taxa or communities which are considered endangered or vulnerable at the national level. The wetland is of outstanding historical or cultural significance. Types of wetlands The directory uses a classification system consisting of the following three categories (i.e. A, B and C) which are further sub-divided into a total of 40 different wetland types: A. Marine and Coastal Zone wetlands, which consists of 12 wetland types B. Inland wetlands, which consists of 19 wetland types C. Human-made wetlands, which consists of 9 wetland types. See also List of Ramsar sites in Australia Wetland classification The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What develops in depressions where water flow is low or nonexistent? A. swamps B. ponds C. sinkholes D. bogs Answer:
sciq-3098
multiple_choice
What is critical for the formation of hemoglobin?
[ "salts", "proteins", "platelets", "iron ions" ]
D
Relavent Documents: Document 0::: – platelet factor 3 – platelet factor 4 – prothrombin – thrombin – thromboplastin – von willebrand factor – fibrin – fibrin fibrinogen degradation products – fibrin foam – fibrin tissue adhesive – fibrinopeptide a – fibrinopeptide b – glycophorin – hemocyanin – hemoglobins – carboxyhemoglobin – erythrocruorins – fetal hemoglobi Document 1::: Fetal proteins are high levels of proteins present during the fetal stage of development. Often related proteins assume similar roles after birth or in the embryo, in which case the fetal varieties are called fetal isoforms. Sometimes, the genes coding fetal isoforms occur adjacent to their adult homologues in the genome, and in those cases a locus control region often coordinates the transition from fetal to adult forms. In other cases fetal isoforms can be produced by alternate splicing using fetal exons to produce proteins that differ in only a portion of their amino acid sequence. In some situations the continuing expression of fetal forms can reveal the presence of a disease condition or serve as a treatment for diseases such as sickle cell anemia. Some well known examples include: Alpha-fetoprotein (AFP), the predominant serum protein of the fetus which gives way to albumin in the adult. AFP is categorized as an oncofetal protein because it is also found in tumors. Fetal hemoglobin, the fetal version of hemoglobin. Fetal Troponin T and Troponin I isoforms. Fetal Hemoglobin is a member of erythrocytes called F-cells. It is a tetramer protein with 2 alpha and 2 gamma subunits. This is different from adult hemoglobin because it has 2 alpha and 2 beta subunits.  Fetal hemoglobin is coded by a gene on chromosome 11. The gamma subunit on fetal hemoglobin contains a neutral and nonpolar amino acid at position 136, unlike the beta subunit of adult hemoglobin. The protein has a different structure than the adult protein because of this and helps in fetal development. Fetal hemoglobin has a main function to transfer oxygen from the pregnant person to the fetus during gestation. Fetal hemoglobin is vital in this system because it has a high affinity for oxygen. Fetal hemoglobin can be used to screen for pregnancy complications in the fetus and pregnant person. Fetal hemoglobin can also be used to treat sickle cell anemia. This hemoglobin is less likely to be a Document 2::: Myoglobin (symbol Mb or MB) is an iron- and oxygen-binding protein found in the cardiac and skeletal muscle tissue of vertebrates in general and in almost all mammals. Myoglobin is distantly related to hemoglobin. Compared to hemoglobin, myoglobin has a higher affinity for oxygen and does not have cooperative binding with oxygen like hemoglobin does. Myoglobin consists of non-polar amino acids at the core of the globulin, where the heme group is non-covalently bounded with the surrounding polypeptide of myoglobin. In humans, myoglobin is only found in the bloodstream after muscle injury. High concentrations of myoglobin in muscle cells allow organisms to hold their breath for a longer period of time. Diving mammals such as whales and seals have muscles with particularly high abundance of myoglobin. Myoglobin is found in Type I muscle, Type II A, and Type II B; although many texts consider myoglobin not to be found in smooth muscle, this has proved erroneous: there is also myoglobin in smooth muscle cells. Myoglobin was the first protein to have its three-dimensional structure revealed by X-ray crystallography. 
This achievement was reported in 1958 by John Kendrew and associates. For this discovery, Kendrew shared the 1962 Nobel Prize in chemistry with Max Perutz. Despite being one of the most studied proteins in biology, its physiological function is not yet conclusively established: mice genetically engineered to lack myoglobin can be viable and fertile, but show many cellular and physiological adaptations to overcome the loss. Through observing these changes in myoglobin-depleted mice, it is hypothesised that myoglobin function relates to increased oxygen transport to muscle, and to oxygen storage; as well, it serves as a scavenger of reactive oxygen species. In humans, myoglobin is encoded by the MB gene. Myoglobin can take the forms oxymyoglobin (MbO2), carboxymyoglobin (MbCO), and metmyoglobin (met-Mb), analogously to hemoglobin taking the forms oxyhemogl Document 3::: Hematogen (; , aimatogóno) is a nutrition bar which is notable in that one of its main ingredients is black food albumin, a technical term for cow's blood. Other ingredients may vary, but they usually contain sugar, condensed milk and vanillin. It is often considered to be a medicinal product, and is used to treat or prevent low blood levels of iron and vitamin B12 (e.g., for anemia or during pregnancy). See also Sanguinaccio dolce, a sweet pudding made with pig’s blood Protein bar Blood as food Document 4::: The human β-globin locus is composed of five genes located on a short region of chromosome 11, responsible for the creation of the beta parts (roughly half) of the oxygen transport protein Haemoglobin. This locus contains not only the beta globin gene but also delta, gamma-A, gamma-G, and epsilon globin. Expression of all of these genes is controlled by single locus control region (LCR), and the genes are differentially expressed throughout development. The order of the genes in the beta-globin cluster is: 5' - epsilon – gamma-G – gamma-A – delta – beta - 3'. The arrangement of the genes directly reflects the temporal differentiation of their expression during development, with the early-embryonic stage version of the gene located closest to the LCR. If the genes are rearranged, the gene products are expressed at improper stages of development. Expression of these genes is regulated in embryonic erythropoiesis by many transcription factors, including KLF1, which is associated with the upregulation of adult hemoglobin in adult definitive erythrocytes, and KLF2, which is vital to the expression of embryonic hemoglobin. HBB complex Many CRMs have been mapped within the cluster of genes encoding β-like globins expressed in embryonic (HBE1), fetal (HBG1 and HBG2), and adult (HBB and HBD) erythroid cells. All are marked by DNase I hypersensitive sites and footprints, and many are bound by GATA1 in peripheral blood derived erythroblasts (PBDEs). A DNA segment located between the HBG1 and HBD genes is one of the DNA segments bound by BCL11A and several other proteins to negatively regulate HBG1 and HBG2. It is sensitive to DNase I but is not conserved across mammals. An enhancer located 3′ of the HBG1 gene is bound by several proteins in PBDEs and K562 cells and is sensitive to DNase I, but shows almost no signal for mammalian constraint. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is critical for the formation of hemoglobin? A. salts B. proteins C. platelets D. iron ions Answer:
sciq-8221
multiple_choice
What is the major source of warmth for earth?
[ "the sun", "the Moon", "seasons", "equator" ]
A
Relavent Documents: Document 0::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 1::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. 
These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 2::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 3::: Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women. The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development. Current status of girls and women in STEM education Overall trends in STEM education Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. 
Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle. Learning achievement in STEM education Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and Document 4::: Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers. There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM. Terminology History Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the major source of warmth for earth? A. the sun B. the Moon C. seasons D. equator Answer:
sciq-8399
multiple_choice
What is a measure of the amount of space a substance or an object takes up?
[ "density", "mass", "volume", "speed" ]
C
Relavent Documents: Document 0::: Quantity calculus is the formal method for describing the mathematical relations between abstract physical quantities. Its roots can be traced to Fourier's concept of dimensional analysis (1822). The basic axiom of quantity calculus is Maxwell's description of a physical quantity as the product of a "numerical value" and a "reference quantity" (i.e. a "unit quantity" or a "unit of measurement"). De Boer summarized the multiplication, division, addition, association and commutation rules of quantity calculus and proposed that a full axiomatization has yet to be completed. Measurements are expressed as products of a numeric value with a unit symbol, e.g. "12.7 m". Unlike algebra, the unit symbol represents a measurable quantity such as a meter, not an algebraic variable. A careful distinction needs to be made between abstract quantities and measurable quantities. The multiplication and division rules of quantity calculus are applied to SI base units (which are measurable quantities) to define SI derived units, including dimensionless derived units, such as the radian (rad) and steradian (sr) which are useful for clarity, although they are both algebraically equal to 1. Thus there is some disagreement about whether it is meaningful to multiply or divide units. Emerson suggests that if the units of a quantity are algebraically simplified, they then are no longer units of that quantity. Johansson proposes that there are logical flaws in the application of quantity calculus, and that the so-called dimensionless quantities should be understood as "unitless quantities". How to use quantity calculus for unit conversion and keeping track of units in algebraic manipulations is explained in the handbook Quantities, Units and Symbols in Physical Chemistry. Notes Document 1::: Absolute molar mass is a process used to determine the characteristics of molecules. History The first absolute measurements of molecular weights (i.e. made without reference to standards) were based on fundamental physical characteristics and their relation to the molar mass. The most useful of these were membrane osmometry and sedimentation. Another absolute instrumental approach was also possible with the development of light scattering theory by Albert Einstein, Chandrasekhara Venkata Raman, Peter Debye, Bruno H. Zimm, and others. The problem with measurements made using membrane osmometry and sedimentation was that they only characterized the bulk properties of the polymer sample. Moreover, the measurements were excessively time consuming and prone to operator error. In order to gain information about a polydisperse mixture of molar masses, a method for separating the different sizes was developed. This was achieved by the advent of size exclusion chromatography (SEC). SEC is based on the fact that the pores in the packing material of chromatography columns could be made small enough for molecules to become temporarily lodged in their interstitial spaces. As the sample makes its way through a column the smaller molecules spend more time traveling in these void spaces than the larger ones, which have fewer places to "wander". The result is that a sample is separated according to its hydrodynamic volume . As a consequence, the big molecules come out first, and then the small ones follow in the eluent. By choosing a suitable column packing material it is possible to define the resolution of the system. Columns can also be combined in series to increase resolution or the range of sizes studied. 
The next step is to convert the time at which the samples eluted into a measurement of molar mass. This is possible because if the molar mass of a standard were known, the time at which this standard eluted should be equal to a specific molar mass. Using multiple Document 2::: A proof mass or test mass is a known quantity of mass used in a measuring instrument as a reference for the measurement of an unknown quantity. A mass used to calibrate a weighing scale is sometimes called a calibration mass or calibration weight. A proof mass that deforms a spring in an accelerometer is sometimes called the seismic mass. In a convective accelerometer, a fluid proof mass may be employed. See also Calibration, checking or adjustment by comparison with a standard Control variable, the experimental element that is constant and unchanged throughout the course of a scientific investigation Test particle, an idealized model of an object in which all physical properties are assumed to be negligible, except for the property being studied Document 3::: Vapour density is the density of a vapour in relation to that of hydrogen. It may be defined as mass of a certain volume of a substance divided by mass of same volume of hydrogen. vapour density = mass of n molecules of gas / mass of n molecules of hydrogen gas . vapour density = molar mass of gas / molar mass of H2 vapour density = molar mass of gas / 2.016 vapour density = × molar mass (and thus: molar mass = ~2 × vapour density) For example, vapour density of mixture of NO2 and N2O4 is 38.3. Vapour density is a dimensionless quantity. Alternative definition In many web sources, particularly in relation to safety considerations at commercial and industrial facilities in the U.S., vapour density is defined with respect to air, not hydrogen. Air is given a vapour density of one. For this use, air has a molecular weight of 28.97 atomic mass units, and all other gas and vapour molecular weights are divided by this number to derive their vapour density. For example, acetone has a vapour density of 2 in relation to air. That means acetone vapour is twice as heavy as air. This can be seen by dividing the molecular weight of Acetone, 58.1 by that of air, 28.97, which equals 2. With this definition, the vapour density would indicate whether a gas is denser (greater than one) or less dense (less than one) than air. The density has implications for container storage and personnel safety—if a container can release a dense gas, its vapour could sink and, if flammable, collect until it is at a concentration sufficient for ignition. Even if not flammable, it could collect in the lower floor or level of a confined space and displace air, possibly presenting an asphyxiation hazard to individuals entering the lower part of that space. See also Relative density (also known as specific gravity) Victor Meyer apparatus Document 4::: In chemistry and related fields, the molar volume, symbol Vm, or of a substance is the ratio of the volume occupied by a substance to the amount of substance, usually given at a given temperature and pressure. It is equal to the molar mass (M) divided by the mass density (ρ): The molar volume has the SI unit of cubic metres per mole (m3/mol), although it is more typical to use the units cubic decimetres per mole (dm3/mol) for gases, and cubic centimetres per mole (cm3/mol) for liquids and solids. 
Definition The molar volume of a substance i is defined as its molar mass divided by its density ρi0: Vm,i = Mi / ρi0. For an ideal mixture containing N components, the molar volume of the mixture is the mole-fraction-weighted sum of the molar volumes of its individual components: Vm = Σ xi Vm,i. For a real mixture the molar volume cannot be calculated without knowing the density: Vm = (Σ xi Mi) / ρmixture. There are many liquid–liquid mixtures, for instance mixing pure ethanol and pure water, which may experience contraction or expansion upon mixing. This effect is represented by the quantity excess volume of the mixture, an example of an excess property. Relation to specific volume Molar volume is related to specific volume by the product with molar mass: Vm = M v. This follows from above where the specific volume is the reciprocal of the density of a substance: v = 1/ρ, so Vm = M/ρ. Ideal gases For ideal gases, the molar volume is given by the ideal gas equation; this is a good approximation for many common gases at standard temperature and pressure. The ideal gas equation can be rearranged to give an expression for the molar volume of an ideal gas: Vm = V/n = RT/P. Hence, for a given temperature and pressure, the molar volume is the same for all ideal gases and is based on the gas constant: R = 8.314 J/(K·mol), or about 8.206 × 10−5 m3·atm/(K·mol). The molar volume of an ideal gas at 100 kPa (1 bar) is 22.71 dm3/mol at 0 °C, 24.79 dm3/mol at 25 °C. The molar volume of an ideal gas at 1 atmosphere of pressure is 22.41 dm3/mol at 0 °C, 24.47 dm3/mol at 25 °C. Crystalline solids For crystalline solids, the molar volume can be measured by X-ray crystallography. The unit cell
sciq-6995
multiple_choice
What do you call an area covered with water, or possessing very soggy soil, all or part of the year?
[ "stream", "island", "peninsula", "wetland" ]
D
Relavent Documents: Document 0::: The University of Michigan Biological Station (UMBS) is a research and teaching facility operated by the University of Michigan. It is located on the south shore of Douglas Lake in Cheboygan County, Michigan. The station consists of 10,000 acres (40 km2) of land near Pellston, Michigan in the northern Lower Peninsula of Michigan and 3,200 acres (13 km2) on Sugar Island in the St. Mary's River near Sault Ste. Marie, in the Upper Peninsula. It is one of only 28 Biosphere Reserves in the United States. Overview Founded in 1909, it has grown to include approximately 150 buildings, including classrooms, student cabins, dormitories, a dining hall, and research facilities. Undergraduate and graduate courses are available in the spring and summer terms. It has a full-time staff of 15. In the 2000s, UMBS is increasingly focusing on the measurement of climate change. Its field researchers are gauging the impact of global warming and increased levels of atmospheric carbon dioxide on the ecosystem of the upper Great Lakes region, and are using field data to improve the computer models used to forecast further change. Several archaeological digs have been conducted at the station as well. UMBS field researchers sometimes call the station "bug camp" amongst themselves. This is believed to be due to the number of mosquitoes and other insects present. It is also known as "The Bio-Station". The UMBS is also home to Michigan's most endangered species and one of the most endangered species in the world: the Hungerford's Crawling Water Beetle. The species lives in only five locations in the world, two of which are in Emmet County. One of these, a two and a half mile stretch downstream from the Douglas Road crossing of the East Branch of the Maple River supports the only stable population of the Hungerford's Crawling Water Beetle, with roughly 1000 specimens. This area, though technically not part of the UMBS is largely within and along the boundary of the University of Michigan Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: Ecological classification or ecological typology is the classification of land or water into geographical units that represent variation in one or more ecological features. Traditional approaches focus on geology, topography, biogeography, soils, vegetation, climate conditions, living species, habitats, water resources, and sometimes also anthropic factors. Most approaches pursue the cartographical delineation or regionalisation of distinct areas for mapping and planning. Approaches to classifications Different approaches to ecological classifications have been developed in terrestrial, freshwater and marine disciplines. Traditionally these approaches have focused on biotic components (vegetation classification), abiotic components (environmental approaches) or implied ecological and evolutionary processes (biogeographical approaches). Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy (ecotope). Vegetation classification Vegetation is often used to classify terrestrial ecological units. Vegetation classification can be based on vegetation structure and floristic composition. Classifications based entirely on vegetation structure overlap with land cover mapping categories. Many schemes of vegetation classification are in use by the land, resource and environmental management agencies of different national and state jurisdictions. The International Vegetation Classification (IVC or EcoVeg) has been recently proposed but has not been yet widely adopted. Vegetation classifications have limited use in aquatic systems, since only a handful of freshwater or marine habitats are dominated by plants (e.g. kelp forests or seagrass meadows). Also, some extreme terrestrial environments, like subterranean or cryogenic ecosystems, are not properly described in vegetation c Document 3::: Flooded grasslands and savannas is a terrestrial biome of the World Wide Fund for Nature (WWF) biogeographical system, consisting of large expanses or complexes of flooded grasslands. These areas support numerous plants and animals adapted to the unique hydrologic regimes and soil conditions. Large congregations of migratory and resident waterbirds may be found in these regions. The relative importance of these habitat types for these birds as well as more vagile taxa typically varies as the availability of water and productivity annually and seasonally shifts among complexes of smaller and larger wetlands throughout a region. This habitat type is found on four of the continents on Earth. 
Some globally outstanding flooded savannas and grasslands occur in the Everglades, Pantanal, Lake Chad flooded savanna, Zambezian flooded grasslands, and the Sudd. The Everglades, with an area of , are the world's largest rain-fed flooded grassland on a limestone substrate, and feature some 11,000 species of seed-bearing plants, 25 varieties of orchids, 300 bird species, and 150 fish species. The Pantanal, with an area of , is the largest flooded grassland on Earth, supporting over 260 species of fish, 700 birds, 90 mammals, 160 reptiles, 45 amphibians, 1,000 butterflies, and 1,600 species of plants. The flooded savannas and grasslands are generally the largest complexes in each region. See also Coniferous swamp Dambo Fen Flood-meadow Freshwater swamp forest Mangroves Marsh Marsh gas Muck (soil) Peat Peat swamp forest Salt marsh Shrub swamp Water-meadow Wet meadow Document 4::: Land cover is the physical material at the surface of Earth. Land covers include grass, asphalt, trees, bare ground, water, etc. Earth cover is the expression used by ecologist Frederick Edward Clements that has its closest modern equivalent being vegetation. The expression continues to be used by the United States Bureau of Land Management. There are two primary methods for capturing information on land cover: field survey, and analysis of remotely sensed imagery. Land change models can be built from these types of data to assess changes in land cover over time. One of the major land cover issues (as with all natural resource inventories) is that every survey defines similarly named categories in different ways. For instance, there are many definitions of "forest"—sometimes within the same organisation—that may or may not incorporate a number of different forest features (e.g., stand height, canopy cover, strip width, inclusion of grasses, and rates of growth for timber production). Areas without trees may be classified as forest cover "if the intention is to re-plant" (UK and Ireland), while areas with many trees may not be labelled as forest "if the trees are not growing fast enough" (Norway and Finland). Distinction from "land use" "Land cover" is distinct from "land use", despite the two terms often being used interchangeably. Land use is a description of how people utilize the land and of socio-economic activity. Urban and agricultural land uses are two of the most commonly known land use classes. At any one point or place, there may be multiple and alternate land uses, the specification of which may have a political dimension. The origins of the "land cover/land use" couplet and the implications of their confusion are discussed in Fisher et al. (2005). Types Following table is Land Cover statistics by Food and Agriculture Organization (FAO) with 14 classes. Mapping Land cover change detection using remote sensing and geospatial data provides baselin The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do you call an area covered with water, or possessing very soggy soil, all or part of the year? A. stream B. island C. peninsula D. wetland Answer:
sciq-9033
multiple_choice
What type of doctor specializes in the laboratory detection of disease?
[ "diagnostician", "pathologist", "infectious disease physician", "internist" ]
B
Relavent Documents: Document 0::: The Association for Clinical Biochemistry and Laboratory Medicine is a United Kingdom-based learned society dedicated to the practice and promotion of clinical biochemistry. It was founded in 1953 and its official journal is the Annals of Clinical Biochemistry. The association is a full, national society member of the International Federation of Clinical Chemistry and Laboratory Medicine IFCC as well as a full member of the regional European Federation of Clinical Chemistry and Laboratory Medicine. History Founded as the Association of Clinical Biochemists, the association has evolved as biochemistry has changed with advances in laboratory medicine. Recognizing an increasing number of medical members, the name was changed in 2005 to Association for Clinical Biochemistry. In 2007 the "Association of Clinical Scientists in Immunology" merged with the ACB. The membership expanded in 2010 with the merger with the "Association of Clinical Microbiologists". The broader nature of the membership contributed to the renaming of the ACB to its current name at the annual meeting in 2013. Clinical concerns The ACB is responsible for determining the specific content for courses related to certification as a clinical biochemist in the UK. Normally this is a three or four year academic sequence followed by qualification examinations. Because of the competitive admission criteria, many applicants have advanced degrees before beginning the biochemistry program. Papers published by ACB members are related to the use of laboratories by doctors and patient health diagnostic testing in the UK. Blood draw procedures and tests by junior doctors and nurses in the A&E department of a Birmingham hospital were frequently performed with the wrong collection equipment or were mishandled afterward. The College of Emergency Medicine said the issue identified by the audit at Birmingham is "universally relevant". A 2008 study emphasized issues with junior doctors who were not being trained in p Document 1::: The Intersociety Council for Pathology Information (ICPI) is a nonprofit educational organization that provides information about academic paths and career options in medical and research pathology. Directory of Pathology Training Programs in the United States and Canada ICPI publishes the annual Directory of Pathology Training Programs in the United States and Canada and a companion online searchable directory. Career Development Resources The Pathology: A Career in Medicine brochure describes the role of a pathologist in medical, research, and academic settings. Pathology: A Career in Medicine Sponsors ICPI is sponsored by five charter pathology societies and twelve Associate member societies in North America. Awards and Grants Travel Awards support participation of medical students, graduate students, residents, and fellows in the scientific meetings of its sponsoring societies. Career Outreach Grants promote awareness of pathology to the public, media, students, and professional and educational organizations. The Medical Student Interest Group Matching Grants (MSIGs) encourages medical students to consider pathology as a career by providing funds to pathology departments to support MSIGs. Document 2::: Medical education is education related to the practice of being a medical practitioner, including the initial training to become a physician (i.e., medical school and internship) and additional training thereafter (e.g., residency, fellowship, and continuing medical education). 
Medical education and training varies considerably across the world. Various teaching methodologies have been used in medical education, which is an active area of educational research. Medical education is also the subject-didactic academic field of educating medical doctors at all levels, including entry-level, post-graduate, and continuing medical education. Specific requirements such as entrustable professional activities must be met before moving on in stages of medical education. Common techniques and evidence base Medical education applies theories of pedagogy specifically in the context of medical education. Medical education has been a leader in the field of evidence-based education, through the development of evidence syntheses such as the Best Evidence Medical Education collection, formed in 1999, which aimed to "move from opinion-based education to evidence-based education". Common evidence-based techniques include the Objective structured clinical examination (commonly known as the 'OSCE) to assess clinical skills, and reliable checklist-based assessments to determine the development of soft skills such as professionalism. However, there is a persistence of ineffective instructional methods in medical education, such as the matching of teaching to learning styles and Edgar Dales' "Cone of Learning". Entry-level education Entry-level medical education programs are tertiary-level courses undertaken at a medical school. Depending on jurisdiction and university, these may be either undergraduate-entry (most of Europe, Asia, South America and Oceania), or graduate-entry programs (mainly Australia, Philippines and North America). Some jurisdictions and universities provide both u Document 3::: Alternative medicine degrees include academic degrees, first professional degrees, qualifications or diplomas issued by accredited and legally recognised academic institutions in alternative medicine or related areas, either human or animal. Examples Examples of alternative medicine degrees include: Ayurveda - BSc, MSc, BAMC, MD(Ayurveda), M.S.(Ayurveda), Ph.D(Ayurveda) Siddha medicine - BSMS, MD(Siddha), Ph.D(Siddha) Acupuncture - BSc, LAc, DAc, AP, DiplAc, MAc Herbalism - Acs, BSc, Msc. Homeopathy - BSc, MSc, DHMs, BHMS, M.D. (HOM), PhD in homoeopathy Naprapathy - DN Naturopathic medicine - BSc, MSc, BNYS, MD (Naturopathy), ND, NMD Oriental Medicine - BSc, MSOM, MSTOM, KMD (Korea), BCM (Hong Kong), MCM (Hong Kong), BChinMed (Hong Kong), MChinMed (Hong Kong), MD (Taiwan), MB (China), TCM-Traditional Chinese medicine master (China) Osteopathy - BOst, BOstMed, BSc (Osteo), DipOsteo Document 4::: In France and in other countries like Portugal, Spain, Belgium or Switzerland, a Biological pharmacist (called Pharmacien biologiste in France) is a Pharmacist specialized in Clinical Biology a speciality similar to Clinical Pathology. They have almost the same rights as Medical Doctors specialized in this discipline. They both are called a "Clinical biologist"". These Pharm.D. follow a "post-graduate" formation in hospital's medical laboratories. In France, this specialization called "Internat de Biologie médicale" is a residency and lasts fours years after the five undergraduate years common to all pharmacists. 
External links Reglementation for French Residency in Clinical Pathology (Biologie médicale) Curriculum Content of French Resident formation in Clinical Pathology, First Level and Second Level See also Pathology Medical laboratory Anatomic pathology Medical technologist Veterinary pathology Clinical Biologist Pathology The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of doctor specializes in the laboratory detection of disease? A. diagnostician B. pathologist C. infectious disease physician D. internist Answer:
sciq-10298
multiple_choice
What do you call the light particles that travel through the radiative zone?
[ "photons", "ions", "neutrons", "electrons" ]
A
Relavent Documents: Document 0::: In infrared astronomy, the L band is an atmospheric transmission window centred on 3.5 micrometres (in the mid-infrared). Electromagnetic spectrum Infrared imaging Document 1::: Thermal radiation is electromagnetic radiation generated by the thermal motion of particles in matter. Thermal radiation is generated when heat from the movement of charges in the material (electrons and protons in common forms of matter) is converted to electromagnetic radiation. All matter with a temperature greater than absolute zero emits thermal radiation. At room temperature, most of the emission is in the infrared (IR) spectrum. Particle motion results in charge-acceleration or dipole oscillation which produces electromagnetic radiation. Infrared radiation emitted by animals (detectable with an infrared camera) and cosmic microwave background radiation are examples of thermal radiation. If a radiation object meets the physical characteristics of a black body in thermodynamic equilibrium, the radiation is called blackbody radiation. Planck's law describes the spectrum of blackbody radiation, which depends solely on the object's temperature. Wien's displacement law determines the most likely frequency of the emitted radiation, and the Stefan–Boltzmann law gives the radiant intensity. Thermal radiation is also one of the fundamental mechanisms of heat transfer. Overview Thermal radiation is the emission of electromagnetic waves from all matter that has a temperature greater than absolute zero. Thermal radiation reflects the conversion of thermal energy into electromagnetic energy. Thermal energy is the kinetic energy of random movements of atoms and molecules in matter. All matter with a nonzero temperature is composed of particles with kinetic energy. These atoms and molecules are composed of charged particles, i.e., protons and electrons. The kinetic interactions among matter particles result in charge acceleration and dipole oscillation. This results in the electrodynamic generation of coupled electric and magnetic fields, resulting in the emission of photons, radiating energy away from the body. Electromagnetic radiation, including visible light, will pr Document 2::: In particle physics, a radiative process refers to one elementary particle emitting another and continuing to exist. This typically happens when a fermion emits a boson such as a gluon or photon. See also Bremsstrahlung Radiation Particle physics Document 3::: Solar radio emission refers to radio waves that are naturally produced by the Sun, primarily from the lower and upper layers of the atmosphere called the chromosphere and corona, respectively. The Sun produces radio emissions through four known mechanisms, each of which operates primarily by converting the energy of moving electrons into electromagnetic radiation. The four emission mechanisms are thermal bremsstrahlung (braking) emission, gyromagnetic emission, plasma emission, and electron-cyclotron maser emission. The first two are incoherent mechanisms, which means that they are the summation of radiation generated independently by many individual particles. These mechanisms are primarily responsible for the persistent "background" emissions that slowly vary as structures in the atmosphere evolve. The latter two processes are coherent mechanisms, which refers to special cases where radiation is efficiently produced at a particular set of frequencies. 
Coherent mechanisms can produce much larger brightness temperatures (intensities) and are primarily responsible for the intense spikes of radiation called solar radio bursts, which are byproducts of the same processes that lead to other forms of solar activity like solar flares and coronal mass ejections. History and observations Radio emission from the Sun was first reported in the scientific literature by Grote Reber in 1944. Those were observations of 160 MHz frequency (2 meters wavelength) microwave emission emanating from the chromosphere. However, the earliest known observation was in 1942 during World War II by British radar operators who detected an intense low-frequency solar radio burst; that information was kept secret as potentially useful in evading enemy radar, but was later described in a scientific journal after the war. One of the most significant discoveries from early solar radio astronomers such as Joseph Pawsey was that the Sun produces much more radio emission than expected from standard blac Document 4::: Cosmic rays or astroparticles are high-energy particles or clusters of particles (primarily represented by protons or atomic nuclei) that move through space at nearly the speed of light. They originate from the Sun, from outside of the Solar System in our own galaxy, and from distant galaxies. Upon impact with Earth's atmosphere, cosmic rays produce showers of secondary particles, some of which reach the surface, although the bulk is deflected off into space by the magnetosphere or the heliosphere. Cosmic rays were discovered by Victor Hess in 1912 in balloon experiments, for which he was awarded the 1936 Nobel Prize in Physics. Direct measurement of cosmic rays, especially at lower energies, has been possible since the launch of the first satellites in the late 1950s. Particle detectors similar to those used in nuclear and high-energy physics are used on satellites and space probes for research into cosmic rays. Data from the Fermi Space Telescope (2013) have been interpreted as evidence that a significant fraction of primary cosmic rays originate from the supernova explosions of stars. Based on observations of neutrinos and gamma rays from blazar TXS 0506+056 in 2018, active galactic nuclei also appear to produce cosmic rays. Etymology The term ray (as in optical ray) seems to have arisen from an initial belief, due to their penetrating power, that cosmic rays were mostly electromagnetic radiation. Nevertheless, following wider recognition of cosmic rays as being various high-energy particles with intrinsic mass, the term "rays" was still consistent with then known particles such as cathode rays, canal rays, alpha rays and beta rays. Meanwhile "cosmic" ray photons, which are quanta of electromagnetic radiation (and so have no intrinsic mass) are known by their common names, such as gamma rays or X-rays, depending on their photon energy. Composition Of primary cosmic rays, which originate outside of Earth's atmosphere, about 99% are the bare nuclei of common at The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do you call the light particles that travel through the radiative zone? A. photons B. ions C. neutrons D. electrons Answer:
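As a quick illustration of the two radiation laws named in the thermal-radiation snippet above (Wien's displacement law and the Stefan–Boltzmann law), here is a minimal Python sketch; the constants are the standard values, and the 5778 K solar temperature is only an assumed, approximate figure used for the example:

```python
# Minimal sketch of Wien's displacement law and the Stefan-Boltzmann law.
WIEN_B = 2.897771955e-3   # Wien's displacement constant, m*K
SIGMA = 5.670374419e-8    # Stefan-Boltzmann constant, W m^-2 K^-4

def peak_wavelength(temperature_k: float) -> float:
    """Wavelength (m) of peak blackbody emission at the given temperature."""
    return WIEN_B / temperature_k

def radiant_exitance(temperature_k: float) -> float:
    """Power radiated per unit area (W/m^2) of an ideal blackbody."""
    return SIGMA * temperature_k ** 4

t_sun = 5778.0  # assumed effective solar temperature, K (approximate)
print(f"Peak wavelength: {peak_wavelength(t_sun) * 1e9:.0f} nm")   # ~502 nm
print(f"Radiant exitance: {radiant_exitance(t_sun):.2e} W/m^2")    # ~6.3e7 W/m^2
```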
sciq-6376
multiple_choice
What occurs when light reflects off a very smooth surface and forms a clear image?
[ "projection", "absorption", "regular reflection", "refraction" ]
C
Relavent Documents: Document 0::: The study of image formation encompasses the radiometric and geometric processes by which 2D images of 3D objects are formed. In the case of digital images, the image formation process also includes analog to digital conversion and sampling. Imaging The imaging process is a mapping of an object to an image plane. Each point on the image corresponds to a point on the object. An illuminated object will scatter light toward a lens and the lens will collect and focus the light to create the image. The ratio of the height of the image to the height of the object is the magnification. The spatial extent of the image surface and the focal length of the lens determines the field of view of the lens. Image formation of mirror these have a center of curvature and its focal length of the mirror is half of the center of curvature. Illumination An object may be illuminated by the light from an emitting source such as the sun, a light bulb or a Light Emitting Diode. The light incident on the object is reflected in a manner dependent on the surface properties of the object. For rough surfaces, the reflected light is scattered in a manner described by the Bi-directional Reflectance Distribution Function (BRDF) of the surface. The BRDF of a surface is the ratio of the exiting power per square meter per steradian (radiance) to the incident power per square meter (irradiance). The BRDF typically varies with angle and may vary with wavelength, but a specific important case is a surface that has constant BRDF. This surface type is referred to as Lambertian and the magnitude of the BRDF is R/π, where R is the reflectivity of the surface. The portion of scattered light that propagates toward the lens is collected by the entrance pupil of the imaging lens over the field of view. Field of view and imagery The Field of view of a lens is limited by the size of the image plane and the focal length of the lens. The relationship between a location on the image and a location on t Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: Gloss is an optical property which indicates how well a surface reflects light in a specular (mirror-like) direction. It is one of the important parameters that are used to describe the visual appearance of an object. Other categories of visual appearance related to the perception of regular or diffuse reflection and transmission of light have been organized under the concept of cesia in an order system with three variables, including gloss among the involved aspects. The factors that affect gloss are the refractive index of the material, the angle of incident light and the surface topography. Apparent gloss depends on the amount of specular reflection – light reflected from the surface in an equal amount and the symmetrical angle to the one of incoming light – in comparison with diffuse reflection – the amount of light scattered into other directions. Theory When light illuminates an object, it interacts with it in a number of ways: Absorbed within it (largely responsible for colour) Transmitted through it (dependent on the surface transparency and opacity) Scattered from or within it (diffuse reflection, haze and transmission) Specularly reflected from it (gloss) Variations in surface texture directly influence the level of specular reflection. Objects with a smooth surface, i.e. highly polished or containing coatings with finely dispersed pigments, appear shiny to the eye due to a large amount of light being reflected in a specular direction whilst rough surfaces reflect no specular light as the light is scattered in other directions and therefore appears dull. The image forming qualities of these surfaces are much lower making any reflections appear blurred and distorted. Substrate material type also influences the gloss of a surface. Non-metallic materials, i.e. plastics etc. produce a higher level of reflected light when illuminated at a greater illumination angle due to light being absorbed into the material or being diffusely scattered depending o Document 3::: "La dioptrique" (in English "Dioptrique", "Optics", or "Dioptrics"), is a short treatise published in 1637 included in one of the Essays written with Discourse on the Method by René Descartes. In this essay Descartes uses various models to understand the properties of light. This essay is known as Descartes' greatest contribution to optics, as it is the first publication of the Law of Refraction. First Discourse: On Light The first discourse captures Descartes' theories on the nature of light. In the first model, he compares light to a stick that allows a blind person to discern his environment through touch. 
Descartes says: You have only to consider that the differences which a blind man notes among trees, rocks, water, and similar things through the medium of his stick do not seem less to him than those among red, yellow, green, and all the other colors seem to us; and that nevertheless these differences are nothing other, in all these bodies, than the diverse ways of moving, or of resisting the movements of, this stick. Descartes' second model on light uses his theory of the elements to demonstrate the rectilinear transmission of light as well as the movement of light through solid objects. He uses a metaphor of wine flowing through a vat of grapes, then exiting through a hole at the bottom of the vat. Now consider that, since there is no vacuum in Nature as almost all the Philosophers affirm, and since there are nevertheless many pores in all the bodies that we perceive around us, as experiment can show quite clearly, it is necessary that these pores be filled with some very subtle and very fluid material, extending without interruption from the stars and planets to us. Thus, this subtle material being compared with the wine in that vat, and the less fluid or heavier parts, of the air as well as of other transparent bodies, being compared with the bunches of grapes which are mixed in, you will easily understand the following: Just as the parts of this wine.. Document 4::: Reflection is the change in direction of a wavefront at an interface between two different media so that the wavefront returns into the medium from which it originated. Common examples include the reflection of light, sound and water waves. The law of reflection says that for specular reflection (for example at a mirror) the angle at which the wave is incident on the surface equals the angle at which it is reflected. In acoustics, reflection causes echoes and is used in sonar. In geology, it is important in the study of seismic waves. Reflection is observed with surface waves in bodies of water. Reflection is observed with many types of electromagnetic wave, besides visible light. Reflection of VHF and higher frequencies is important for radio transmission and for radar. Even hard X-rays and gamma rays can be reflected at shallow angles with special "grazing" mirrors. Reflection of light Reflection of light is either specular (mirror-like) or diffuse (retaining the energy, but losing the image) depending on the nature of the interface. In specular reflection the phase of the reflected waves depends on the choice of the origin of coordinates, but the relative phase between s and p (TE and TM) polarizations is fixed by the properties of the media and of the interface between them. A mirror provides the most common model for specular light reflection, and typically consists of a glass sheet with a metallic coating where the significant reflection occurs. Reflection is enhanced in metals by suppression of wave propagation beyond their skin depths. Reflection also occurs at the surface of transparent media, such as water or glass. In the diagram, a light ray PO strikes a vertical mirror at point O, and the reflected ray is OQ. By projecting an imaginary line through point O perpendicular to the mirror, known as the normal, we can measure the angle of incidence, θi and the angle of reflection, θr. The law of reflection states that θi = θr, or in other words, the angl The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. 
What occurs when light reflects off a very smooth surface and forms a clear image? A. projection B. absorption C. regular reflection D. refraction Answer:
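The reflection snippet above states the law of reflection (angle of incidence equals angle of reflection) for specular surfaces. A minimal Python sketch of the equivalent vector form, r = d − 2(d·n)n, is shown below; the incident direction and horizontal mirror are assumed purely for illustration and are not taken from the quoted source:

```python
import math

def reflect(d, n):
    """Specular reflection of direction d about unit surface normal n: r = d - 2(d.n)n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def angle_to_normal(v, n):
    """Angle (degrees) between vector v and unit normal n."""
    mag = math.sqrt(sum(vi * vi for vi in v))
    cos = abs(sum(vi * ni for vi, ni in zip(v, n))) / mag
    return math.degrees(math.acos(cos))

incident = (1.0, -1.0, 0.0)   # assumed ray heading down onto a horizontal mirror
normal = (0.0, 1.0, 0.0)      # unit normal of the mirror
reflected = reflect(incident, normal)
print(reflected)                           # (1.0, 1.0, 0.0)
print(angle_to_normal(incident, normal))   # ~45 degrees in
print(angle_to_normal(reflected, normal))  # ~45 degrees out
```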
sciq-10690
multiple_choice
What do you call a circuit that consists of one loop, which, if interrupted at any point, causes cessation of the whole circuit's electric current?
[ "series circuit", "parallel circuit", "dramatic circuit", "constant circuit" ]
A
Relavent Documents: Document 0::: Mathematical methods are integral to the study of electronics. Mathematics in electronics Electronics engineering careers usually include courses in calculus (single and multivariable), complex analysis, differential equations (both ordinary and partial), linear algebra and probability. Fourier analysis and Z-transforms are also subjects which are usually included in electrical engineering programs. Laplace transform can simplify computing RLC circuit behaviour. Basic applications A number of electrical laws apply to all electrical networks. These include Faraday's law of induction: Any change in the magnetic environment of a coil of wire will cause a voltage (emf) to be "induced" in the coil. Gauss's Law: The total of the electric flux out of a closed surface is equal to the charge enclosed divided by the permittivity. Kirchhoff's current law: the sum of all currents entering a node is equal to the sum of all currents leaving the node or the sum of total current at a junction is zero Kirchhoff's voltage law: the directed sum of the electrical potential differences around a circuit must be zero. Ohm's law: the voltage across a resistor is the product of its resistance and the current flowing through it.at constant temperature. Norton's theorem: any two-terminal collection of voltage sources and resistors is electrically equivalent to an ideal current source in parallel with a single resistor. Thévenin's theorem: any two-terminal combination of voltage sources and resistors is electrically equivalent to a single voltage source in series with a single resistor. Millman's theorem: the voltage on the ends of branches in parallel is equal to the sum of the currents flowing in every branch divided by the total equivalent conductance. See also Analysis of resistive circuits. Circuit analysis is the study of methods to solve linear systems for an unknown variable. Circuit analysis Components There are many electronic components currently used and they all have thei Document 1::: A linear circuit is an electronic circuit which obeys the superposition principle. This means that the output of the circuit F(x) when a linear combination of signals ax1(t) + bx2(t) is applied to it is equal to the linear combination of the outputs due to the signals x1(t) and x2(t) applied separately: It is called a linear circuit because the output voltage and current of such a circuit are linear functions of its input voltage and current. This kind of linearity is not the same as that of straight-line graphs. In the common case of a circuit in which the components' values are constant and don't change with time, an alternate definition of linearity is that when a sinusoidal input voltage or current of frequency f is applied, any steady-state output of the circuit (the current through any component, or the voltage between any two points) is also sinusoidal with frequency f. A linear circuit with constant component values is called linear time-invariant (LTI). Informally, a linear circuit is one in which the electronic components' values (such as resistance, capacitance, inductance, gain, etc.) do not change with the level of voltage or current in the circuit. Linear circuits are important because they can amplify and process electronic signals without distortion. An example of an electronic device that uses linear circuits is a sound system. 
Alternate definition The superposition principle, the defining equation of linearity, is equivalent to two properties, additivity and homogeneity, which are sometimes used as an alternate definition Additivity Homogeneity That is, a linear circuit is a circuit in which (1) the output when a sum of two signals is applied is equal to the sum of the outputs when the two signals are applied separately, and (2) scaling the input signal by a factor scales the output signal by the same factor. Linear and nonlinear components A linear circuit is one that has no nonlinear electronic components in it. Examples of line Document 2::: In electrical engineering, electrical terms are associated into pairs called duals. A dual of a relationship is formed by interchanging voltage and current in an expression. The dual expression thus produced is of the same form, and the reason that the dual is always a valid statement can be traced to the duality of electricity and magnetism. Here is a partial list of electrical dualities: voltage – current parallel – serial (circuits) resistance – conductance voltage division – current division impedance – admittance capacitance – inductance reactance – susceptance short circuit – open circuit Kirchhoff's current law – Kirchhoff's voltage law. Thévenin's theorem – Norton's theorem History The use of duality in circuit theory is due to Alexander Russell who published his ideas in 1904. Examples Constitutive relations Resistor and conductor (Ohm's law) Capacitor and inductor – differential form Capacitor and inductor – integral form Voltage division — current division Impedance and admittance Resistor and conductor Capacitor and inductor See also Duality (electricity and magnetism) Duality (mechanical engineering) Dual impedance Dual graph Mechanical–electrical analogies List of dualities Document 3::: Two-terminal components and electrical networks can be connected in series or parallel. The resulting electrical network will have two terminals, and itself can participate in a series or parallel topology. Whether a two-terminal "object" is an electrical component (e.g. a resistor) or an electrical network (e.g. resistors in series) is a matter of perspective. This article will use "component" to refer to a two-terminal "object" that participate in the series/parallel networks. Components connected in series are connected along a single "electrical path", and each component has the same electric current through it, equal to the current through the network. The voltage across the network is equal to the sum of the voltages across each component. Components connected in parallel are connected along multiple paths, and each component has the same voltage across it, equal to the voltage across the network. The current through the network is equal to the sum of the currents through each component. The two preceding statements are equivalent, except for exchanging the role of voltage and current. A circuit composed solely of components connected in series is known as a series circuit; likewise, one connected completely in parallel is known as a parallel circuit. Many circuits can be analyzed as a combination of series and parallel circuits, along with other configurations. In a series circuit, the current that flows through each of the components is the same, and the voltage across the circuit is the sum of the individual voltage drops across each component. 
In a parallel circuit, the voltage across each of the components is the same, and the total current is the sum of the currents flowing through each component. Consider a very simple circuit consisting of four light bulbs and a 12-volt automotive battery. If a wire joins the battery to one bulb, to the next bulb, to the next bulb, to the next bulb, then back to the battery in one continuous loop, the bulbs are s Document 4::: The circuit topology of an electronic circuit is the form taken by the network of interconnections of the circuit components. Different specific values or ratings of the components are regarded as being the same topology. Topology is not concerned with the physical layout of components in a circuit, nor with their positions on a circuit diagram; similarly to the mathematical concept of topology, it is only concerned with what connections exist between the components. There may be numerous physical layouts and circuit diagrams that all amount to the same topology. Strictly speaking, replacing a component with one of an entirely different type is still the same topology. In some contexts, however, these can loosely be described as different topologies. For instance, interchanging inductors and capacitors in a low-pass filter results in a high-pass filter. These might be described as high-pass and low-pass topologies even though the network topology is identical. A more correct term for these classes of object (that is, a network where the type of component is specified but not the absolute value) is prototype network. Electronic network topology is related to mathematical topology. In particular, for networks which contain only two-terminal devices, circuit topology can be viewed as an application of graph theory. In a network analysis of such a circuit from a topological point of view, the network nodes are the vertices of graph theory, and the network branches are the edges of graph theory. Standard graph theory can be extended to deal with active components and multi-terminal devices such as integrated circuits. Graphs can also be used in the analysis of infinite networks. Circuit diagrams The circuit diagrams in this article follow the usual conventions in electronics; lines represent conductors, filled small circles represent junctions of conductors, and open small circles represent terminals for connection to the outside world. In most cases, imped The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do you call a circuit that consists of one loop, which if interrupted at any point, causes cessation of the whole circuit's electric current? A. series circuit B. parallel circuit C. dramatic circuit D. constant circuit Answer:
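The series/parallel snippet above explains that voltages add across series components while currents add across parallel branches. A minimal Python sketch of the corresponding equivalent-resistance formulas follows, reusing the 12 V battery-and-bulbs example from the text with an assumed bulb resistance of 6 Ω (the source does not give one):

```python
def series_resistance(resistances):
    """Equivalent resistance in series: R_eq = R1 + R2 + ..."""
    return sum(resistances)

def parallel_resistance(resistances):
    """Equivalent resistance in parallel: 1/R_eq = 1/R1 + 1/R2 + ..."""
    return 1.0 / sum(1.0 / r for r in resistances)

v_supply = 12.0        # volts, as in the battery-and-bulbs example above
bulbs = [6.0] * 4      # assumed resistance per bulb, ohms (illustrative only)

r_s = series_resistance(bulbs)     # 24.0 ohm -> one shared current of 0.5 A
r_p = parallel_resistance(bulbs)   # 1.5 ohm  -> total current of 8.0 A
print(f"Series:   R_eq = {r_s} ohm, I = {v_supply / r_s} A")
print(f"Parallel: R_eq = {r_p} ohm, I = {v_supply / r_p} A")
```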
sciq-7415
multiple_choice
What are the most dramatic, sudden, and dangerous types of mass wasting?
[ "monsoons", "earthquakes", "landslides", "volcanoes" ]
C
Relavent Documents: Document 0::: Mass wasting, also known as mass movement, is a general term for the movement of rock or soil down slopes under the force of gravity. It differs from other processes of erosion in that the debris transported by mass wasting is not entrained in a moving medium, such as water, wind, or ice. Types of mass wasting include creep, solifluction, rockfalls, debris flows, and landslides, each with its own characteristic features, and taking place over timescales from seconds to hundreds of years. Mass wasting occurs on both terrestrial and submarine slopes, and has been observed on Earth, Mars, Venus, Jupiter's moon Io, and on many other bodies in the Solar System. Subsidence is sometimes regarded as a form of mass wasting. A distinction is then made between mass wasting by subsidence, which involves little horizontal movement, and mass wasting by slope movement. Rapid mass wasting events, such as landslides, can be deadly and destructive. More gradual mass wasting, such as soil creep, poses challenges to civil engineering, as creep can deform roadways and structures and break pipelines. Mitigation methods include slope stabilization, construction of walls, catchment dams, or other structures to contain rockfall or debris flows, afforestation, or improved drainage of source areas. Types Mass wasting is a general term for any process of erosion that is driven by gravity and in which the transported soil and rock is not entrained in a moving medium, such as water, wind, or ice. The presence of water usually aids mass wasting, but the water is not abundant enough to be regarded as a transporting medium. Thus, the distinction between mass wasting and stream erosion lies between a mudflow (mass wasting) and a very muddy stream (stream erosion), without a sharp dividing line. Many forms of mass wasting are recognized, each with its own characteristic features, and taking place over timescales from seconds to hundreds of years. Based on how the soil, regolith or rock moves dow Document 1::: Biodiversity loss includes the worldwide extinction of different species, as well as the local reduction or loss of species in a certain habitat, resulting in a loss of biological diversity. The latter phenomenon can be temporary or permanent, depending on whether the environmental degradation that leads to the loss is reversible through ecological restoration/ecological resilience or effectively permanent (e.g. through land loss). The current global extinction (frequently called the sixth mass extinction or Anthropocene extinction), has resulted in a biodiversity crisis being driven by human activities which push beyond the planetary boundaries and so far has proven irreversible. The main direct threats to conservation (and thus causes for biodiversity loss) fall in eleven categories: Residential and commercial development; farming activities; energy production and mining; transportation and service corridors; biological resource usages; human intrusions and activities that alter, destroy, disturb habitats and species from exhibiting natural behaviors; natural system modification; invasive and problematic species, pathogens and genes; pollution; catastrophic geological events, climate change, and so on. Numerous scientists and the IPBES Global Assessment Report on Biodiversity and Ecosystem Services assert that human population growth and overconsumption are the primary factors in this decline. 
However other scientists have criticized this, saying that loss of habitat is caused mainly by "the growth of commodities for export" and that population has very little to do with overall consumption, due to country wealth disparities. Climate change is another threat to global biodiversity. For example, coral reefs – which are biodiversity hotspots – will be lost within the century if global warming continues at the current rate. However, habitat destruction e.g. for the expansion of agriculture, is currently the more significant driver of contemporary biodiversity lo Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: Features, Events, and Processes (FEP) are terms used in the fields of radioactive waste management, carbon capture and storage, and hydraulic fracturing to define relevant scenarios for safety assessment studies. For a radioactive waste repository, features would include the characteristics of the site, such as the type of soil or geological formation the repository is to be built on or under. Events would include things that may or will occur in the future, like, e.g., glaciations, droughts, earthquakes, or formation of faults. Processes are things that are ongoing, such as the erosion or subsidence of the landform where the site is located on, or near. 
Several catalogues of FEP's are publicly available, a.o., this one elaborated for the NEA Clay Club dealing with the disposal of radioactive waste in deep clay formations, and those compiled for deep crystalline rocks (granite) by Svensk Kärnbränslehantering AB, SKB, the Swedish Nuclear Fuel and Waste Management Company. Document 4::: Background extinction rate, also known as the normal extinction rate, refers to the standard rate of extinction in Earth's geological and biological history before humans became a primary contributor to extinctions. This is primarily the pre-human extinction rates during periods in between major extinction events. Currently there have been five mass extinctions that have happened since the beginning of time all resulting in a variety of reasons. Overview Extinctions are a normal part of the evolutionary process, and the background extinction rate is a measurement of "how often" they naturally occur. Normal extinction rates are often used as a comparison to present day extinction rates, to illustrate the higher frequency of extinction today than in all periods of non-extinction events before it. Background extinction rates have not remained constant, although changes are measured over geological time, covering millions of years. Measurement Background extinction rates are typically measured in order to give a specific classification to a species and this is obtained over a certain period of time. There is three different ways to calculate background extinction rate.. The first is simply the number of species that normally go extinct over a given period of time. For example, at the background rate one species of bird will go extinct every estimated 400 years. Another way the extinction rate can be given is in million species years (MSY). For example, there is approximately one extinction estimated per million species years. From a purely mathematical standpoint this means that if there are a million species on the planet earth, one would go extinct every year, while if there was only one species it would go extinct in one million years, etc. The third way is in giving species survival rates over time. For example, given normal extinction rates species typically exist for 5–10 million years before going extinct. Lifespan estimates Some species lifespan es The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are the most dramatic, sudden, and dangerous types of mass wasting? A. monsoons B. earthquakes C. landslides D. volcanoes Answer:
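The background-extinction snippet above walks through the "one extinction per million species-years" arithmetic in words. A minimal Python sketch of that same calculation, with the species counts chosen only as illustrative inputs:

```python
def expected_extinctions_per_year(n_species, rate_e_per_msy=1.0):
    """Expected background extinctions per year, for a rate expressed in
    extinctions per million species-years (E/MSY)."""
    return n_species * rate_e_per_msy / 1_000_000

print(expected_extinctions_per_year(1_000_000))   # 1.0 per year, as in the text
print(expected_extinctions_per_year(10_000))      # 0.01 per year (about one per century)
```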
sciq-6181
multiple_choice
Where do most biochemical reactions take place?
[ "upper atmosphere", "stomach", "within cells", "outside of cells" ]
C
Relavent Documents: Document 0::: Biochemical engineering, also known as bioprocess engineering, is a field of study with roots stemming from chemical engineering and biological engineering. It mainly deals with the design, construction, and advancement of unit processes that involve biological organisms (such as fermentation) or organic molecules (often enzymes) and has various applications in areas of interest such as biofuels, food, pharmaceuticals, biotechnology, and water treatment processes. The role of a biochemical engineer is to take findings developed by biologists and chemists in a laboratory and translate that to a large-scale manufacturing process. History For hundreds of years, humans have made use of the chemical reactions of biological organisms in order to create goods. In the mid-1800s, Louis Pasteur was one of the first people to look into the role of these organisms when he researched fermentation. His work also contributed to the use of pasteurization, which is still used to this day. By the early 1900s, the use of microorganisms had expanded, and was used to make industrial products. Up to this point, biochemical engineering hadn't developed as a field yet. It wasn't until 1928 when Alexander Fleming discovered penicillin that the field of biochemical engineering was established. After this discovery, samples were gathered from around the world in order to continue research into the characteristics of microbes from places such as soils, gardens, forests, rivers, and streams. Today, biochemical engineers can be found working in a variety of industries, from food to pharmaceuticals. This is due to the increasing need for efficiency and production which requires knowledge of how biological systems and chemical reactions interact with each other and how they can be used to meet these needs. Education Biochemical engineering is not a major offered by most universities and is instead an area of interest under the chemical engineering major in most cases. The following universiti Document 1::: Biochemistry is the study of the chemical processes in living organisms. It deals with the structure and function of cellular components such as proteins, carbohydrates, lipids, nucleic acids and other biomolecules. Articles related to biochemistry include: 0–9 2-amino-5-phosphonovalerate - 3' end - 5' end Document 2::: Biochemists are scientists who are trained in biochemistry. They study chemical processes and chemical transformations in living organisms. Biochemists study DNA, proteins and cell parts. The word "biochemist" is a portmanteau of "biological chemist." Biochemists also research how certain chemical reactions happen in cells and tissues and observe and record the effects of products in food additives and medicines. Biochemist researchers focus on playing and constructing research experiments, mainly for developing new products, updating existing products and analyzing said products. It is also the responsibility of a biochemist to present their research findings and create grant proposals to obtain funds for future research. Biochemists study aspects of the immune system, the expressions of genes, isolating, analyzing, and synthesizing different products, mutations that lead to cancers, and manage laboratory teams and monitor laboratory work. Biochemists also have to have the capabilities of designing and building laboratory equipment and devise new methods of producing correct results for products. 
The most common industry role is the development of biochemical products and processes. Identifying substances' chemical and physical properties in biological systems is of great importance, and can be carried out by doing various types of analysis. Biochemists must also prepare technical reports after collecting, analyzing and summarizing the information and trends found. In biochemistry, researchers often break down complicated biological systems into their component parts. They study the effects of foods, drugs, allergens and other substances on living tissues; they research molecular biology, the study of life at the molecular level and the study of genes and gene expression; and they study chemical reactions in metabolism, growth, reproduction, and heredity, and apply techniques drawn from biotechnology and genetic engineering to help them in their research. Abou Document 3::: The following outline is provided as an overview of and topical guide to biophysics: Biophysics – interdisciplinary science that uses the methods of physics to study biological systems. Nature of biophysics Biophysics is An academic discipline – branch of knowledge that is taught and researched at the college or university level. Disciplines are defined (in part), and recognized by the academic journals in which research is published, and the learned societies and academic departments or faculties to which their practitioners belong. A scientific field (a branch of science) – widely recognized category of specialized expertise within science, and typically embodies its own terminology and nomenclature. Such a field will usually be represented by one or more scientific journals, where peer-reviewed research is published. A natural science – one that seeks to elucidate the rules that govern the natural world using empirical and scientific methods. A biological science – concerned with the study of living organisms, including their structure, function, growth, evolution, distribution, and taxonomy. A branch of physics – concerned with the study of matter and its motion through space and time, along with related concepts such as energy and force. An interdisciplinary field – field of science that overlaps with other sciences Scope of biophysics research Biomolecular scale Biomolecule Biomolecular structure Organismal scale Animal locomotion Biomechanics Biomineralization Motility Environmental scale Biophysical environment Biophysics research overlaps with Agrophysics Biochemistry Biophysical chemistry Bioengineering Biogeophysics Nanotechnology Systems biology Branches of biophysics Astrobiophysics – field of intersection between astrophysics and biophysics concerned with the influence of the astrophysical phenomena upon life on planet Earth or some other planet in general. Medical biophysics – interdisciplinary field that applies me Document 4::: The following outline is provided as an overview of and topical guide to biochemistry: Biochemistry – study of chemical processes in living organisms, including living matter. Biochemistry governs all living organisms and living processes. Applications of biochemistry Testing Ames test – salmonella bacteria is exposed to a chemical under question (a food additive, for example), and changes in the way the bacteria grows are measured. This test is useful for screening chemicals to see if they mutate the structure of DNA and by extension identifying their potential to cause cancer in humans. Pregnancy test – one uses a urine sample and the other a blood sample. 
Both detect the presence of the hormone human chorionic gonadotropin (hCG). This hormone is produced by the placenta shortly after implantation of the embryo into the uterine walls and accumulates. Breast cancer screening – identification of risk by testing for mutations in two genes—Breast Cancer-1 gene (BRCA1) and the Breast Cancer-2 gene (BRCA2)—allow a woman to schedule increased screening tests at a more frequent rate than the general population. Prenatal genetic testing – testing the fetus for potential genetic defects, to detect chromosomal abnormalities such as Down syndrome or birth defects such as spina bifida. PKU test – Phenylketonuria (PKU) is a metabolic disorder in which the individual is missing an enzyme called phenylalanine hydroxylase. Absence of this enzyme allows the buildup of phenylalanine, which can lead to mental retardation. Genetic engineering – taking a gene from one organism and placing it into another. Biochemists inserted the gene for human insulin into bacteria. The bacteria, through the process of translation, create human insulin. Cloning – Dolly the sheep was the first mammal ever cloned from adult animal cells. The cloned sheep was, of course, genetically identical to the original adult sheep. This clone was created by taking cells from the udder of a six-year-old The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Where do most biochemical reactions take place? A. upper atmosphere B. stomach C. within cells D. outside of cells Answer:
sciq-11510
multiple_choice
What is the major cause of chronic respiratory disease as well as cardiovascular disease and cancer?
[ "diet", "drinking", "smoking", "exercise" ]
C
Relavent Documents: Document 0::: Atherosclerosis is a pattern of the disease arteriosclerosis, characterized by development of abnormalities called lesions in walls of arteries. These lesions may lead to narrowing of the arteries' walls due to buildup of atheromatous plaques. At onset there are usually no symptoms, but if they develop, symptoms generally begin around middle age. In severe cases, it can result in coronary artery disease, stroke, peripheral artery disease, or kidney disorders, depending on which body parts(s) the affected arteries are located in the body. The exact cause of atherosclerosis is unknown and is proposed to be multifactorial. Risk factors include abnormal cholesterol levels, elevated levels of inflammatory biomarkers, high blood pressure, diabetes, smoking (both active and passive smoking), obesity, genetic factors, family history, lifestyle habits, and an unhealthy diet. Plaque is made up of fat, cholesterol, calcium, and other substances found in the blood. The narrowing of arteries limits the flow of oxygen-rich blood to parts of the body. Diagnosis is based upon a physical exam, electrocardiogram, and exercise stress test, among others. Prevention is generally by eating a healthy diet, exercising, not smoking, and maintaining a normal weight. Treatment of established disease may include medications to lower cholesterol such as statins, blood pressure medication, or medications that decrease clotting, such as aspirin. A number of procedures may also be carried out such as percutaneous coronary intervention, coronary artery bypass graft, or carotid endarterectomy. Atherosclerosis generally starts when a person is young and worsens with age. Almost all people are affected to some degree by the age of 65. It is the number one cause of death and disability in developed countries. Though it was first described in 1575, there is evidence that the condition occurred in people more than 5,000 years ago. Signs and symptoms Atherosclerosis is asymptomatic for decades because Document 1::: Alcoholic liver disease (ALD), also called alcohol-related liver disease (ARLD), is a term that encompasses the liver manifestations of alcohol overconsumption, including fatty liver, alcoholic hepatitis, and chronic hepatitis with liver fibrosis or cirrhosis. It is the major cause of liver disease in Western countries. Although steatosis (fatty liver disease) will develop in any individual who consumes a large quantity of alcoholic beverages over a long period of time, this process is transient and reversible. More than 90% of all heavy drinkers develop fatty liver whilst about 25% develop the more severe alcoholic hepatitis, and 15% liver cirrhosis. Risk factors Risk factors known as of 2010 are: Quantity of alcohol taken: Consumption of 60–80 g per day (14 g is considered one standard drink in the US, i.e., 1.5 fl oz hard liquor, 5 fl oz wine, 12 fl oz beer; drinking a six-pack of 5% ABV beer daily would be 84 g and just over the upper limit) for 20 years or more in men, or 20 g/day for women significantly increases the risk of hepatitis and fibrosis by 6% to 41%. Pattern of drinking: Drinking outside of meal times increases up to 3 times the risk of alcoholic liver disease. Sex: Women are twice as susceptible to alcohol-related liver disease, and may develop alcoholic liver disease with shorter durations and doses of chronic consumption. 
The lesser amount of alcohol dehydrogenase secreted in the gut, higher proportion of body fat in women, and changes in alcohol absorption due to the menstrual cycle may explain this phenomenon. Ethnicity: Higher rates of alcohol-related liver disease, unrelated to differences in amounts of alcohol consumed, are seen in African-American and Hispanic males compared to Caucasian males. Hepatitis C infection: A concomitant hepatitis C infection significantly accelerates the process of liver injury. Genetic factors: Genetic factors predispose both to alcoholism and to alcoholic liver disease. Both monozygotic twins are more l Document 2::: The scientific community in the United States and Europe are primarily concerned with the possible effect of electronic cigarette use on public health. There is concern among public health experts that e-cigarettes could renormalize smoking, weaken measures to control tobacco, and serve as a gateway for smoking among youth. The public health community is divided over whether to support e-cigarettes, because their safety and efficacy for quitting smoking is unclear. Many in the public health community acknowledge the potential for their quitting smoking and decreasing harm benefits, but there remains a concern over their long-term safety and potential for a new era of users to get addicted to nicotine and then tobacco. There is concern among tobacco control academics and advocates that prevalent universal vaping "will bring its own distinct but as yet unknown health risks in the same way tobacco smoking did, as a result of chronic exposure", among other things. Medical organizations differ in their views about the health implications of vaping and avoid releasing statements about the relative toxicity of electronic cigarettes because of the many different device types, liquid formulations, and new devices that come onto the market. Some healthcare groups and policy makers have hesitated to recommend e-cigarettes with nicotine for quitting smoking, despite some evidence of effectiveness (when compared to Nicotine Replacement Therapy or e-cigarettes without nicotine) and safety. Reasons for hesitancy include challenges ensuring that quality control measures on the devices and liquids are met, unknown second hand vapour inhalation effects, uncertainty about EC use leading to the initiation of smoking or effects on people new to smoking who develop nicotine dependence, unknown long-term effects of electronic cigarette use on human health, uncertainty about the effects of ECs on smoking regulations and smoke free legislation measures, and uncertainty about involvement of Document 3::: Preventive healthcare, or prophylaxis, is the application of healthcare measures to prevent diseases. Disease and disability are affected by environmental factors, genetic predisposition, disease agents, and lifestyle choices, and are dynamic processes that begin before individuals realize they are affected. Disease prevention relies on anticipatory actions that can be categorized as primal, primary, secondary, and tertiary prevention. Each year, millions of people die of preventable causes. A 2004 study showed that about half of all deaths in the United States in 2000 were due to preventable behaviors and exposures. Leading causes included cardiovascular disease, chronic respiratory disease, unintentional injuries, diabetes, and certain infectious diseases. This same study estimates that 400,000 people die each year in the United States due to poor diet and a sedentary lifestyle. 
According to estimates made by the World Health Organization (WHO), about 55 million people died worldwide in 2011, and two-thirds of these died from non-communicable diseases, including cancer, diabetes, and chronic cardiovascular and lung diseases. This is an increase from the year 2000, during which 60% of deaths were attributed to these diseases. Preventive healthcare is especially important given the worldwide rise in the prevalence of chronic diseases and deaths from these diseases. There are many methods for prevention of disease. One of them is prevention of teenage smoking through information giving. It is recommended that adults and children aim to visit their doctor for regular check-ups, even if they feel healthy, to perform disease screening, identify risk factors for disease, discuss tips for a healthy and balanced lifestyle, stay up to date with immunizations and boosters, and maintain a good relationship with a healthcare provider. In pediatrics, some common examples of primary prevention are encouraging parents to turn down the temperature of their home water heater in o Document 4::: The Robertson Centre for Biostatistics is a specialised biostatistical research centre in Glasgow, Scotland. It is part of the College of Medical, Veterinary and Life Sciences and the Institute of Health and Wellbeing at the University of Glasgow. All scales of research are carried out at the centre from multi-site clinical trials to small scale research projects. The centre also has interests in the development of novel informatics solutions for clinical research, statistical issues in epidemiology and health economic evaluation. History The centre led the WOSCOP study (New England Journal of Medicine 1995; 333:1301-7) which found that treatment with Pravastatin significantly reduced the risk of myocardial infarction and the risk of death from cardiovascular causes without adversely affecting the risk of death from noncardiovascular causes in men with moderate hypercholesterolaemia and no history of myocardial infarction. The Robertson Centre joined with the Glasgow Clinical Research Facility and Greater Glasgow and Clyde NHS R&D division in November 2007 to create a UKCRN registered Clinical Trials Unit - the Glasgow Clinical Trials Unit. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the major cause of chronic respiratory disease as well as cardiovascular disease and cancer? A. diet B. drinking C. smoking D. exercise Answer:
sciq-8511
multiple_choice
The following definition relates to which term: the application of knowledge to real-world problems?
[ "technology", "capitalism", "industry", "invention" ]
A
Relavent Documents: Document 0::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 1::: Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work. Description Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments. 
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics. Specialized branches include engineering optimization and engineering statistics. Engineering mathematics in tertiary educ Document 2::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 3::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. 
STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. these come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments. See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics Document 4::: Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers. There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM. Terminology History Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The following definition relates to which term: the application of knowledge to real-world problems? A. technology B. capitalism C. industry D. invention Answer:
sciq-251
multiple_choice
Cycling, shoveling snow and cross-country skiing are examples of what kind of heart-strengthening activity?
[ "metabolism", "exercise", "anaerobic", "aerobic" ]
D
Relavent Documents: Document 0::: Cardiovascular fitness refers a health-related component of physical fitness that is brought about by sustained physical activity. A person's ability to deliver oxygen to the working muscles is affected by many physiological parameters, including heart rate, stroke volume, cardiac output, and maximal oxygen consumption. Understanding the relationship between cardiorespiratory fitness and other categories of conditioning requires a review of changes that occur with increased aerobic, or anaerobic capacity. As aerobic/anaerobic capacity increases, general metabolism rises, muscle metabolism is enhanced, haemoglobin rises, buffers in the bloodstream increase, venous return is improved, stroke volume is improved, and the blood bed becomes more able to adapt readily to varying demands. Each of these results of cardiovascular fitness/cardiorespiratory conditioning will have a direct positive effect on muscular endurance, and an indirect effect on strength and flexibility. To facilitate optimal delivery of oxygen to the working muscles, an individual needs to train or participate in activities that will build up the energy stores needed for sport. This is referred to as metabolic training. Metabolic training is generally divided into two types: aerobic and anaerobic. A 2005 Cochrane review demonstrated that physical activity interventions are effective for increasing cardiovascular fitness. Cardiovascular fitness is a measure of how well the heart, lungs, and blood vessels can transport oxygen to the muscles during exercise. It is an important component of overall fitness and has been linked to numerous health benefits, including a reduced risk of cardiovascular disease, improved cognitive function, and increased longevity. A study published in the American Journal of Epidemiology found that higher levels of cardiovascular fitness were associated with a lower risk of mortality from all causes, including cardiovascular disease and cancer. A cardiovascular workout consi Document 1::: The neurobiological effects of physical exercise are numerous and involve a wide range of interrelated effects on brain structure, brain function, and cognition. A large body of research in humans has demonstrated that consistent aerobic exercise (e.g., 30 minutes every day) induces persistent improvements in certain cognitive functions, healthy alterations in gene expression in the brain, and beneficial forms of neuroplasticity and behavioral plasticity; some of these long-term effects include: increased neuron growth, increased neurological activity (e.g., and BDNF signaling), improved stress coping, enhanced cognitive control of behavior, improved declarative, spatial, and working memory, and structural and functional improvements in brain structures and pathways associated with cognitive control and memory. The effects of exercise on cognition have important implications for improving academic performance in children and college students, improving adult productivity, preserving cognitive function in old age, preventing or treating certain neurological disorders, and improving overall quality of life. In healthy adults, aerobic exercise has been shown to induce transient effects on cognition after a single exercise session and persistent effects on cognition following regular exercise over the course of several months. 
People who regularly perform an aerobic exercise (e.g., running, jogging, brisk walking, swimming, and cycling) have greater scores on neuropsychological function and performance tests that measure certain cognitive functions, such as attentional control, inhibitory control, cognitive flexibility, working memory updating and capacity, declarative memory, spatial memory, and information processing speed. The transient effects of exercise on cognition include improvements in most executive functions (e.g., attention, working memory, cognitive flexibility, inhibitory control, problem solving, and decision making) and information processing speed fo Document 2::: Kinesiogenomics refers to the study of genetics in the various disciplines of the field of kinesiology, the study of human movement. The field has also been referred to as "exercise genomics" or "exercisenomics." Areas of study within kinesiogenomics include the role of gene sequence variation (i.e., alleles) in sport performance, identification of genes (and their different alleles) that contribute to the response and adaptation of the body's tissue systems (e.g., muscles, heart, metabolism, etc.) to various exercise-related stimuli, the use of genetic testing to predict sport performance or individualize exercise prescription, and gene doping, the potential for genetic therapy to be used to enhance sport performance. The field of kinesiogenomics is relatively new, though two books have outlined basic concepts. A regularly published review article entitled, "The human gene map for performance and health-related fitness phenotypes," describes the genes that have been studied in relation to specific exercise- and fitness-related traits. The most recent (seventh) update was published in 2009. Research Within the field of kinesiogenomics, several research studies have been conducted in recent years. This increase in research has led to advancements of knowledge in associating how genes and gene sequencing effects a person's exercise habits and health. One study focusing on twins looked to see the effect of genes on exercise ability, the effects of exercise on mood, and the ability to lose weight. The research concluded that genetics had a significant impact of the likelihood an individual would participate in exercise. An increase in participation can be linked to personality factors such as self-motivation and self-discipline, while a lower participation in exercise can be influenced by factors such as anxiety and depression. These personality trait, both positive and negative, can be associated to one's genetic makeup. Document 3::: Physical literacy is a fundamental and valuable human capability that can be described as a disposition acquired by human individuals encompassing the motivation, confidence, physical competence, knowledge and understanding that establishes purposeful physical pursuits as an integral part of their lifestyle. 
The fundamental and significant aspects of physical literacy are: everyone can be physically literate as it is appropriate to each individual’s endowment everyone’s physical literacy journey is unique the skills that make up physical literacy can vary by location physical literacy is relevant and valuable at all stages and ages of life the concept embraces much more than physical competence at the heart of the concept is the motivation and commitment to be active the disposition is evidenced by a love of being active, born out of the pleasure and satisfaction individuals experience in participation a physically literate individual values and takes responsibility for maintaining purposeful physical pursuits throughout the lifecourse charting of progress of an individual’s personal journey must be judged against previous achievements and not against any form of national benchmarks History In 1993, Dr. Margaret Whitehead proposed the concept of Physical literacy at the International Association of Physical Education and Sport for Girls and Women Congress in Melbourne, Australia. From this research, the concept and definition of physical literacy was developed. In addition, the implications of physical literacy being the goal of all structures were drawn up. Since 1993 to the present day, much has been done to advance physical literacy. Research has been conducted on Physical Literacy and presented at conferences around the world. In addition, the book Physical Literacy: throughout the life course was written and numerous conferences and workshops have been delivered, to train educators, parents, health practitioners, early childhood educators, coaches, Document 4::: Exercise and Sport Sciences Reviews is a quarterly peer-reviewed review journal covering sports medicine and exercise science. It was established in 1973 as a hardcover book series, and became a quarterly peer-reviewed journal in January 2000. It is published by Lippincott Williams & Wilkins, and is an official journal of the American College of Sports Medicine. The editor-in-chief is Sandra K. Hunter, Ph.D., FACSM (Marquette University). According to the Journal Citation Reports, the journal has a 2021 impact factor of 6.642. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Cycling, shoveling snow and cross-country skiing are examples of what kind of heart-strengthening activity? A. metabolism B. exercise C. anaerobic D. aerobic Answer:
sciq-2469
multiple_choice
Earth rotates on its axis once each day and revolves around the sun how often?
[ "every other year", "once each year", "every 3 years", "once each month" ]
B
Relavent Documents: Document 0::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: Advanced Placement (AP) Physics 1 is a year-long introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester algebra-based university course in mechanics. Along with AP Physics 2, the first AP Physics 1 exam was administered in 2015. In its first five years, AP Physics 1 covered forces and motion, conservation laws, waves, and electricity. As of 2021, AP Physics 1 includes mechanics topics only. History The heavily computational AP Physics B course served for four decades as the College Board's algebra-based offering. As part of the College Board's redesign of science courses, AP Physics B was discontinued; therefore, AP Physics 1 and 2 were created with guidance from the National Research Council and the National Science Foundation. The course covers material of a first-semester university undergraduate physics course offered at American universities that use best practices of physics pedagogy. The first AP Physics 1 classes had begun in the 2014–2015 school year, with the first AP exams administered in May 2015. Curriculum AP Physics 1 is an algebra-based, introductory college-level physics course that includes mechanics topics such as motion, force, momentum, energy, harmonic motion, and rotation; The College Board published a curriculum framework that includes seven big ideas on which the AP Physics 1 and 2 courses are based, along with "enduring understandings" students are expected to acquire within each of the big ideas.: Questions for the exam are constructed with direct reference to items in the curriculum framework. Student understanding of each topic is tested with reference to multiple skills—that is, questions require students to use quantitative, semi-quantitative, qualitative, and experimental reasoning in each content area. Exam Science Practices Assessed Multiple Choice and Free Response Sections of the AP® Physics 1 exam are also assessed on scientific prac Document 3::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. 
The average for Molecular is 630 while Ecological is 591. On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 4::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Earth rotates on its axis once each day and revolves around the sun how often? A. every other year B. once each year C. every 3 years D. once each month Answer:
sciq-10345
multiple_choice
What type of relationship occurs between organisms when one member benefits without affecting the other?
[ "cooperative", "commensalism", "bilateral", "parasitic" ]
B
Relavent Documents: Document 0::: In ecology, a biological interaction is the effect that a pair of organisms living together in a community have on each other. They can be either of the same species (intraspecific interactions), or of different species (interspecific interactions). These effects may be short-term, or long-term, both often strongly influence the adaptation and evolution of the species involved. Biological interactions range from mutualism, beneficial to both partners, to competition, harmful to both partners. Interactions can be direct when physical contact is established or indirect, through intermediaries such as shared resources, territories, ecological services, metabolic waste, toxins or growth inhibitors. This type of relationship can be shown by net effect based on individual effects on both organisms arising out of relationship. Several recent studies have suggested non-trophic species interactions such as habitat modification and mutualisms can be important determinants of food web structures. However, it remains unclear whether these findings generalize across ecosystems, and whether non-trophic interactions affect food webs randomly, or affect specific trophic levels or functional groups. History Although biological interactions, more or less individually, were studied earlier, Edward Haskell (1949) gave an integrative approach to the thematic, proposing a classification of "co-actions", later adopted by biologists as "interactions". Close and long-term interactions are described as symbiosis; symbioses that are mutually beneficial are called mutualistic. The term symbiosis was subject to a century-long debate about whether it should specifically denote mutualism, as in lichens or in parasites that benefit themselves. This debate created two different classifications for biotic interactions, one based on the time (long-term and short-term interactions), and other based on the magnitud of interaction force (competition/mutualism) or effect of individual fitness, accordi Document 1::: Mutualism describes the ecological interaction between two or more species where each species has a net benefit. Mutualism is a common type of ecological interaction. Prominent examples include most vascular plants engaged in mutualistic interactions with mycorrhizae, flowering plants being pollinated by animals, vascular plants being dispersed by animals, and corals with zooxanthellae, among many others. Mutualism can be contrasted with interspecific competition, in which each species experiences reduced fitness, and exploitation, or parasitism, in which one species benefits at the expense of the other. The term mutualism was introduced by Pierre-Joseph van Beneden in his 1876 book Animal Parasites and Messmates to mean "mutual aid among species". Mutualism is often conflated with two other types of ecological phenomena: cooperation and symbiosis. Cooperation most commonly refers to increases in fitness through within-species (intraspecific) interactions, although it has been used (especially in the past) to refer to mutualistic interactions, and it is sometimes used to refer to mutualistic interactions that are not obligate. Symbiosis involves two species living in close physical contact over a long period of their existence and may be mutualistic, parasitic, or commensal, so symbiotic relationships are not always mutualistic, and mutualistic interactions are not always symbiotic. 
Despite a different definition between mutualistic interactions and symbiosis, mutualistic and symbiosis have been largely used interchangeably in the past, and confusion on their use has persisted. Mutualism plays a key part in ecology and evolution. For example, mutualistic interactions are vital for terrestrial ecosystem function as about 80% of land plants species rely on mycorrhizal relationships with fungi to provide them with inorganic compounds and trace elements. As another example, the estimate of tropical rainforest plants with seed dispersal mutualisms with animals ranges Document 2::: Commensalism is a long-term biological interaction (symbiosis) in which members of one species gain benefits while those of the other species neither benefit nor are harmed. This is in contrast with mutualism, in which both organisms benefit from each other; amensalism, where one is harmed while the other is unaffected; and parasitism, where one is harmed and the other benefits. The commensal (the species that benefits from the association) may obtain nutrients, shelter, support, or locomotion from the host species, which is substantially unaffected. The commensal relation is often between a larger host and a smaller commensal; the host organism is unmodified, whereas the commensal species may show great structural adaptation consistent with its habits, as in the remoras that ride attached to sharks and other fishes. Remoras feed on their hosts' fecal matter, while pilot fish feed on the leftovers of their hosts' meals. Numerous birds perch on bodies of large mammal herbivores or feed on the insects turned up by grazing mammals. Etymology The word "commensalism" is derived from the word "commensal", meaning "eating at the same table" in human social interaction, which in turn comes through French from the Medieval Latin commensalis, meaning "sharing a table", from the prefix com-, meaning "together", and mensa, meaning "table" or "meal". Commensality, at the Universities of Oxford and Cambridge, refers to professors eating at the same table as students (as they live in the same "college"). Pierre-Joseph van Beneden introduced the term "commensalism" in 1876. Examples of commensal relationships The commensal pathway was traveled by animals that fed on refuse around human habitats or by animals that preyed on other animals drawn to human camps. Those animals established a commensal relationship with humans in which the animals benefited but the humans received little benefit or harm. Those animals that were most capable of taking advantage of the resources associ Document 3::: Bioclaustration is kind of interaction when one organism (usually soft bodied) is embedded in a living substrate (i.e. skeleton of another organism); it means “biologically walled -up”. In case of symbiosis the walling-up is not complete and both organisms stay alive (Palmer and Wilson, 1988). Document 4::: The hypothesis or paradigm of Mutualism Parasitism Continuum postulates that compatible host-symbiont associations can occupy a broad continuum of interactions with different fitness outcomes for each member. At one end of the continuum lies obligate mutualism where both host and symbiont benefit from the interaction and are dependent on it for survival. At the other end of the continuum highly parasitic interactions can occur, where one member gains a fitness benefit at the expense of the others survival. Between these extremes many different types of interaction are possible. 
The degree of change between mutualism or parasitism varies depending on the availability of resources, where there is environmental stress generated by few resources, symbiotic relationships are formed while in environments where there is an excess of resources, biological interactions turn to competition and parasitism. Classically the transmission mode of the symbiont can also be important in predicting where on the mutualism-parasitism-continuum an interaction will sit. Symbionts that are vertically transmitted (inherited symbionts) frequently occupy mutualism space on the continuum, this is due to the aligned reproductive interests between host and symbiont that are generated under vertical transmission. In some systems increases in the relative contribution of horizontal transmission can drive selection for parasitism. Studies of this hypothesis have focused on host-symbiont models of plants and fungi, and also of animals and microbes. See also Red King Hypothesis Red Queen Hypothesis Black Queen Hypothesis Biological interaction The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of relationship occurs between organisms when one member benefits without affecting the other? A. cooperative B. commensalism C. bilateral D. parasitic Answer:
sciq-499
multiple_choice
What do you call the traits that allow a plant, animal, or other organism to survive and reproduce in its environment?
[ "adaptations", "additions", "advantages", "settings" ]
A
Relavent Documents: Document 0::: This glossary of biology terms is a list of definitions of fundamental terms and concepts used in biology, the study of life and of living organisms. It is intended as introductory material for novices; for more specific and technical definitions from sub-disciplines and related fields, see Glossary of cell biology, Glossary of genetics, Glossary of evolutionary biology, Glossary of ecology, Glossary of environmental science and Glossary of scientific naming, or any of the organism-specific glossaries in :Category:Glossaries of biology. A B C D E F G H I J K L M N O P R S T U V W X Y Z Related to this search Index of biology articles Outline of biology Glossaries of sub-disciplines and related fields: Glossary of botany Glossary of ecology Glossary of entomology Glossary of environmental science Glossary of genetics Glossary of ichthyology Glossary of ornithology Glossary of scientific naming Glossary of speciation Glossary of virology Document 1::: Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology. The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis. Subfields Evolution is the central unifying concept in biology. Biology can be divided into various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution Document 2::: Ecological inheritance occurs when organisms inhabit a modified environment that a previous generation created; it was first described in Odling-Smee (1988) and Odling-Smee et al. (1996) as a consequence of niche construction. Standard evolutionary theory focuses on the influence that natural selection and genetic inheritance has on biological evolution, when individuals that survive and reproduce also transmit genes to their offspring. 
If offspring do not live in a modified environment created by their parents, then niche construction activities of parents do not affect the selective pressures of their offspring (see orb-web spiders in Genetic inheritance vs. ecological inheritance below). However, when niche construction affects multiple generations (i.e., parents and offspring), ecological inheritance acts a inheritance system different than genetic inheritance. Since ecological inheritance is a result of ecosystem engineering and niche construction, the fitness of several species and their subsequent generations experience a selective pressure dependent on the modified environment they inherit. Organisms in subsequent generations will encounter ecological inheritance because they are affected by a new selective environment created by prior niche construction. On a macroevolutionary scale, ecological inheritance has been defined as, "the persistence of environmental modifications by a species over multiple generations to influence the evolution of that or other species." Ecological inheritance has also been defined as, "... the accumulation of environmental changes, such as altered soil, atmosphere or ocean states that previous generations have brought about through their niche-constructing activity, and that influence the development of descendant organisms." Related to niche construction and ecological inheritance are factors and features of an organism and environment, respectively, where the feature of an organism is synonymous with adaptation if natural se Document 3::: Tolerance is the ability of plants to mitigate the negative fitness effects caused by herbivory. It is one of the general plant defense strategies against herbivores, the other being resistance, which is the ability of plants to prevent damage (Strauss and Agrawal 1999). Plant defense strategies play important roles in the survival of plants as they are fed upon by many different types of herbivores, especially insects, which may impose negative fitness effects (Strauss and Zangerl 2002). Damage can occur in almost any part of the plants, including the roots, stems, leaves, flowers and seeds (Strauss and Zergerl 2002). In response to herbivory, plants have evolved a wide variety of defense mechanisms and although relatively less studied than resistance strategies, tolerance traits play a major role in plant defense (Strauss and Zergerl 2002, Rosenthal and Kotanen 1995). Traits that confer tolerance are controlled genetically and therefore are heritable traits under selection (Strauss and Agrawal 1999). Many factors intrinsic to the plants, such as growth rate, storage capacity, photosynthetic rates and nutrient allocation and uptake, can affect the extent to which plants can tolerate damage (Rosenthal and Kotanen 1994). Extrinsic factors such as soil nutrition, carbon dioxide levels, light levels, water availability and competition also have an effect on tolerance (Rosenthal and Kotanen 1994). History of the study of plant tolerance Studies of tolerance to herbivory has historically been the focus of agricultural scientists (Painter 1958; Bardner and Fletcher 1974). Tolerance was actually initially classified as a form of resistance (Painter 1958). Agricultural studies on tolerance, however, are mainly concerned with the compensatory effect on the plants' yield and not its fitness, since it is of economical interest to reduce crop losses due to herbivory by pests (Trumble 1993; Bardner and Fletcher 1974). 
One surprising discovery made about plant tolerance was th Document 4::: Phenotypic plasticity refers to some of the changes in an organism's behavior, morphology and physiology in response to a unique environment. Fundamental to the way in which organisms cope with environmental variation, phenotypic plasticity encompasses all types of environmentally induced changes (e.g. morphological, physiological, behavioural, phenological) that may or may not be permanent throughout an individual's lifespan. The term was originally used to describe developmental effects on morphological characters, but is now more broadly used to describe all phenotypic responses to environmental change, such as acclimation (acclimatization), as well as learning. The special case when differences in environment induce discrete phenotypes is termed polyphenism. Generally, phenotypic plasticity is more important for immobile organisms (e.g. plants) than mobile organisms (e.g. most animals), as mobile organisms can often move away from unfavourable environments. Nevertheless, mobile organisms also have at least some degree of plasticity in at least some aspects of the phenotype. One mobile organism with substantial phenotypic plasticity is Acyrthosiphon pisum of the aphid family, which exhibits the ability to interchange between asexual and sexual reproduction, as well as growing wings between generations when plants become too populated. Water fleas (Daphnia magna) have shown both phenotypic plasticity and the ability to genetically evolve to deal with the heat stress of warmer, urban pond waters. Examples Plants Phenotypic plasticity in plants includes the timing of transition from vegetative to reproductive growth stage, the allocation of more resources to the roots in soils that contain low concentrations of nutrients, the size of the seeds an individual produces depending on the environment, and the alteration of leaf shape, size, and thickness. Leaves are particularly plastic, and their growth may be altered by light levels. Leaves grown in the light ten The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do you call the traits that allow a plant, animal, or other organism to survive and reproduce in its environment? A. adaptations B. additions C. advantages D. settings Answer:
sciq-11190
multiple_choice
During which phase is the moon more than half lit but not full?
[ "new", "crescent", "gibbous phase", "waxing" ]
C
Relavent Documents: Document 0::: A transient lunar phenomenon (TLP) or lunar transient phenomenon (LTP) is a short-lived change in light, color or appearance on the surface of the Moon. The term was created by Patrick Moore in his co-authorship of NASA Technical Report R-277 Chronological Catalog of Reported Lunar Events, published in 1968. Claims of short-lived lunar phenomena go back at least 1,000 years, with some having been observed independently by multiple witnesses or reputable scientists. Nevertheless, the majority of transient lunar phenomenon reports are irreproducible and do not possess adequate control experiments that could be used to distinguish among alternative hypotheses to explain their origins. Most lunar scientists will acknowledge that transient events such as outgassing and impact cratering do occur over geologic time. The controversy lies in the frequency of such events. Description of events Reports of transient lunar phenomena range from foggy patches to permanent changes of the lunar landscape. Cameron classifies these as (1) gaseous, involving mists and other forms of obscuration, (2) reddish colorations, (3) green, blue or violet colorations, (4) brightenings, and (5) darkening. Two extensive catalogs of transient lunar phenomena exist, with the most recent tallying 2,254 events going back to the 6th century. Of the most reliable of these events, at least one-third come from the vicinity of the Aristarchus plateau. An overview of the more famous historical accounts of transient phenomena include the following: Pre 1700 On June 18, 1178, five or more monks from Canterbury reported an upheaval on the Moon shortly after sunset: This description appears outlandish, perhaps due to the writer's and viewers' lack of understanding of astronomical phenomena. In 1976, Jack Hartung proposed that this described the formation of the Giordano Bruno crater. However, more recent studies suggest that it appears very unlikely the 1178 event was related to the formation of Crater Document 1::: Earthlight is the diffuse reflection of sunlight reflected from Earth's surface and clouds. Earthshine (an example of planetshine), also known as the Moon's ashen glow, is the dim illumination of the otherwise unilluminated portion of the Moon by this indirect sunlight. Earthlight on the Moon during the waxing crescent is called "the old Moon in the new Moon's arms", while that during the waning crescent is called "the new Moon in the old Moon's arms". Visibility Earthlight has a calculated maximum apparent magnitude of −17.7 as viewed from the Moon. When the Earth is at maximum phase, the total radiance at the lunar surface is approximately from Earthlight. This is only 0.01% of the radiance from direct Sunlight. Earthshine has a calculated maximum apparent magnitude of −3.69 as viewed from Earth. This phenomenon is most visible from Earth at night (or astronomical twilight) a few days before or after the day of new moon, when the lunar phase is a thin crescent. On these nights, the entire lunar disk is both directly and indirectly sunlit, and is thus unevenly bright enough to see. Earthshine is most clearly seen after dusk during the waxing crescent (in the western sky) and before dawn during the waning crescent (in the eastern sky). The term earthlight would also be suitable for an observer on the Moon seeing Earth during the lunar night, or for an astronaut inside a spacecraft looking out the window. Arthur C. Clarke uses it in this sense in his 1955 novel Earthlight. 
High contrast photography is also able to reveal the night side of the moon illuminated by Earthlight during a solar eclipse. Radio frequency transmissions are also reflected by the moon; for example, see Earth–Moon–Earth communication. History The phenomenon was sketched and remarked upon in the 16th century by Leonardo da Vinci, who thought that the illumination came from reflections from the Earth's oceans (we now know that clouds account for much more reflected intensity than the oceans) Document 2::: The opposition surge (sometimes known as the opposition effect, opposition spike or Seeliger effect) is the brightening of a rough surface, or an object with many particles, when illuminated from directly behind the observer. The term is most widely used in astronomy, where generally it refers to the sudden noticeable increase in the brightness of a celestial body such as a planet, moon, or comet as its phase angle of observation approaches zero. It is so named because the reflected light from the Moon and Mars appear significantly brighter than predicted by simple Lambertian reflectance when at astronomical opposition. Two physical mechanisms have been proposed for this observational phenomenon: shadow hiding and coherent backscatter. Overview The phase angle is defined as the angle between the observer, the observed object and the source of light. In the case of the Solar System, the light source is the Sun, and the observer is generally on Earth. At zero phase angle, the Sun is directly behind the observer and the object is directly ahead, fully illuminated. As the phase angle of an object lit by the Sun decreases, the object's brightness rapidly increases. This is mainly due to the increased area lit, but is also partly due to the intrinsic brightness of the part that is sunlit. This is affected by such factors as the angle at which light reflected from the object is observed. For this reason, a full moon is more than twice as bright as the moon at first or third quarter, even though the visible area illuminated appears to be exactly twice as large. Physical mechanisms Shadow hiding When the angle of reflection is close to the angle at which the light's rays hit the surface (that is, when the Sun and the object are close to opposition from the viewpoint of the observer), this intrinsic brightness is usually close to its maximum. At a phase angle of zero degrees, all shadows disappear and the object is fully illuminated. When phase angles approach zero, th Document 3::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. 
Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. Document 4::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. During which phase is the moon more than half lit but not full? A. new B. crescent C. gibbous phase D. waxing Answer:
sciq-2817
multiple_choice
A stem cell is an unspecialized cell that can divide without limit as needed and can, under specific conditions, differentiate into these?
[ "clones", "germ cells", "infectious cells", "specialized cells" ]
D
Relavent Documents: Document 0::: Adult stem cells are undifferentiated cells, found throughout the body after development, that multiply by cell division to replenish dying cells and regenerate damaged tissues. Also known as somatic stem cells (from Greek σωματικóς, meaning of the body), they can be found in juvenile, adult animals, and humans, unlike embryonic stem cells. Scientific interest in adult stem cells is centered around two main characteristics. The first of which is their ability to divide or self-renew indefinitely, and the second their ability to generate all the cell types of the organ from which they originate, potentially regenerating the entire organ from a few cells. Unlike embryonic stem cells, the use of human adult stem cells in research and therapy is not considered to be controversial, as they are derived from adult tissue samples rather than human embryos designated for scientific research. The main functions of adult stem cells are to replace cells that are at risk of possibly dying as a result of disease or injury and to maintain a state of homeostasis within the cell. There are three main methods to determine if the adult stem cell is capable of becoming a specialized cell. The adult stem cell can be labeled in vivo and tracked, it can be isolated and then transplanted back into the organism, and it can be isolated in vivo and manipulated with growth hormones. They have mainly been studied in humans and model organisms such as mice and rats. Structure Defining properties A stem cell possesses two properties: Self-renewal is the ability to go through numerous cycles of cell division while still maintaining its undifferentiated state. Stem cells can replicate several times and can result in the formation of two stem cells, one stem cell more differentiated than the other, or two differentiated cells. Multipotency or multidifferentiative potential is the ability to generate progeny of several distinct cell types, (for example glial cells and neurons) as opposed to u Document 1::: A cell type is a classification used to identify cells that share morphological or phenotypical features. A multicellular organism may contain cells of a number of widely differing and specialized cell types, such as muscle cells and skin cells, that differ both in appearance and function yet have identical genomic sequences. Cells may have the same genotype, but belong to different cell types due to the differential regulation of the genes they contain. Classification of a specific cell type is often done through the use of microscopy (such as those from the cluster of differentiation family that are commonly used for this purpose in immunology). Recent developments in single cell RNA sequencing facilitated classification of cell types based on shared gene expression patterns. This has led to the discovery of many new cell types in e.g. mouse cortex, hippocampus, dorsal root ganglion and spinal cord. Animals have evolved a greater diversity of cell types in a multicellular body (100–150 different cell types), compared with 10–20 in plants, fungi, and protists. The exact number of cell types is, however, undefined, and the Cell Ontology, as of 2021, lists over 2,300 different cell types. Multicellular organisms All higher multicellular organisms contain cells specialised for different functions. Most distinct cell types arise from a single totipotent cell that differentiates into hundreds of different cell types during the course of development. 
Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division). Multicellular organisms are composed of cells that fall into two fundamental types: germ cells and somatic cells. During development, somatic cells will become more specialized and form the three primary germ layers: ectoderm, mesoderm, and endoderm. After formation of the three germ layers, cells will continue to special Document 2::: Cell potency is a cell's ability to differentiate into other cell types. The more cell types a cell can differentiate into, the greater its potency. Potency is also described as the gene activation potential within a cell, which like a continuum, begins with totipotency to designate a cell with the most differentiation potential, pluripotency, multipotency, oligopotency, and finally unipotency. Totipotency Totipotency (Lat. totipotentia, "ability for all [things]") is the ability of a single cell to divide and produce all of the differentiated cells in an organism. Spores and zygotes are examples of totipotent cells. In the spectrum of cell potency, totipotency represents the cell with the greatest differentiation potential, being able to differentiate into any embryonic cell, as well as any extraembryonic cell. In contrast, pluripotent cells can only differentiate into embryonic cells. A fully differentiated cell can return to a state of totipotency. The conversion to totipotency is complex and not fully understood. In 2011, research revealed that cells may differentiate not into a fully totipotent cell, but instead into a "complex cellular variation" of totipotency. Stem cells resembling totipotent blastomeres from 2-cell stage embryos can arise spontaneously in mouse embryonic stem cell cultures and also can be induced to arise more frequently in vitro through down-regulation of the chromatin assembly activity of CAF-1. The human development model can be used to describe how totipotent cells arise. Human development begins when a sperm fertilizes an egg and the resulting fertilized egg creates a single totipotent cell, a zygote. In the first hours after fertilization, this zygote divides into identical totipotent cells, which can later develop into any of the three germ layers of a human (endoderm, mesoderm, or ectoderm), or into cells of the placenta (cytotrophoblast or syncytiotrophoblast). After reaching a 16-cell stage, the totipotent cells of the morula d Document 3::: Stem Cells is a peer-review scientific journal of cell biology. It was established as The International Journal of Cell Cloning in 1983, acquiring its current title in 1993. The journal is published by AlphaMed Press, and is currently edited by Jan Nolta (University of California). Stem Cells currently has an impact factor of 6.277. Abstracting and indexing The journal is abstracted and indexed in the following bibliographic databases: Document 4::: Cellular differentiation is the process in which a stem cell changes from one type to a differentiated one. Usually, the cell changes to a more specialized type. Differentiation happens multiple times during the development of a multicellular organism as it changes from a simple zygote to a complex system of tissues and cell types. Differentiation continues in adulthood as adult stem cells divide and create fully differentiated daughter cells during tissue repair and during normal cell turnover. 
Some differentiation occurs in response to antigen exposure. Differentiation dramatically changes a cell's size, shape, membrane potential, metabolic activity, and responsiveness to signals. These changes are largely due to highly controlled modifications in gene expression and are the study of epigenetics. With a few exceptions, cellular differentiation almost never involves a change in the DNA sequence itself. However, metabolic composition does get altered quite dramatically where stem cells are characterized by abundant metabolites with highly unsaturated structures whose levels decrease upon differentiation. Thus, different cells can have very different physical characteristics despite having the same genome. A specialized type of differentiation, known as terminal differentiation, is of importance in some tissues, including vertebrate nervous system, striated muscle, epidermis and gut. During terminal differentiation, a precursor cell formerly capable of cell division permanently leaves the cell cycle, dismantles the cell cycle machinery and often expresses a range of genes characteristic of the cell's final function (e.g. myosin and actin for a muscle cell). Differentiation may continue to occur after terminal differentiation if the capacity and functions of the cell undergo further changes. Among dividing cells, there are multiple levels of cell potency, which is the cell's ability to differentiate into other cell types. A greater potency indicates a larger n The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. A stem cell is an unspecialized cell that can divide without limit as needed and can, under specific conditions, differentiate into these? A. clones B. germ cells C. infectious cells D. specialized cells Answer:
sciq-7292
multiple_choice
Programmed cell death, which goes by what term, is important for removing damaged or unnecessary cells?
[ "mytosis", "synthesis", "mutations", "apoptosis" ]
D
Relavent Documents: Document 0::: Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence. Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism. Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry. See also Cell (biology) Cell biology Biomolecule Organelle Tissue (biology) External links https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm Document 1::: This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year. Lecturers Source: ASCB See also List of biology awards Document 2::: Cell death is the event of a biological cell ceasing to carry out its functions. This may be the result of the natural process of old cells dying and being replaced by new ones, as in programmed cell death, or may result from factors such as diseases, localized injury, or the death of the organism of which the cells are part. Apoptosis or Type I cell-death, and autophagy or Type II cell-death are both forms of programmed cell death, while necrosis is a non-physiological process that occurs as a result of infection or injury. Programmed cell death Programmed cell death (PCD) is cell death mediated by an intracellular program. PCD is carried out in a regulated process, which usually confers advantage during an organism's life-cycle. For example, the differentiation of fingers and toes in a developing human embryo occurs because cells between the fingers apoptose; the result is that the digits separate. PCD serves fundamental functions during both plant and metazoa (multicellular animals) tissue development. Apoptosis Apoptosis is the processor of programmed cell death (PCD) that may occur in multicellular organisms. Biochemical events lead to characteristic cell changes (morphology) and death. These changes include blebbing, cell shrinkage, nuclear fragmentation, chromatin condensation, and chromosomal DNA fragmentation. It is now thought that – in a developmental context – cells are induced to positively commit suicide whilst in a homeostatic context; the absence of certain survival factors may provide the impetus for suicide. 
There appears to be some variation in the morphology and indeed the biochemistry of these suicide pathways; some treading the path of "apoptosis", others following a more generalized pathway to deletion, but both usually being genetically and synthetically motivated. There is some evidence that certain symptoms of "apoptosis" such as endonuclease activation can be spuriously induced without engaging a genetic cascade, however, presumably Document 3::: In cellular neuroscience, chromatolysis is the dissolution of the Nissl bodies in the cell body of a neuron. It is an induced response of the cell usually triggered by axotomy, ischemia, toxicity to the cell, cell exhaustion, virus infections, and hibernation in lower vertebrates. Neuronal recovery through regeneration can occur after chromatolysis, but most often it is a precursor of apoptosis. The event of chromatolysis is also characterized by a prominent migration of the nucleus towards the periphery of the cell and an increase in the size of the nucleolus, nucleus, and cell body. The term "chromatolysis" was initially used in the 1940s to describe the observed form of cell death characterized by the gradual disintegration of nuclear components; a process which is now called apoptosis. Chromatolysis is still used as a term to distinguish the particular apoptotic process in the neuronal cells, where Nissl substance disintegrates. History In 1885, researcher Walther Flemming described dying cells in degenerating mammalian ovarian follicles. The cells showed variable stages of pyknotic chromatin. These stages included chromatin condensation, which Flemming described as "half-moon" shaped and appearing as "chromatin balls," or structures resembling large, smooth, and round electron-dense chromatin masses. Other stages included cell fractionation into smaller bodies. Flemming named this degenerative process "chromatolysis" to describe the gradual disintegration of nuclear components. The process he described now fits with the relatively new term, apoptosis, to describe cell death. Around the same time of Flemming's research, chromatolysis was also studied in the lactating mammary glands and in breast cancer cells. From observing the regression of ovarian follicles in mammals, it was argued that a necessary cellular process existed to counterbalance the proliferation of cells by mitosis. At this time, chromatolysis was proposed to play a major role in this ph Document 4::: Stem Cells is a peer-review scientific journal of cell biology. It was established as The International Journal of Cell Cloning in 1983, acquiring its current title in 1993. The journal is published by AlphaMed Press, and is currently edited by Jan Nolta (University of California). Stem Cells currently has an impact factor of 6.277. Abstracting and indexing The journal is abstracted and indexed in the following bibliographic databases: The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Programmed cell death, which goes by what term, is important for removing damaged or unnecessary cells? A. mytosis B. synthesis C. mutations D. apoptosis Answer:
sciq-2481
multiple_choice
What phase follows ovulation?
[ "varicose phase", "interstitial phase", "telophase", "luteal phase" ]
D
Relavent Documents: Document 0::: Oogenesis, ovogenesis, or oögenesis is the differentiation of the ovum (egg cell) into a cell competent to further develop when fertilized. It is developed from the primary oocyte by maturation. Oogenesis is initiated in the embryonic stage. Oogenesis in non-human mammals In mammals, the first part of oogenesis starts in the germinal epithelium, which gives rise to the development of ovarian follicles, the functional unit of the ovary. Oogenesis consists of several sub-processes: oocytogenesis, ootidogenesis, and finally maturation to form an ovum (oogenesis proper). Folliculogenesis is a separate sub-process that accompanies and supports all three oogenetic sub-processes. Oogonium —(Oocytogenesis)—> Primary Oocyte —(Meiosis I)—> First Polar body (Discarded afterward) + Secondary oocyte —(Meiosis II)—> Second Polar Body (Discarded afterward) + Ovum Oocyte meiosis, important to all animal life cycles yet unlike all other instances of animal cell division, occurs completely without the aid of spindle-coordinating centrosomes. The creation of oogonia The creation of oogonia traditionally doesn't belong to oogenesis proper, but, instead, to the common process of gametogenesis, which, in the female human, begins with the processes of folliculogenesis, oocytogenesis, and ootidogenesis. Oogonia enter meiosis during embryonic development, becoming oocytes. Meiosis begins with DNA replication and meiotic crossing over. It then stops in early prophase. Maintenance of meiotic arrest Mammalian oocytes are maintained in meiotic prophase arrest for a very long time—months in mice, years in humans. Initially the arrest is due to lack of sufficient cell cycle proteins to allow meiotic progression. However, as the oocyte grows, these proteins are synthesized, and meiotic arrest becomes dependent on cyclic AMP. The cyclic AMP is generated by the oocyte by adenylyl cyclase in the oocyte membrane. The adenylyl cyclase is kept active by a constitutively active G-protein-coupled Document 1::: Induced ovulation is when a female animal ovulates due to an externally-derived stimulus during, or just prior to, mating, rather than ovulating cyclically or spontaneously. Stimuli causing induced ovulation include the physical act of coitus or mechanical stimulation simulating this, sperm and pheromones. Ovulation occurs at the ovary surface and is described as the process in which an oocyte (female germ cell) is released from the follicle. Ovulation is a non-deleterious 'inflammatory response' which is initiated by a luteinizing hormone (LH) surge. The mechanism of ovulation varies between species. In humans the ovulation process occurs around day 14 of the menstrual cycle, this can also be referred to as 'cyclical spontaneous ovulation'. However the monthly menstruation process is typically linked to humans and primates, all other animal species ovulate by various other mechanisms. Spontaneous ovulation is the ovulatory process in which the maturing ovarian follicles secrete ovarian steroids to generate pulsatile GnRH (the neuropeptide which controls all vertebrate reproductive function) release into the median eminence (the area which connects the hypothalamus to the anterior pituitary gland) to ultimately cause a pre-ovulatory LH surge. Spontaneously ovulating species go through menstrual cycles and are fertile at certain times based on what part of the cycle they are in. 
Species in which the females are spontaneous ovulators include rats, mice, guinea pigs, horse, pigs, sheep, monkeys, and humans. Induced ovulation is the process in which the pre-ovulatory LH surge and therefore ovulation is induced by some component of coitus e.g. receipt of genital stimulation. Usually, spontaneous steroid-induced LH surges are not observed in induced ovulator species throughout their reproductive cycles, which indicates that GnRH release is absent or reduced due to lack of positive feedback action from steroid hormones. However, by contradiction, some spontaneously ovu Document 2::: Interphase is the portion of the cell cycle that is not accompanied by visible changes under the microscope, and includes the G1, S and G2 phases. During interphase, the cell grows (G1), replicates its DNA (S) and prepares for mitosis (G2). A cell in interphase is not simply quiescent. The term quiescent (i.e. dormant) would be misleading since a cell in interphase is very busy synthesizing proteins, copying DNA into RNA, engulfing extracellular material, processing signals, to name just a few activities. The cell is quiescent only in the sense of cell division (i.e. the cell is out of the cell cycle, G0). Interphase is the phase of the cell cycle in which a typical cell spends most of its life. Interphase is the 'daily living' or metabolic phase of the cell, in which the cell obtains nutrients and metabolizes them, grows, replicates its DNA in preparation for mitosis, and conducts other "normal" cell functions. Interphase was formerly called the resting phase. However, interphase does not describe a cell that is merely resting; rather, the cell is living and preparing for later cell division, so the name was changed. A common misconception is that interphase is the first stage of mitosis, but since mitosis is the division of the nucleus, prophase is actually the first stage. In interphase, the cell gets itself ready for mitosis or meiosis. Somatic cells, or normal diploid cells of the body, go through mitosis in order to reproduce themselves through cell division, whereas diploid germ cells (i.e., primary spermatocytes and primary oocytes) go through meiosis in order to create haploid gametes (i.e., sperm and ova) for the purpose of sexual reproduction. Stages of interphase There are three stages of cellular interphase, with each phase ending when a cellular checkpoint checks the accuracy of the stage's completion before proceeding to the next. The stages of interphase are: G1 (Gap 1), in which the cell grows and functions normally. During this time, a high a Document 3::: An immature ovum is a cell that goes through the process of oogenesis to become an ovum. It can be an oogonium, an oocyte, or an ootid. An oocyte, in turn, can be either primary or secondary, depending on how far it has come in its process of meiosis. Oogonium Oogonia are the cells that turn into primary oocytes in oogenesis. They are diploid, i.e. Oogonia are created in early embryonic life. All have turned into primary oocytes at late fetal age. Primary oocyte The primary oocyte is defined by its process of ootidogenesis, which is meiosis. It has duplicated its DNA, so that each chromosome has two chromatids, i.e. 92 chromatids all in all (4C). When meiosis I is completed, one secondary oocyte and one polar body is created. Primary oocytes have been created in late fetal life. This is the stage where immature ova spend most of their lifetime, more specifically in diplotene of prophase I of meiosis. 
The halt is called dictyate. Most primary oocytes degenerate by atresia, but a few go through ovulation, which triggers the next step. Thus, an immature ovum can spend up to ~55 years as a primary oocyte (the last ovulation before menopause). Secondary oocyte The secondary oocyte is the cell that is formed by meiosis I in oogenesis. Thus, it has only one of each pair of homologous chromosomes. In other words, it is haploid. However, each chromosome still has two chromatids, making a total of 46 chromatids (1N but 2C). The secondary oocyte continues the second stage of meiosis (meiosis II), and the daughter cells are one ootid and one polar body. A secondary oocyte is the immature ovum from shortly after ovulation until fertilization, when it turns into an ootid. Thus, the time spent as a secondary oocyte is measured in days. Document 4::: A secondary oocyte is the immature ovum from shortly after ovulation until fertilization, when it turns into an ootid. Thus, the time spent as a secondary oocyte is measured in days. Ootid An ootid is the haploid result of ootidogenesis. In oogenesis, it has little significance in itself, since it is very similar to the ovum. However, it serves as the female counterpart of the male spermatid in spermatogenesis The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What phase follows ovulation? A. varicose phase B. interstitial phase C. telophase D. luteal phase Answer:
sciq-11345
multiple_choice
The vas deferens and ejaculatory ducts transport sperm from the epididymes to the urethra in what system?
[ "famous reproductive system", "male reproductive system", "cardiovascular system", "digestive system" ]
B
Relavent Documents: Document 0::: Reproductive biology includes both sexual and asexual reproduction. Reproductive biology includes a wide number of fields: Reproductive systems Endocrinology Sexual development (Puberty) Sexual maturity Reproduction Fertility Human reproductive biology Endocrinology Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands. Reproductive systems Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring. Female reproductive system The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth. These structures include: Ovaries Oviducts Uterus Vagina Mammary Glands Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female. Male reproductive system The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia. Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract. Animal Reproductive Biology Animal reproduction oc Document 1::: The reproductive system of an organism, also known as the genital system, is the biological system made up of all the anatomical organs involved in sexual reproduction. Many non-living substances such as fluids, hormones, and pheromones are also important accessories to the reproductive system. Unlike most organ systems, the sexes of differentiated species often have significant differences. These differences allow for a combination of genetic material between two individuals, which allows for the possibility of greater genetic fitness of the offspring. Animals In mammals, the major organs of the reproductive system include the external genitalia (penis and vulva) as well as a number of internal organs, including the gamete-producing gonads (testicles and ovaries). Diseases of the human reproductive system are very common and widespread, particularly communicable sexually transmitted diseases. Most other vertebrates have similar reproductive systems consisting of gonads, ducts, and openings. However, there is a great diversity of physical adaptations as well as reproductive strategies in every group of vertebrates. Vertebrates Vertebrates share key elements of their reproductive systems. They all have gamete-producing organs known as gonads. In females, these gonads are then connected by oviducts to an opening to the outside of the body, typically the cloaca, but sometimes to a unique pore such as a vagina or intromittent organ. 
Humans The human reproductive system usually involves internal fertilization by sexual intercourse. During this process, the male inserts their erect penis into the female's vagina and ejaculates semen, which contains sperm. The sperm then travels through the vagina and cervix into the uterus or fallopian tubes for fertilization of the ovum. Upon successful fertilization and implantation, gestation of the fetus then occurs within the female's uterus for approximately nine months, this process is known as pregnancy in humans. Gestati Document 2::: Urination is the release of urine from the bladder through the urethra to the outside of the body. It is the urinary system's form of excretion. It is also known medically as micturition, voiding, uresis, or, rarely, emiction, and known colloquially by various names including peeing, weeing, pissing, and euphemistically going (for a) number one. In healthy humans and other animals, the process of urination is under voluntary control. In infants, some elderly individuals, and those with neurological injury, urination may occur as a reflex. It is normal for adult humans to urinate up to seven times during the day. In some animals, in addition to expelling waste material, urination can mark territory or express submissiveness. Physiologically, urination involves coordination between the central, autonomic, and somatic nervous systems. Brain centres that regulate urination include the pontine micturition center, periaqueductal gray, and the cerebral cortex. In placental mammals, urine is drained through the urinary meatus, a urethral opening in the male penis or female vulval vestibule. Anatomy and physiology Anatomy of the bladder and outlet The main organs involved in urination are the urinary bladder and the urethra. The smooth muscle of the bladder, known as the detrusor, is innervated by sympathetic nervous system fibers from the lumbar spinal cord and parasympathetic fibers from the sacral spinal cord. Fibers in the pelvic nerves constitute the main afferent limb of the voiding reflex; the parasympathetic fibers to the bladder that constitute the excitatory efferent limb also travel in these nerves. Part of the urethra is surrounded by the male or female external urethral sphincter, which is innervated by the somatic pudendal nerve originating in the cord, in an area termed Onuf's nucleus. Smooth muscle bundles pass on either side of the urethra, and these fibers are sometimes called the internal urethral sphincter, although they do not encircle the urethra. Document 3::: Mesonephric tubules are genital ridges that are next to the mesonephros. In males, some of the mesonephric kidney tubules, instead of being used to filter blood like the rest, "grow" over to the developing testes, penetrate them, and become connected to the seminiferous tubules of the testes. They also form the epididymis and the paradidymis. The sperm differentiate inside the seminiferous tubules, then swim down these tubes, then through these special mesonephric tubules, and go down inside Wolffian duct, to the coelom and finally to the organ the animal uses to transport sperm into females. In females, it gives rise to the epoophoron and the paroöphoron. Document 4::: This list of related male and female reproductive organs shows how the male and female reproductive organs and the development of the reproductive system are related, sharing a common developmental path. This makes them biological homologues. 
These organs differentiate into the respective sex organs in males and females. List Internal organs External organs The external genitalia of both males and females have similar origins. They arise from the genital tubercle that forms anterior to the cloacal folds (proliferating mesenchymal cells around the cloacal membrane). The caudal aspect of the cloacal folds further subdivides into the posterior anal folds and the anterior urethral folds. Bilateral to the urethral fold, genital swellings (tubercles) become prominent. These structures are the future scrotum and labia majora in males and females, respectively. The genital tubercles of an eight-week-old embryo of either sex are identical. They both have a glans area, which will go on to form the glans clitoridis (females) or glans penis (males), a urogenital fold and groove, and an anal tubercle. At around ten weeks, the external genitalia are still similar. At the base of the glans, there is a groove known as the coronal sulcus or corona glandis. It is the site of attachment of the future prepuce. Just anterior to the anal tubercle, the caudal end of the left and right urethral folds fuse to form the urethral raphe. The lateral part of the genital tubercle (called the lateral tubercle) grows longitudinally and is about the same length in either sex. Human physiology The male external genitalia include the penis and the scrotum. The female external genitalia include the clitoris, the labia, and the vaginal opening, which are collectively called the vulva. External genitalia vary widely in external appearance among different people. One difference between the glans penis and the glans clitoridis is that the glans clitoridis packs nerve endings into a volume only about The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The vas deferens and ejaculatory ducts transport sperm from the epididymes to the urethra in what system? A. famous reproductive system B. male reproductive system C. cardiovascular system D. digestive system Answer:
sciq-4865
multiple_choice
All living things require what, which most ecosystems obtain from the sun?
[ "heat", "water", "energy", "oxygen" ]
C
Relavent Documents: Document 0::: Plant ecology is a subdiscipline of ecology that studies the distribution and abundance of plants, the effects of environmental factors upon the abundance of plants, and the interactions among plants and between plants and other organisms. Examples of these are the distribution of temperate deciduous forests in North America, the effects of drought or flooding upon plant survival, and competition among desert plants for water, or effects of herds of grazing animals upon the composition of grasslands. A global overview of the Earth's major vegetation types is provided by O.W. Archibold. He recognizes 11 major vegetation types: tropical forests, tropical savannas, arid regions (deserts), Mediterranean ecosystems, temperate forest ecosystems, temperate grasslands, coniferous forests, tundra (both polar and high mountain), terrestrial wetlands, freshwater ecosystems and coastal/marine systems. This breadth of topics shows the complexity of plant ecology, since it includes plants from floating single-celled algae up to large canopy forming trees. One feature that defines plants is photosynthesis. Photosynthesis is the process of a chemical reactions to create glucose and oxygen, which is vital for plant life. One of the most important aspects of plant ecology is the role plants have played in creating the oxygenated atmosphere of earth, an event that occurred some 2 billion years ago. It can be dated by the deposition of banded iron formations, distinctive sedimentary rocks with large amounts of iron oxide. At the same time, plants began removing carbon dioxide from the atmosphere, thereby initiating the process of controlling Earth's climate. A long term trend of the Earth has been toward increasing oxygen and decreasing carbon dioxide, and many other events in the Earth's history, like the first movement of life onto land, are likely tied to this sequence of events. One of the early classic books on plant ecology was written by J.E. Weaver and F.E. Clements. It Document 1::: A nutrient is a substance used by an organism to survive, grow, and reproduce. The requirement for dietary nutrient intake applies to animals, plants, fungi, and protists. Nutrients can be incorporated into cells for metabolic purposes or excreted by cells to create non-cellular structures, such as hair, scales, feathers, or exoskeletons. Some nutrients can be metabolically converted to smaller molecules in the process of releasing energy, such as for carbohydrates, lipids, proteins, and fermentation products (ethanol or vinegar), leading to end-products of water and carbon dioxide. All organisms require water. Essential nutrients for animals are the energy sources, some of the amino acids that are combined to create proteins, a subset of fatty acids, vitamins and certain minerals. Plants require more diverse minerals absorbed through roots, plus carbon dioxide and oxygen absorbed through leaves. Fungi live on dead or living organic matter and meet nutrient needs from their host. Different types of organisms have different essential nutrients. Ascorbic acid (vitamin C) is essential, meaning it must be consumed in sufficient amounts, to humans and some other animal species, but some animals and plants are able to synthesize it. Nutrients may be organic or inorganic: organic compounds include most compounds containing carbon, while all other chemicals are inorganic. 
Inorganic nutrients include nutrients such as iron, selenium, and zinc, while organic nutrients include, among many others, energy-providing compounds and vitamins. A classification used primarily to describe nutrient needs of animals divides nutrients into macronutrients and micronutrients. Consumed in relatively large amounts (grams or ounces), macronutrients (carbohydrates, fats, proteins, water) are primarily used to generate energy or to incorporate into tissues for growth and repair. Micronutrients are needed in smaller amounts (milligrams or micrograms); they have subtle biochemical and physiologi Document 2::: Terrestrial ecosystems are ecosystems that are found on land. Examples include tundra, taiga, temperate deciduous forest, tropical rain forest, grassland, deserts. Terrestrial ecosystems differ from aquatic ecosystems by the predominant presence of soil rather than water at the surface and by the extension of plants above this soil/water surface in terrestrial ecosystems. There is a wide range of water availability among terrestrial ecosystems (including water scarcity in some cases), whereas water is seldom a limiting factor to organisms in aquatic ecosystems. Because water buffers temperature fluctuations, terrestrial ecosystems usually experience greater diurnal and seasonal temperature fluctuations than do aquatic ecosystems in similar climates. Terrestrial ecosystems are of particular importance especially in meeting Sustainable Development Goal 15 that targets the conservation-restoration and sustainable use of terrestrial ecosystems. Organisms and processes Organisms in terrestrial ecosystems have adaptations that allow them to obtain water when the entire body is no longer bathed in that fluid, means of transporting the water from limited sites of acquisition to the rest of the body, and means of preventing the evaporation of water from body surfaces. They also have traits that provide body support in the atmosphere, a much less buoyant medium than water, and other traits that render them capable of withstanding the extremes of temperature, wind, and humidity that characterize terrestrial ecosystems. Finally, the organisms in terrestrial ecosystems have evolved many methods of transporting gametes in environments where fluid flow is much less effective as a transport medium. This is terrestrial ecosystems. Size and plants Terrestrial ecosystems occupy 55,660,000 mi2 (144,150,000 km2), or 28.26% of Earth's surface. Major plant taxa in terrestrial ecosystems are members of the division Magnoliophyta (flowering plants), of which there are about 275,000 Document 3::: Earth system science (ESS) is the application of systems science to the Earth. In particular, it considers interactions and 'feedbacks', through material and energy fluxes, between the Earth's sub-systems' cycles, processes and "spheres"—atmosphere, hydrosphere, cryosphere, geosphere, pedosphere, lithosphere, biosphere, and even the magnetosphere—as well as the impact of human societies on these components. At its broadest scale, Earth system science brings together researchers across both the natural and social sciences, from fields including ecology, economics, geography, geology, glaciology, meteorology, oceanography, climatology, paleontology, sociology, and space science. 
Like the broader subject of systems science, Earth system science assumes a holistic view of the dynamic interaction between the Earth's spheres and their many constituent subsystems fluxes and processes, the resulting spatial organization and time evolution of these systems, and their variability, stability and instability. Subsets of Earth System science include systems geology and systems ecology, and many aspects of Earth System science are fundamental to the subjects of physical geography and climate science. Definition The Science Education Resource Center, Carleton College, offers the following description: "Earth System science embraces chemistry, physics, biology, mathematics and applied sciences in transcending disciplinary boundaries to treat the Earth as an integrated system. It seeks a deeper understanding of the physical, chemical, biological and human interactions that determine the past, current and future states of the Earth. Earth System science provides a physical basis for understanding the world in which we live and upon which humankind seeks to achieve sustainability". Earth System science has articulated four overarching, definitive and critically important features of the Earth System, which include: Variability: Many of the Earth System's natural 'modes' and variab Document 4::: The Inverness Campus is an area in Inverness, Scotland. 5.5 hectares of the site have been designated as an enterprise area for life sciences by the Scottish Government. This designation is intended to encourage research and development in the field of life sciences, by providing incentives to locate at the site. The enterprise area is part of a larger site, over 200 acres, which will house Inverness College, Scotland's Rural College (SRUC), the University of the Highlands and Islands, a health science centre and sports and other community facilities. The purpose built research hub will provide space for up to 30 staff and researchers, allowing better collaboration. The Highland Science Academy will be located on the site, a collaboration formed by Highland Council, employers and public bodies. The academy will be aimed towards assisting young people to gain the necessary skills to work in the energy, engineering and life sciences sectors. History The site was identified in 2006. Work started to develop the infrastructure on the site in early 2012. A virtual tour was made available in October 2013 to help mark Doors Open Day. The construction had reached halfway stage in May 2014, meaning that it is on track to open doors to receive its first students in August 2015. In May 2014, work was due to commence on a building designed to provide office space and laboratories as part of the campus's "life science" sector. Morrison Construction have been appointed to undertake the building work. Scotland's Rural College (SRUC) will be able to relocate their Inverness-based activities to the Campus. SRUC's research centre for Comparative Epidemiology and Medicine, and Agricultural Business Consultancy services could co-locate with UHI where their activities have complementary themes. By the start of 2017, there were more than 600 people working at the site. In June 2021, a new bridge opened connecting Inverness Campus to Inverness Shopping Park. It crosses the Aberdeen The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. All living things require what, which most ecosystems obtain from the sun? A. heat B. water C. energy D. 
oxygen Answer:
sciq-6848
multiple_choice
Which of the electromagnetic waves have the shortest wavelengths and highest frequencies?
[ "plasma", "gamma", "ultraviolet", "beta" ]
B
Relavent Documents: Document 0::: Terahertz radiation – also known as submillimeter radiation, terahertz waves, tremendously high frequency (THF), T-rays, T-waves, T-light, T-lux or THz – consists of electromagnetic waves within the ITU-designated band of frequencies from 0.3 to 3 terahertz (THz), although the upper boundary is somewhat arbitrary and is considered by some sources as 30 THz. One terahertz is 1012 Hz or 1000 GHz. Wavelengths of radiation in the terahertz band correspondingly range from 1 mm to 0.1 mm = 100 µm. Because terahertz radiation begins at a wavelength of around 1 millimeter and proceeds into shorter wavelengths, it is sometimes known as the submillimeter band, and its radiation as submillimeter waves, especially in astronomy. This band of electromagnetic radiation lies within the transition region between microwave and far infrared, and can be regarded as either. At some frequencies, terahertz radiation is strongly absorbed by the gases of the atmosphere, and in air is attenuated to zero within a few meters, so it is not practical for terrestrial radio communication at such frequencies. However, there are frequency windows in Earth's atmosphere, where the terahertz radiation could propagate up to 1 km or even longer depending on atmospheric conditions. The most important is the 0.3 THz band that will be used for 6G communications. It can penetrate thin layers of materials but is blocked by thicker objects. THz beams transmitted through materials can be used for material characterization, layer inspection, relief measurement, and as a lower-energy alternative to X-rays for producing high resolution images of the interior of solid objects. Terahertz radiation occupies a middle ground where the ranges of microwaves and infrared light waves overlap, known as the “terahertz gap”; it is called a “gap” because the technology for its generation and manipulation is still in its infancy. The generation and modulation of electromagnetic waves in this frequency range ceases to be pos Document 1::: The IEEE Heinrich Hertz Medal was a science award presented by the IEEE for outstanding achievements in the field of electromagnetic waves. The medal was named in honour of German physicist Heinrich Hertz, and was first proposed in 1986 by IEEE Region 8 (Germany) as a centennial recognition of Hertz's work on electromagnetic radiation theory from 1886 to 1891. The medal was first awarded in 1988, and was presented annually until 2001. It was officially discontinued in November 2009. Recipients 1988: Hans-Georg Unger (Technical University at Brunswick, Germany) for outstanding merits in radio-frequency science, particularly the theory of dielectric wave guides and their application in modern wide-band communication. 1989: Nathan Marcuvitz (Polytechnic University of New York, United States) for fundamental theoretical and experimental contributions to the engineering formulation of electromagnetic field theory. 1990: John D. Kraus (Ohio State University, United States) for pioneering work in radio astronomy and the development of the helical antenna and the corner reflector antenna. 1991: Leopold B. Felsen (Polytechnic University of New York, United States) for highly original and significant developments in the theories of propagation, diffraction and dispersion of electromagnetic waves. 1992: James R. 
Wait (University of Arizona, United States) for fundamental contributions to electromagnetic theory, to the study of propagation of Hertzian waves through the atmosphere, ionosphere and the Earth, and to their applications in communications, navigation and geophysical exploration. 1993: Kenneth Budden (Cavendish Laboratory, University of Cambridge, United Kingdom) for major original contributions to the theory of electromagnetic waves in ionized media with applications to terrestrial and space communications. 1994: Ronald N. Bracewell (Stanford University, United States) for pioneering work in antenna aperture synthesis and image reconstruction as applied to radioast Document 2::: International Journal of Antennas and Propagation is a peer reviewed, scientific open access journal that publishes original and review articles in all areas of antennas and propagation. The editor-in-chief is Slawomir Koziel. Abstracting and indexing This journal is abstracted and indexed by the following services Academic Onefile Aerospace and High Technology Database Aluminium Industry Abstracts Current Contents - Engineering, Computing and Technology EBSCO Ei Compendex INSPEC Science Citation Index Scopus Solid State and Superconductivity Abstracts Document 3::: Andrea Neto from the TU Delft- Delft University of Technology, Delft, Netherlands was named Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2016 for contributions to dielectric lens antennas and wideband arrays. Document 4::: Longitudinal-section modes are a set of a particular kind of electromagnetic transmission modes found in some types of transmission line. They are a subset of hybrid electromagnetic modes (HEM modes). HEM modes are those modes that have both an electric field and a magnetic field component longitudinally in the direction of travel of the propagating wave. Longitudinal-section modes, additionally, have a component of either magnetic or electric field that is zero in one transverse direction. In longitudinal-section electric (LSE) modes this field component is electric. In longitudinal-section magnetic (LSM) modes the zero field component is magnetic. Hybrid modes are to be compared to transverse modes which have, at most, only one component of either electric or magnetic field in the longitudinal direction. Derivation and notation There is an analogy between the way transverse modes (TE and TM modes) are arrived at and the definition of longitudinal section modes (LSE and LSM modes). When determining whether a structure can support a particular TE mode, one sets the electric field in the direction (the longitudinal direction of the line) to zero and then solves Maxwell's equations for the boundary conditions set by the physical structure of the line. One can just as easily set the electric field in the direction to zero and ask what modes that gives rise to. Such modes are designated LSE{x} modes. Similarly there can be LSE{y} modes and, analogously for the magnetic field, LSM{x} and LSM{y} modes. When dealing with longitudinal-section modes, the TE and TM modes are sometimes written as LSE{z} and LSM{z} respectively to produce a consistent set of notations and to reflect the analogous way in which they are defined. Both LSE and LSM modes are a linear superposition of the corresponding TE and TM modes (that is, the modes with the same suffix numbers). 
Thus, in general, the LSE and LSM modes have a longitudinal component of both electric and magnetic The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which of the electromagnetic waves have the shortest wavelengths and highest frequencies? A. plasma B. gamma C. ultraviolet D. beta Answer:
sciq-2820
multiple_choice
What is a soft, gray, nontoxic alkaline earth metal?
[ "potassium", "calcium", "magnesium", "pewter" ]
B
Relavent Documents: Document 0::: See also List of minerals Document 1::: Major innovations in materials technology BC 28,000 BC – People wear beads, bracelets, and pendants 14,500 BC – First pottery, made by the Jōmon people of Japan. 6th millennium BC – Copper metallurgy is invented and copper is used for ornamentation (see Pločnik article) 2nd millennium BC – Bronze is used for weapons and armor 16th century BC – The Hittites develop crude iron metallurgy 13th century BC – Invention of steel when iron and charcoal are combined properly 10th century BC – Glass production begins in ancient Near East 1st millennium BC – Pewter beginning to be used in China and Egypt 1000 BC – The Phoenicians introduce dyes made from the purple murex. 3rd century BC – Wootz steel, the first crucible steel, is invented in ancient India 50s BC – Glassblowing techniques flourish in Phoenicia 20s BC – Roman architect Vitruvius describes low-water-content method for mixing concrete 1st millennium 3rd century – Cast iron widely used in Han Dynasty China 300 – Greek alchemist Zomius, summarizing the work of Egyptian alchemists, describes arsenic and lead acetate 4th century – Iron pillar of Delhi is the oldest surviving example of corrosion-resistant steel 8th century – Porcelain is invented in Tang Dynasty China 8th century – Tin-glazing of ceramics invented by Muslim chemists and potters in Basra, Iraq 9th century – Stonepaste ceramics invented in Iraq 900 – First systematic classification of chemical substances appears in the works attributed to Jābir ibn Ḥayyān (Latin: Geber) and in those of the Persian alchemist and physician Abū Bakr al-Rāzī ( 865–925, Latin: Rhazes) 900 – Synthesis of ammonium chloride from organic substances described in the works attributed to Jābir ibn Ḥayyān (Latin: Geber) 900 – Abū Bakr al-Rāzī describes the preparation of plaster of Paris and metallic antimony 9th century – Lustreware appears in Mesopotamia 2nd millennium 1000 – Gunpowder is developed in China 1340 – In Liège, Belgium, the first blast furnaces for the production Document 2::: Materials science has shaped the development of civilizations since the dawn of mankind. Better materials for tools and weapons has allowed mankind to spread and conquer, and advancements in material processing like steel and aluminum production continue to impact society today. Historians have regarded materials as such an important aspect of civilizations such that entire periods of time have defined by the predominant material used (Stone Age, Bronze Age, Iron Age). For most of recorded history, control of materials had been through alchemy or empirical means at best. The study and development of chemistry and physics assisted the study of materials, and eventually the interdisciplinary study of materials science emerged from the fusion of these studies. The history of materials science is the study of how different materials were used and developed through the history of Earth and how those materials affected the culture of the peoples of the Earth. The term "Silicon Age" is sometimes used to refer to the modern period of history during the late 20th to early 21st centuries. Prehistory In many cases, different cultures leave their materials as the only records; which anthropologists can use to define the existence of such cultures. The progressive use of more sophisticated materials allows archeologists to characterize and distinguish between peoples. 
This is partially due to the major material of use in a culture and to its associated benefits and drawbacks. Stone-Age cultures were limited by which rocks they could find locally and by which they could acquire by trading. The use of flint around 300,000 BCE is sometimes considered the beginning of the use of ceramics. The use of polished stone axes marks a significant advance, because a much wider variety of rocks could serve as tools. The innovation of smelting and casting metals in the Bronze Age started to change the way that cultures developed and interacted with each other. Starting around 5,500 BCE, Document 3::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with Document 4::: The Goldschmidt classification, developed by Victor Goldschmidt (1888–1947), is a geochemical classification which groups the chemical elements within the Earth according to their preferred host phases into lithophile (rock-loving), siderophile (iron-loving), chalcophile (sulfide ore-loving or chalcogen-loving), and atmophile (gas-loving) or volatile (the element, or a compound in which it occurs, is liquid or gaseous at ambient surface conditions). Some elements have affinities to more than one phase. The main affinity is given in the table below and a discussion of each group follows that table. Lithophile elements Lithophile elements are those that remain on or close to the surface because they combine readily with oxygen, forming compounds that do not sink into the Earth's core. 
The lithophile elements include: Al, B, Ba, Be, Br, Ca, Cl, Cr, Cs, F, I, Hf, K, Li, Mg, Na, Nb, O, P, Rb, Sc, Si, Sr, Ta, Th, Ti, U, V, Y, Zr, W and the lanthanides or rare earth elements (REE). Lithophile elements mainly consist of the highly reactive metals of the s- and f-blocks. They also include a small number of reactive nonmetals, and the more reactive metals of the d-block such as titanium, zirconium and vanadium. Lithophile derives from "lithos" which means "rock", and "phileo" which means "love". Most lithophile elements form very stable ions with an electron configuration of a noble gas (sometimes with additional f-electrons). The few that do not, such as silicon, phosphorus and boron, form extremely strong covalent bonds with oxygen – often involving pi bonding. Their strong affinity for oxygen causes lithophile elements to associate very strongly with silica, forming relatively low-density minerals that thus float to the Earth's crust. The more soluble minerals formed by the alkali metals tend to concentrate in seawater or extremely arid regions where they can crystallise. The less soluble lithophile elements are concentrated on ancient continental shields where all so The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is a soft, gray, nontoxic alkaline earth metal? A. potassium B. calcium C. magnesium D. pewter Answer:
scienceQA-10044
multiple_choice
Select the mammal.
[ "hippopotamus", "great white shark", "arroyo toad", "great crested newt" ]
A
An arroyo toad is an amphibian. It has moist skin and begins its life in water. Toads do not have teeth! They swallow their food whole. A great crested newt is an amphibian. It has moist skin and begins its life in water. Some newts live in water. Other newts live on land but lay their eggs in water. A great white shark is a fish. It lives underwater. It has fins, not limbs. Great white sharks can live for up to 70 years. A hippopotamus is a mammal. It has hair and feeds its young milk. Hippopotamuses keep cool by lying in mud or water.
Relavent Documents: Document 0::: In zoology, mammalogy is the study of mammals – a class of vertebrates with characteristics such as homeothermic metabolism, fur, four-chambered hearts, and complex nervous systems. Mammalogy has also been known as "mastology," "theriology," and "therology." The archive of number of mammals on earth is constantly growing, but is currently set at 6,495 different mammal species including recently extinct. There are 5,416 living mammals identified on earth and roughly 1,251 have been newly discovered since 2006. The major branches of mammalogy include natural history, taxonomy and systematics, anatomy and physiology, ethology, ecology, and management and control. The approximate salary of a mammalogist varies from $20,000 to $60,000 a year, depending on their experience. Mammalogists are typically involved in activities such as conducting research, managing personnel, and writing proposals. Mammalogy branches off into other taxonomically-oriented disciplines such as primatology (study of primates), and cetology (study of cetaceans). Like other studies, mammalogy is also a part of zoology which is also a part of biology, the study of all living things. Research purposes Mammalogists have stated that there are multiple reasons for the study and observation of mammals. Knowing how mammals contribute or thrive in their ecosystems gives knowledge on the ecology behind it. Mammals are often used in business industries, agriculture, and kept for pets. Studying mammals habitats and source of energy has led to aiding in survival. The domestication of some small mammals has also helped discover several different diseases, viruses, and cures. Mammalogist A mammalogist studies and observes mammals. In studying mammals, they can observe their habitats, contributions to the ecosystem, their interactions, and the anatomy and physiology. A mammalogist can do a broad variety of things within the realm of mammals. A mammalogist on average can make roughly $58,000 a year. 
This dep Document 1::: Mammals Alces alces (Linnaeus, 1758) — Eurasian elk, moose Axis axis (Erxleben, 1777) — chital, axis deer Bison bison (Linnaeus, 1758) — American bison, buffalo Capreolus capreolus (Linnaeus, 1758) — European roe deer, roe deer Caracal caracal (Schreber, 1776) — caracal Chinchilla chinchilla (Lichtenstein, 1829) — short-tailed chinchilla Chiropotes chiropotes (Humboldt, 1811) — red-backed bearded saki Cricetus cricetus (Linnaeus, 1758) — common hamster, European hamster Crocuta crocuta (Erxleben, 1777) — spotted hyena Dama dama (Linnaeus, 1758) — European fallow deer Feroculus feroculus (Kelaart, 1850) — Kelaart's long-clawed shrew Gazella gazella (Pallas, 1766) — mountain gazelle Genetta genetta (Linnaeus, 1758) — common genet Gerbillus gerbillus (Olivier, 1801) — lesser Egyptian gerbil Giraffa giraffa (von Schreber, 1784) — southern giraffe Glis glis (Linnaeus, 1766) — European edible dormouse, European fat dormouse Gorilla gorilla (Savage, 1847) — western gorilla Gulo gulo (Linnaeus, 1758) — wolverine Hoolock hoolock (Harlan, 1834) — western hoolock gibbon Hyaena hyaena (Linnaeus, 1758) — striped hyena Indri indri (Gmelin, 1788) — indri Jaculus jaculus (Linnaeus, 1758) — lesser Egyptian jerboa Lagurus lagurus (Pallas, 1773) — steppe vole, steppe lemming Lemmus lemmus (Linnaeus, 1758) — Norway lemming Lutra lutra (Linnaeus, 1758) — European otter Lynx lynx (Linnaeus, 1758) — Eurasian lynx Macrophyllum macrophyllum (Schinz, 1821) — long-legged bat Marmota marmota (Linnaeus, 1758) — Alpine marmot Martes martes (Linnaeus, 1758) — European pine marten, pine marten Meles meles (Linnaeus, 1758) — European badg Document 2::: Vertebrate zoology is the biological discipline that consists of the study of Vertebrate animals, i.e., animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. Many natural history museums have departments named Vertebrate Zoology. In some cases whole museums bear this name, e.g. the Museum of Vertebrate Zoology at the University of California, Berkeley. Subdivisions This subdivision of zoology has many further subdivisions, including: Ichthyology - the study of fishes. Mammalogy - the study of mammals. Chiropterology - the study of bats. Primatology - the study of primates. Ornithology - the study of birds. Herpetology - the study of reptiles. Batrachology - the study of amphibians. These divisions are sometimes further divided into more specific specialties. Document 3::: The following is a list of megafauna discovered by science since the beginning of the 19th century (with their respective date of discovery). Some of these may have been known to native peoples or reported anecdotally but had not been generally acknowledged as confirmed by the scientific world, until conclusive evidence was obtained for formal studies. In other cases, certain animals were initially considered hoaxes – similar to the initial reception of mounted specimens of the duck-billed platypus (Ornithorhynchus anatinus) in late 18th-century Europe. The definition of megafauna varies, but this list includes some of the more notable examples. 
Megafauna believed extinct, but rediscovered Burchell's zebra (Equus quagga burchellii), 2004 Megafauna previously unknown from the fossil record Western grey kangaroo Notamacropus fuliginosus (1817) Malayan tapir Tapirus indicus (1819) Red kangaroo Osphranter rufus (1822) Lowland anoa Bubalus depressicornis (1827) Mountain tapir Tapirus pinchaque (1829) Baird's tapir Tapirus bairdii (1865) Bonobo Pan paniscus (1928) Kouprey Bos sauveli (1937) Saola Pseudoryx nghetinhensis (1993) Megafauna initially believed to have been fictitious or hoaxes Przewalski's horse Equus ferus przewalskii (1881 - current wild population descended from zoo breeding since 1945) Okapi Okapia johnstoni (1901) See also Mammalia List of mammals described in the 2000s Document 4::: In zoology, megafauna (from Greek μέγας megas "large" and Neo-Latin fauna "animal life") are large animals. The most common thresholds to be a megafauna are weighing over (i.e., having a mass comparable to or larger than a human) or weighing over a tonne, (i.e., having a mass comparable to or larger than an ox). The first of these include many species not popularly thought of as overly large, and being the only few large animals left in a given range/area, such as white-tailed deer, Thomson's gazelle, and red kangaroo. In practice, the most common usage encountered in academic and popular writing describes land mammals roughly larger than a human that are not (solely) domesticated. The term is especially associated with the Pleistocene megafauna – the land animals that are considered archetypical of the last ice age, such as mammoths, the majority of which in northern Eurasia, Australia-New Guinea and the Americas became extinct within the last forty thousand years. Among living animals, the term megafauna is most commonly used for the largest extant terrestrial mammals, which includes (but is not limited to) elephants, giraffes, hippopotamuses, rhinoceroses, and large bovines. Of these five categories of large herbivores, only bovines are presently found outside of Africa and southern Asia, but all the others were formerly more wide-ranging, with their ranges and populations continually shrinking and decreasing over time. Wild equines are another example of megafauna, but their current ranges are largely restricted to the Old World, specifically Africa and Asia. Megafaunal species may be categorized according to their dietary type: megaherbivores (e.g., elephants), megacarnivores (e.g., lions), and, more rarely, megaomnivores (e.g., bears). The megafauna is also categorized by the class of animals that it belongs to, which are mammals, birds, reptiles, amphibians, fish, and invertebrates. Other common uses are for giant aquatic species, especially whales, as The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Select the mammal. A. hippopotamus B. great white shark C. arroyo toad D. great crested newt Answer:
sciq-736
multiple_choice
What type of terminal releases neurotransmitters at a synapse?
[ "acetylcholine", "secretion", "chloride", "axon" ]
D
Relavent Documents: Document 0::: Neurotransmitters are released into a synapse in packaged vesicles called quanta. One quantum generates a miniature end plate potential (MEPP) which is the smallest amount of stimulation that one neuron can send to another neuron. Quantal release is the mechanism by which most traditional endogenous neurotransmitters are transmitted throughout the body. The aggregate sum of many MEPPs is an end plate potential (EPP). A normal end plate potential usually causes the postsynaptic neuron to reach its threshold of excitation and elicit an action potential. Electrical synapses do not use quantal neurotransmitter release and instead use gap junctions between neurons to send current flows between neurons. The goal of any synapse is to produce either an excitatory postsynaptic potential (EPSP) or an inhibitory postsynaptic potential (IPSP), which generate or repress the expression, respectively, of an action potential in the postsynaptic neuron. It is estimated that an action potential will trigger the release of approximately 20% of an axon terminal's neurotransmitter load. Quantal neurotransmitter release mechanism Neurotransmitters are synthesized in the axon terminal where they are stored in vesicles. These neurotransmitter-filled vesicles are the quanta that will be released into the synapse. Quantal vesicles release their contents into the synapse by binding to the presynaptic membrane and combining their phospholipid bilayers. Individual quanta may randomly diffuse into the synapse and cause a subsequent MEPP. These spontaneous occurrences are completely random and are not the result of any kind of signaling pathway. Calcium ion signaling to the axon terminal is the usual signal for presynaptic release of neurotransmitters. Calcium ion diffusion into the presynaptic membrane signals the axon terminal to release quanta to generate either an IPSP or EPSP in the postsynaptic membrane. Release of different neurotransmitters will lead to different postsynaptic potential Document 1::: An amino acid neurotransmitter is an amino acid which is able to transmit a nerve message across a synapse. Neurotransmitters (chemicals) are packaged into vesicles that cluster beneath the axon terminal membrane on the presynaptic side of a synapse in a process called endocytosis. Amino acid neurotransmitter release (exocytosis) is dependent upon calcium Ca2+ and is a presynaptic response. Types Excitatory amino acids (EAA) will activate post-synaptic cells. inhibitory amino acids (IAA) depress the activity of post-synaptic cells. See also Amino acid non-protein functions Monoamine neurotransmitter Document 2::: Neurotransmission (Latin: transmissio "passage, crossing" from transmittere "send, let through") is the process by which signaling molecules called neurotransmitters are released by the axon terminal of a neuron (the presynaptic neuron), and bind to and react with the receptors on the dendrites of another neuron (the postsynaptic neuron) a short distance away. A similar process occurs in retrograde neurotransmission, where the dendrites of the postsynaptic neuron release retrograde neurotransmitters (e.g., endocannabinoids; synthesized in response to a rise in intracellular calcium levels) that signal through receptors that are located on the axon terminal of the presynaptic neuron, mainly at GABAergic and glutamatergic synapses. 
Neurotransmission is regulated by several different factors: the availability and rate-of-synthesis of the neurotransmitter, the release of that neurotransmitter, the baseline activity of the postsynaptic cell, the number of available postsynaptic receptors for the neurotransmitter to bind to, and the subsequent removal or deactivation of the neurotransmitter by enzymes or presynaptic reuptake. In response to a threshold action potential or graded electrical potential, a neurotransmitter is released at the presynaptic terminal. The released neurotransmitter may then move across the synapse to be detected by and bind with receptors in the postsynaptic neuron. Binding of neurotransmitters may influence the postsynaptic neuron in either an inhibitory or excitatory way. The binding of neurotransmitters to receptors in the postsynaptic neuron can trigger either short term changes, such as changes in the membrane potential called postsynaptic potentials, or longer term changes by the activation of signaling cascades. Neurons form complex biological neural networks through which nerve impulses (action potentials) travel. Neurons do not touch each other (except in the case of an electrical synapse through a gap junction); instead, neurons intera Document 3::: The active zone or synaptic active zone is a term first used by Couteaux and Pecot-Dechavassinein in 1970 to define the site of neurotransmitter release. Two neurons make near contact through structures called synapses allowing them to communicate with each other. As shown in the adjacent diagram, a synapse consists of the presynaptic bouton of one neuron which stores vesicles containing neurotransmitter (uppermost in the picture), and a second, postsynaptic neuron which bears receptors for the neurotransmitter (at the bottom), together with a gap between the two called the synaptic cleft (with synaptic adhesion molecules, SAMs, holding the two together). When an action potential reaches the presynaptic bouton, the contents of the vesicles are released into the synaptic cleft and the released neurotransmitter travels across the cleft to the postsynaptic neuron (the lower structure in the picture) and activates the receptors on the postsynaptic membrane. The active zone is the region in the presynaptic bouton that mediates neurotransmitter release and is composed of the presynaptic membrane and a dense collection of proteins called the cytomatrix at the active zone (CAZ). The CAZ is seen under the electron microscope to be a dark (electron dense) area close to the membrane. Proteins within the CAZ tether synaptic vesicles to the presynaptic membrane and mediate synaptic vesicle fusion, thereby allowing neurotransmitter to be released reliably and rapidly when an action potential arrives. Function The function of the active zone is to ensure that neurotransmitters can be reliably released in a specific location of a neuron and only released when the neuron fires an action potential. As an action potential propagates down an axon it reaches the axon terminal called the presynaptic bouton. In the presynaptic bouton, the action potential activates calcium channels (VDCCs) that cause a local influx of calcium. The increase in calcium is detected by proteins in the Document 4::: Multivesicular Release (MVR) is the phenomenon by which individual chemical synapses, forming the junction between neurons, is mediated by multiple releasable vesicles of neurotransmitter. 
In neuroscience, whether one or many vesicles are released per action potential depends on the synapse and has been shown to be more prevalent in humans. Examples In the mammalian brain, MVR has been shown to be common throughout the brain including in hippocampus and cerebellum. It has also been proposed and then refuted at the ribbon synapses formed between inner hair cell and spiral ganglion neurons. Recent evidence points to a possibility of MVR at neocortical connections of the somatosensory cortex as well as in other brain regions (for a review see). The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of terminal releases neurotransmitters at a synapse? A. acetylcholine, B. secretion C. chloride D. axon Answer:
sciq-9907
multiple_choice
What is the touch response in plants called?
[ "pollination", "phototropism", "thigmotropism", "sensitivity" ]
C
Relavent Documents: Document 0::: A mechanoreceptor is a sensory organ or cell that responds to mechanical stimulation such as touch, pressure, vibration, and sound from both the internal and external environment. Mechanoreceptors are well-documented in animals and are integrated into the nervous system as sensory neurons. While plants do not have nerves or a nervous system like animals, they also contain mechanoreceptors that perform a similar function. Mechanoreceptors detect mechanical stimulus originating from within the plant (intrinsic) and from the surrounding environment (extrinsic). The ability to sense vibrations, touch, or other disturbance is an adaptive response to herbivory and attack so that the plant can appropriately defend itself against harm. Mechanoreceptors can be organized into three levels: molecular, cellular, and organ-level. Mechanism of sensation Signal There is a growing body of knowledge about how mechanoreceptors in plant cells receive information about a mechanical stimulation, but there are many gaps in the current understanding. While a complete model cannot yet be formed, we do know much of what is happening at the plasma membrane. The plasma membrane is full of membrane proteins and ion channels. One type of ion channel are Mechanosensitive (MS) ion channels. MS channels are different from other membrane proteins in that their primary gating stimulus is force, such that they open conduits for ions to pass through the membrane in response to mechanical stimuli. This system allows physical force to create an ion flux, which then results in signal integration and response (as detailed below). MS channels are hypothesized to be the working mechanism in the perception of gravity, vibration, touch, hyper-osmotic and hypo-osmotic stress, pathogenic invasion, and interaction with commensal microbes. MS channels have been discovered across a diverse array of genera as well as in different plant organs, like leaves and stems, and localize to diverse cellular membranes. Document 1::: In plant biology, thigmotropism is a directional growth movement which occurs as a mechanosensory response to a touch stimulus. Thigmotropism is typically found in twining plants and tendrils, however plant biologists have also found thigmotropic responses in flowering plants and fungi. This behavior occurs due to unilateral growth inhibition. That is, the growth rate on the side of the stem which is being touched is slower than on the side opposite the touch. The resultant growth pattern is to attach and sometimes curl around the object which is touching the plant. However, flowering plants have also been observed to move or grow their sex organs toward a pollinator that lands on the flower, as in Portulaca grandiflora. Physiological factors Since growth is a complex developmental procedure, there are indeed many requirements (both biotic and abiotic) that are needed for both touch perception and a thigmotropic response to occur. One of these is calcium. In a series of experiments in 1995 using the tendril Bryonia dioica, touch-sensing calcium channels were blocked using various antagonists. Responses to touch in treatment plants which received calcium channel inhibitors were diminished compared to control plants, indicating that calcium may be required for thigmotropism. 
Later in 2001, a membrane depolarization pathway was proposed in which calcium was involved: when a touch occurs, calcium channels open and calcium flows into the cell, shifting the electrochemical potential across the membrane. This triggers voltage-gated chloride and potassium channels to open and leads to an action potential that signals the perception of touch. The plant growth hormone auxin has also been observed to be involved in thigmotropic behavior in plants, but its role is not well understood. Instead of asymmetric auxin distribution influencing other tropisms, it has been shown that a unidirectional thigmotropic response can occur even with a symmetric distribution of auxin. It has Document 2::: Plant Physiology is a monthly peer-reviewed scientific journal that covers research on physiology, biochemistry, cellular and molecular biology, genetics, biophysics, and environmental biology of plants. The journal has been published since 1926 by the American Society of Plant Biologists. The current editor-in-chief is Yunde Zhao (University of California San Diego. According to the Journal Citation Reports, the journal has a 2021 impact factor of 8.005. Document 3::: Plant perception is the ability of plants to sense and respond to the environment by adjusting their morphology and physiology. Botanical research has revealed that plants are capable of reacting to a broad range of stimuli, including chemicals, gravity, light, moisture, infections, temperature, oxygen and carbon dioxide concentrations, parasite infestation, disease, physical disruption, sound, and touch. The scientific study of plant perception is informed by numerous disciplines, such as plant physiology, ecology, and molecular biology. Aspects of perception Light Many plant organs contain photoreceptors (phototropins, cryptochromes, and phytochromes), each of which reacts very specifically to certain wavelengths of light. These light sensors tell the plant if it is day or night, how long the day is, how much light is available, and where the light is coming from. Shoots generally grow towards light, while roots grow away from it, responses known as phototropism and skototropism, respectively. They are brought about by light-sensitive pigments like phototropins and phytochromes and the plant hormone auxin. Many plants exhibit certain behaviors at specific times of the day; for example, flowers that open only in the mornings. Plants keep track of the time of day with a circadian clock. This internal clock is synchronized with solar time every day using sunlight, temperature, and other cues, similar to the biological clocks present in other organisms. The internal clock coupled with the ability to perceive light also allows plants to measure the time of the day and so determine the season of the year. This is how many plants know when to flower (see photoperiodism). The seeds of many plants sprout only after they are exposed to light. This response is carried out by phytochrome signalling. Plants are also able to sense the quality of light and respond appropriately. For example, in low light conditions, plants produce more photosynthetic pigments. If the light i Document 4::: Phytomorphology is the study of the physical form and external structure of plants. This is usually considered distinct from plant anatomy, which is the study of the internal structure of plants, especially at the microscopic level. Plant morphology is useful in the visual identification of plants. 
Recent studies in molecular biology started to investigate the molecular processes involved in determining the conservation and diversification of plant morphologies. In these studies transcriptome conservation patterns were found to mark crucial ontogenetic transitions during the plant life cycle which may result in evolutionary constraints limiting diversification. Scope Plant morphology "represents a study of the development, form, and structure of plants, and, by implication, an attempt to interpret these on the basis of similarity of plan and origin". There are four major areas of investigation in plant morphology, and each overlaps with another field of the biological sciences. First of all, morphology is comparative, meaning that the morphologist examines structures in many different plants of the same or different species, then draws comparisons and formulates ideas about similarities. When structures in different species are believed to exist and develop as a result of common, inherited genetic pathways, those structures are termed homologous. For example, the leaves of pine, oak, and cabbage all look very different, but share certain basic structures and arrangement of parts. The homology of leaves is an easy conclusion to make. The plant morphologist goes further, and discovers that the spines of cactus also share the same basic structure and development as leaves in other plants, and therefore cactus spines are homologous to leaves as well. This aspect of plant morphology overlaps with the study of plant evolution and paleobotany. Secondly, plant morphology observes both the vegetative (somatic) structures of plants, as well as the reproductive str The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the touch response in plants called? A. pollenation B. phototropism C. thigmotropism D. sensitivity Answer:
sciq-9836
multiple_choice
What event occurs between the two solstices?
[ "equinox", "Christmas", "Leap Year", "summer" ]
A
Relavent Documents: Document 0::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 1::: The equation of time describes the discrepancy between two kinds of solar time. The word equation is used in the medieval sense of "reconciliation of a difference". The two times that differ are the apparent solar time, which directly tracks the diurnal motion of the Sun, and mean solar time, which tracks a theoretical mean Sun with uniform motion along the celestial equator. Apparent solar time can be obtained by measurement of the current position (hour angle) of the Sun, as indicated (with limited accuracy) by a sundial. Mean solar time, for the same place, would be the time indicated by a steady clock set so that over the year its differences from apparent solar time would have a mean of zero. The equation of time is the east or west component of the analemma, a curve representing the angular offset of the Sun from its mean position on the celestial sphere as viewed from Earth. The equation of time values for each day of the year, compiled by astronomical observatories, were widely listed in almanacs and ephemerides. The equation of time can be approximated by a sum of two sine waves (see explanation below): [minutes] In plain text format: EoT =  -7.659sin(6.24004077 + 0.01720197(365*(y-2000) + d)) + 9.863sin( 2 (6.24004077 + 0.01720197 (365*(y-2000) + d)) + 3.5932 ) [minutes] A less precise but more compact and simpler form is: EoT = 9.87 sin(2B°) - 7.67 sin(B° + 78.7°) where B = 360° (d - 81) / 365. 
With arguments expressed in radians: EoT = 9.87 sin(2B * π /180) - 7.67 sin((B° + 78.7°) * π /180) d represents the number of days since January 1 of the current year. The concept Document 2::: Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education. Structure A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior. Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false multiple choice questions. But during the 2000s, educators found that SBAs would be superior. Document 3::: Solar rotation varies with latitude. The Sun is not a solid body, but is composed of a gaseous plasma. Different latitudes rotate at different periods. The source of this differential rotation is an area of current research in solar astronomy. The rate of surface rotation is observed to be the fastest at the equator (latitude ) and to decrease as latitude increases. The solar rotation period is 24.47 days at the equator and almost 38 days at the poles. The average rotation is 28 days. Current Carrington Rotation: CR [] Surface rotation as an equation The differential rotation rate is usually described by the equation: where is the angular velocity in degrees per day, is the solar latitude, A is angular velocity at the equator, and B, C are constants controlling the decrease in velocity with increasing latitude. The values of A, B, and C differ depending on the techniques used to make the measurement, as well as the time period studied. A current set of accepted average values is: A= 14.713 ± 0.0491 °/day B= −2.396 ± 0.188 °/day C= −1.787 ± 0.253 °/day Sidereal rotation At the equator, the solar rotation period is 24.47 days. This is called the sidereal rotation period, and should not be confused with the synodic rotation period of 26.24 days, which is the time for a fixed feature on the Sun to rotate to the same apparent position as viewed from Earth (the earth's orbital rotation is in the same direction as the sun's rotation). The synodic period is longer because the Sun must rotate for a sidereal period plus an extra amount due to the orbital motion of Earth around the Sun. Note that astrophysical literature does not typically use the equatorial rotation period, but instead often uses the definition of a Carrington rotation: a synodic rotation period of 27.2753 days or a sidereal period of 25.38 days. This chosen period roughly corresponds to the prograde rotation at a latitude of 26° north or south, which is consistent with the typical latitude of sunspot Document 4::: Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. 
It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers. There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM. Terminology History Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What event occurs between the two solstices? A. equinox B. Christmas C. Leap Year D. summer Answer:
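The compact equation-of-time approximation quoted in the retrieved passage above, EoT ≈ 9.87 sin(2B) - 7.67 sin(B + 78.7°) minutes with B = 360°(d - 81)/365, can be evaluated directly. The sketch below is purely illustrative and not part of the dataset record; the function name, the radian conversion, and the sample day-of-year are assumptions added on top of the formula as quoted, with d counted from January 1.

```python
import math

def equation_of_time_minutes(d: int) -> float:
    """Approximate equation of time, in minutes, for day-of-year d (d = 1 on January 1),
    using EoT = 9.87*sin(2B) - 7.67*sin(B + 78.7 deg) with B = 360*(d - 81)/365 degrees."""
    b = math.radians(360.0 * (d - 81) / 365.0)  # B, converted from degrees to radians
    return 9.87 * math.sin(2.0 * b) - 7.67 * math.sin(b + math.radians(78.7))

# Early November (d around 307) gives roughly +16 minutes: apparent solar time
# runs about a quarter of an hour ahead of mean solar time.
print(round(equation_of_time_minutes(307), 1))
```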
sciq-3474
multiple_choice
Gap genes are defined by the effect of what in that gene?
[ "modification", "infection", "mutation", "radiation" ]
C
Relavent Documents: Document 0::: Major gene is a gene with pronounced phenotype expression, in contrast to a modifier gene. Major gene characterizes common expression of oligogenic series, i.e. a small number of genes that determine the same trait. Major genes control the discontinuous or qualitative characters in contrast of minor genes or polygenes with individually small effects. Major genes segregate and may be easily subject to mendelian analysis. The gene categorization into major and minor determinants is more or less arbitrary. Both of the two types are in all probability only end points in a more or less continuous series of gene action and gene interactions. The term major gene was introduced into the science of inheritance by Keneth Mather (1941). See also Gene interaction Minor gene Gene Document 1::: Genetics (from Ancient Greek , “genite” and that from , “origin”), a discipline of biology, is the science of heredity and variation in living organisms. Articles (arranged alphabetically) related to genetics include: # A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Document 2::: The Generalist Genes hypothesis of learning abilities and disabilities was originally coined in an article by Plomin & Kovas (2005). The Generalist Genes hypothesis suggests that most genes associated with common learning disabilities and abilities are generalist in three ways. Firstly, the same genes that influence common learning abilities (e.g., high reading aptitude) are also responsible for common learning disabilities (e.g., reading disability): they are strongly genetically correlated. Secondly, many of the genes associated with one aspect of a learning disability (e.g., vocabulary problems) also influence other aspects of this learning disability (e.g., grammar problems). Thirdly, genes that influence one learning disability (e.g., reading disability) are largely the same as those that influence other learning disabilities (e.g., mathematics disability). The Generalist Genes hypothesis has important implications for education, cognitive sciences and molecular genetics. Document 3::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. 
Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 4::: The Personal Genetics Education Project (pgEd) aims to engage and inform a worldwide audience about the benefits of knowing one's genome as well as the ethical, legal and social issues (ELSI) and dimensions of personal genetics. pgEd was founded in 2006, is housed in the Department of Genetics at Harvard Medical School and is directed by Ting Wu, a professor in that department. It employs a variety of strategies for reaching general audiences, including generating online curricular materials, leading discussions in classrooms, workshops, and conferences, developing a mobile educational game (Map-Ed), holding an annual conference geared toward accelerating awareness (GETed), and working with the world of entertainment to improve accuracy and outreach. Online curricular materials and professional development for teachers pgEd develops tools for teachers and general audiences that examine the potential benefits and risks of personalized genome analysis. These include freely accessible, interactive lesson plans that tackle issues such as genetic testing of minors, reproductive genetics, complex human traits and genetics, and the history of eugenics. pgEd also engages educators at conferences as well as organizes professional development workshops. All of pgEd's materials are freely available online. Map-Ed, a mobile quiz In 2013, pgEd created a mobile educational quiz called Map-Ed. Map-Ed invites players to work their way through five questions that address key concepts in genetics and then pin themselves on a world map. Within weeks of its launch, Map-Ed gained over 1,000 pins around the world, spanning across all 7 continents. Translations and new maps linked to questions on topics broadly related to genetics are in development. GETed conference pgEd hosts the annual GETed conference, a meeting that brings together experts from across the United States and beyond in education, research, health, entertainment, and policy to develop strategies for acceleratin The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Gap genes are defined by the effect of what in that gene? A. modification B. infection C. mutation D. radiation Answer:
sciq-1360
multiple_choice
Migration and hibernation are examples of behaviors that occur on what temporal basis?
[ "nocturnal", "annual", "daily", "weekly" ]
B
Relavent Documents: Document 0::: Diurnality is a form of plant and animal behavior characterized by activity during daytime, with a period of sleeping or other inactivity at night. The common adjective used for daytime activity is "diurnal". The timing of activity by an animal depends on a variety of environmental factors such as the temperature, the ability to gather food by sight, the risk of predation, and the time of year. Diurnality is a cycle of activity within a 24-hour period; cyclic activities called circadian rhythms are endogenous cycles not dependent on external cues or environmental factors except for a zeitgeber. Animals active during twilight are crepuscular, those active during the night are nocturnal and animals active at sporadic times during both night and day are cathemeral. Plants that open their flowers during the daytime are described as diurnal, while those that bloom during nighttime are nocturnal. The timing of flower opening is often related to the time at which preferred pollinators are foraging. For example, sunflowers open during the day to attract bees, whereas the night-blooming cereus opens at night to attract large sphinx moths. In animals Many types of animals are classified as being diurnal, meaning they are active during the day time and inactive or have periods of rest during the night time. Commonly classified diurnal animals include mammals, birds, and reptiles. Most primates are diurnal, including humans. Scientifically classifying diurnality within animals can be a challenge, apart from the obvious increased activity levels during the day time light. Evolution of diurnality Initially, most animals were diurnal, but adaptations that allowed some animals to become nocturnal is what helped contribute to the success of many, especially mammals. This evolutionary movement to nocturnality allowed them to better avoid predators and gain resources with less competition from other animals. This did come with some adaptations that mammals live with today. Visi Document 1::: Cathemerality, sometimes called "metaturnality", is an organismal activity pattern of irregular intervals during the day or night in which food is acquired, socializing with other organisms occurs, and any other activities necessary for livelihood are undertaken. This activity differs from the generally monophasic pattern (sleeping once per day) of nocturnal and diurnal species as it is polyphasic (sleeping 4-6 times per day) and is approximately evenly distributed throughout the 24-hour cycle. Many animals do not fit the traditional definitions of being strictly nocturnal, diurnal, or crepuscular, often driven by factors that include the availability of food, predation pressure, and variable ambient temperature. Although cathemerality is not as widely observed in individual species as diurnality or nocturnality, this activity pattern is seen across the mammal taxa, such as in lions, coyotes, and lemurs. Cathemeral behaviour can also vary on a seasonal basis over an annual period by exhibiting periods of predominantly nocturnal behaviour and exhibiting periods of predominantly diurnal behaviour. For example, seasonal cathemerality has been described for the mongoose lemur (Eulemur mongoz) as activity that shifts from being predominantly diurnal to being predominantly nocturnal over a yearly cycle, but the common brown lemurs (Eulemur fulvus) have been observed as seasonally shifting from diurnal activity to cathemerality. 
As research on cathemerality continues, many factors that have been identified as influencing whether or why an animal behaves cathemerally. Such factors include resource variation, food quality, photoperiodism, nocturnal luminosity, temperature, predator avoidance, and energetic constraints. Etymology In the original manuscript for his article "Patterns of activity in the Mayotte lemur, Lemur fulvus mayottensis," Ian Tattersall introduced the term cathemerality to describe a pattern of observed activity that was neither diurnal nor nocturn Document 2::: Animal migration is the relatively long-distance movement of individual animals, usually on a seasonal basis. It is the most common form of migration in ecology. It is found in all major animal groups, including birds, mammals, fish, reptiles, amphibians, insects, and crustaceans. The cause of migration may be local climate, local availability of food, the season of the year or for mating. To be counted as a true migration, and not just a local dispersal or irruption, the movement of the animals should be an annual or seasonal occurrence, or a major habitat change as part of their life. An annual event could include Northern Hemisphere birds migrating south for the winter, or wildebeest migrating annually for seasonal grazing. A major habitat change could include young Atlantic salmon or sea lamprey leaving the river of their birth when they have reached a few inches in size. Some traditional forms of human migration fit this pattern. Migrations can be studied using traditional identification tags such as bird rings, or tracked directly with electronic tracking devices. Before animal migration was understood, folklore explanations were formulated for the appearance and disappearance of some species, such as that barnacle geese grew from goose barnacles. Overview Concepts Migration can take very different forms in different species, and has a variety of causes. As such, there is no simple accepted definition of migration. One of the most commonly used definitions, proposed by the zoologist J. S. Kennedy is Migration encompasses four related concepts: persistent straight movement; relocation of an individual on a greater scale (in both space and time) than its normal daily activities; seasonal to-and-fro movement of a population between two areas; and movement leading to the redistribution of individuals within a population. Migration can be either obligate, meaning individuals must migrate, or facultative, meaning individuals can "choose" to migrate or not. Wi Document 3::: Bird migration is the regular seasonal movement, often north and south, along a flyway, between breeding and wintering grounds. Many species of bird migrate. Migration carries high costs in predation and mortality, including from hunting by humans, and is driven primarily by the availability of food. It occurs mainly in the northern hemisphere, where birds are funnelled onto specific routes by natural barriers such as the Mediterranean Sea or the Caribbean Sea. Migration of species such as storks, turtle doves, and swallows was recorded as many as 3,000 years ago by Ancient Greek authors, including Homer and Aristotle, and in the Book of Job. More recently, Johannes Leche began recording dates of arrivals of spring migrants in Finland in 1749, and modern scientific studies have used techniques including bird ringing and satellite tracking to trace migrants. 
Threats to migratory birds have grown with habitat destruction, especially of stopover and wintering sites, as well as structures such as power lines and wind farms. The Arctic tern holds the long-distance migration record for birds, travelling between Arctic breeding grounds and the Antarctic each year. Some species of tubenoses (Procellariiformes) such as albatrosses circle the Earth, flying over the southern oceans, while others such as Manx shearwaters migrate between their northern breeding grounds and the southern ocean. Shorter migrations are common, while longer ones are not. The shorter migrations include altitudinal migrations on mountains such as the Andes and Himalayas. The timing of migration seems to be controlled primarily by changes in day length. Migrating birds navigate using celestial cues from the Sun and stars, the Earth's magnetic field, and mental maps. Historical views In the Pacific, traditional land-finding techniques used by Micronesians and Polynesians suggest that bird migration was observed and interpreted for more than 3,000 years. In Samoan tradition, for example, Tagaloa sent Document 4::: Migration, in ecology, is the large-scale movement of members of a species to a different environment. Migration is a natural behavior and component of the life cycle of many species of mobile organisms, not limited to animals, though animal migration is the best known type. Migration is often cyclical, frequently occurring on a seasonal basis, and in some cases on a daily basis. Species migrate to take advantage of more favorable conditions with respect to food availability, safety from predation, mating opportunity, or other environmental factors. Migration is most commonly seen as animal migration, the physical movement by animals from one area to another. That includes bird, fish, and insect migration. However, plants can be said to migrate, as seed dispersal enables plants to grow in new areas, under environmental constraints such as temperature and rainfall, resulting in changes such as forest migration. Mechanisms While members of some species learn a migratory route on their first journey with older members of their group, other species genetically pass on information regarding their migratory paths. Despite many differences in organisms’ migratory cues and behaviors, “considerable similarities appear to exist in the cues involved in the different phases of migration.” Migratory organisms use environmental cues like photoperiod and weather conditions as well as internal cues like hormone levels to determine when it is time to begin a migration. Migratory species use senses such as magnetoreception or olfaction to orient themselves or navigate their route, respectively. Factors The factors that determine migration methods are variable due to the inconsistency of major seasonal changes and events. When an organism migrates from one location to another, its energy use and rate of migration are directly related to each other and to the safety of the organism. If an ecological barrier presents itself along a migrant's route, the migrant can either choose t The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Migration and hibernation are examples of behaviors that occur on what temporal basis? A. nocturnal B. annual C. daily D. weekly Answer:
sciq-1678
multiple_choice
What takes both the shape and the volume of their container?
[ "tissues", "fluids", "solids", "gases" ]
D
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence. Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism. Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry. 
See also Cell (biology) Cell biology Biomolecule Organelle Tissue (biology) External links https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm Document 2::: In geometry, a spherical shell is a generalization of an annulus to three dimensions. It is the region of a ball between two concentric spheres of differing radii. Volume The volume of a spherical shell is the difference between the enclosed volume of the outer sphere and the enclosed volume of the inner sphere: where is the radius of the inner sphere and is the radius of the outer sphere. Approximation An approximation for the volume of a thin spherical shell is the surface area of the inner sphere multiplied by the thickness of the shell: when is very small compared to (). The total surface area of the spherical shell is . See also Spherical pressure vessel Ball Solid torus Bubble Sphere Document 3::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 4::: A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a nearly constant volume independent of pressure. It is one of the four fundamental states of matter (the others being solid, gas, and plasma), and is the only state with a definite volume but no fixed shape. The density of a liquid is usually close to that of a solid, and much higher than that of a gas. Therefore, liquid and solid are both termed condensed matter. On the other hand, as liquids and gases share the ability to flow, they are both called fluids. 
A liquid is made up of tiny vibrating particles of matter, such as atoms, held together by intermolecular bonds. Like a gas, a liquid is able to flow and take the shape of a container. Unlike a gas, a liquid maintains a fairly constant density and does not disperse to fill every space of a container. Although liquid water is abundant on Earth, this state of matter is actually the least common in the known universe, because liquids require a relatively narrow temperature/pressure range to exist. Most known matter in the universe is either gas (as interstellar clouds) or plasma (as stars). Introduction Liquid is one of the four primary states of matter, with the others being solid, gas and plasma. A liquid is a fluid. Unlike a solid, the molecules in a liquid have a much greater freedom to move. The forces that bind the molecules together in a solid are only temporary in a liquid, allowing a liquid to flow while a solid remains rigid. A liquid, like a gas, displays the properties of a fluid. A liquid can flow, assume the shape of a container, and, if placed in a sealed container, will distribute applied pressure evenly to every surface in the container. If liquid is placed in a bag, it can be squeezed into any shape. Unlike a gas, a liquid is nearly incompressible, meaning that it occupies nearly a constant volume over a wide range of pressures; it does not generally expand to fill available space in a containe The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What takes both the shape and the volume of their container? A. tissues B. fluids C. solids D. gases Answer:
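As a quick numerical cross-check of the spherical-shell formulas restored in Document 2 above, here is a short Python sketch (our own illustration, not part of the source passages; the helper names are invented) comparing the exact shell volume with the thin-shell approximation:

import math

def shell_volume(r_inner, r_outer):
    # Exact volume between two concentric spheres: V = (4/3) * pi * (R^3 - r^3)
    return (4.0 / 3.0) * math.pi * (r_outer**3 - r_inner**3)

def thin_shell_volume(r_inner, thickness):
    # Thin-shell approximation: inner surface area (4 * pi * r^2) times thickness t, valid when t << r
    return 4.0 * math.pi * r_inner**2 * thickness

r, t = 10.0, 0.01
print(shell_volume(r, r + t))   # exact volume, about 12.579
print(thin_shell_volume(r, t))  # approximation, about 12.566

The two values agree to within roughly 0.1% here, consistent with the $t \ll r$ condition stated in the passage.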
scienceQA-435
multiple_choice
What do these two changes have in common? plants making food from sunlight, air, and water; rust forming on a metal gate
[ "Both are caused by cooling.", "Both are caused by heating.", "Both are only physical changes.", "Both are chemical changes." ]
D
Step 1: Think about each change. Plants making food is a chemical change. Plants use energy from sunlight to change air and water into food. The food is sugar. Sugar is a different type of matter than air or water. Rust forming on a metal gate is a chemical change. As the gate rusts, the metal turns into a different type of matter called rust. Rust is reddish-brown and falls apart easily. Step 2: Look at each answer choice. "Both are only physical changes": both changes are chemical changes; they are not physical changes. "Both are chemical changes": both changes are chemical changes; the type of matter before and after each change is different. "Both are caused by heating": neither change is caused by heating. "Both are caused by cooling": neither change is caused by cooling.
Relavent Documents: Document 0::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation. Magnetism Ferro-magnetic materials can become magnetic. The process is reve Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm. For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of sorption process the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product. The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system. BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials high moisture capacity at high relative humidity. Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the Document 3::: The energy systems language, also referred to as energese, or energy circuit language, or generic systems symbols, is a modelling language used for composing energy flow diagrams in the field of systems ecology. It was developed by Howard T. Odum and colleagues in the 1950s during studies of the tropical forests funded by the United States Atomic Energy Commission. Design intent The design intent of the energy systems language was to facilitate the generic depiction of energy flows through any scale system while encompassing the laws of physics, and in particular, the laws of thermodynamics (see energy transformation for an example). In particular H.T. 
Odum aimed to produce a language which could facilitate the intellectual analysis, engineering synthesis and management of global systems such as the geobiosphere, and its many subsystems. Within this aim, H.T. Odum had a strong concern that many abstract mathematical models of such systems were not thermodynamically valid. Hence he used analog computers to make system models due to their intrinsic value; that is, the electronic circuits are of value for modelling natural systems which are assumed to obey the laws of energy flow, because, in themselves the circuits, like natural systems, also obey the known laws of energy flow, where the energy form is electrical. However Odum was interested not only in the electronic circuits themselves, but also in how they might be used as formal analogies for modeling other systems which also had energy flowing through them. As a result, Odum did not restrict his inquiry to the analysis and synthesis of any one system in isolation. The discipline that is most often associated with this kind of approach, together with the use of the energy systems language is known as systems ecology. General characteristics When applying the electronic circuits (and schematics) to modeling ecological and economic systems, Odum believed that generic categories, or characteristic modules, could Document 4::: Ecophenotypic variation ("ecophenotype") refers to phenotypical variation as a function of life station. In wide-ranging species, the contributions of heredity and environment are not always certain, but their interplay can sometimes be determined by experiment. Plants Plants display the most obvious examples of ecophenotypic variation. One example are trees growing in the woods developing long straight trunks, with branching crowns high in the canopy, while the same species growing alone in the open develops a spreading form, branching much lower to the ground. Genotypes often have much flexibility in the modification and expression of phenotypes; in many organisms these phenotypes are very different under varying environmental conditions. The plant Hieracium umbellatum is found growing in two different habitats in Sweden. One habitat is rocky sea-side cliffs, where the plants are bushy with broad leaves and expanded inflorescences; the other is among sand dunes where the plants grow prostrate with narrow leaves and compact inflorescences. These habitats alternate along the coast of Sweden and the habitat that the seeds of H. umbellatum land in determines the phenotype that grows. Invasive plants such as the honeysuckle can thrive by altering their morphology in response to changes in the environment, which gives them a competitive advantage. Another example of a plants phenotypic reaction and adaptation with its environment is how Thlaspi caerulescens can absorb the metals in the soil to use to its advantage in defending against harmful microbes and bacteria in its leaves. The more immediate responses shown by vascular plants to their environment, for instance a vine's ability to conform to the wall or tree upon which it grows, are not usually considered ecophenotypic, even though the mechanisms may be related. Animals Since animals are far less plastic than plants, ecophenotypic variation is noteworthy. When encountered, it can cause confusion in identification The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do these two changes have in common? 
plants making food from sunlight, air, and water; rust forming on a metal gate A. Both are caused by cooling. B. Both are caused by heating. C. Both are only physical changes. D. Both are chemical changes. Answer:
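For reference, the BET relation mentioned in Document 2 of the passage above is usually written in its linearized form (a standard result quoted here for context; the symbols below are not defined in the source excerpt):

\[
\frac{1}{v\left[(p_0/p) - 1\right]} \;=\; \frac{c-1}{v_m c}\,\frac{p}{p_0} \;+\; \frac{1}{v_m c},
\]

where $v$ is the adsorbed quantity, $v_m$ the monolayer capacity, $p/p_0$ the relative pressure (for moisture sorption, the water activity $a_w$), and $c$ the BET constant.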
sciq-343
multiple_choice
Which part of the ear amplifies the sound waves?
[ "ear canal", "eardrum", "middle ear", "cochlea" ]
C
Relavent Documents: Document 0::: Audiology (from Latin , "to hear"; and from Greek , -logia) is a branch of science that studies hearing, balance, and related disorders. Audiologists treat those with hearing loss and proactively prevent related damage. By employing various testing strategies (e.g. behavioral hearing tests, otoacoustic emission measurements, and electrophysiologic tests), audiologists aim to determine whether someone has normal sensitivity to sounds. If hearing loss is identified, audiologists determine which portions of hearing (high, middle, or low frequencies) are affected, to what degree (severity of loss), and where the lesion causing the hearing loss is found (outer ear, middle ear, inner ear, auditory nerve and/or central nervous system). If an audiologist determines that a hearing loss or vestibular abnormality is present, they will provide recommendations for interventions or rehabilitation (e.g. hearing aids, cochlear implants, appropriate medical referrals). In addition to diagnosing audiologic and vestibular pathologies, audiologists can also specialize in rehabilitation of tinnitus, hyperacusis, misophonia, auditory processing disorders, cochlear implant use and/or hearing aid use. Audiologists can provide hearing health care from birth to end-of-life. Audiologist An audiologist is a health care provider specializing in identifying, diagnosing, treating, and monitoring disorders of the auditory and vestibular systems. Audiologists are trained to diagnose, manage and/or treat hearing, tinnitus, or balance problems. They dispense, manage, and rehabilitate hearing aids and assess candidacy for and map hearing implants, such as cochlear implants, middle ear implants and bone conduction implants. They counsel families through a new diagnosis of hearing loss in infants, and help teach coping and compensation skills to late-deafened adults. They also help design and implement personal and industrial hearing safety programs, newborn hearing screening programs, school hearing Document 1::: The middle ear is the portion of the ear medial to the eardrum, and distal to the oval window of the cochlea (of the inner ear). The mammalian middle ear contains three ossicles (malleus, incus, and stapes), which transfer the vibrations of the eardrum into waves in the fluid and membranes of the inner ear. The hollow space of the middle ear is also known as the tympanic cavity and is surrounded by the tympanic part of the temporal bone. The auditory tube (also known as the Eustachian tube or the pharyngotympanic tube) joins the tympanic cavity with the nasal cavity (nasopharynx), allowing pressure to equalize between the middle ear and throat. The primary function of the middle ear is to efficiently transfer acoustic energy from compression waves in air to fluid–membrane waves within the cochlea. Structure Ossicles The middle ear contains three tiny bones known as the ossicles: malleus, incus, and stapes. The ossicles were given their Latin names for their distinctive shapes; they are also referred to as the hammer, anvil, and stirrup, respectively. The ossicles directly couple sound energy from the eardrum to the oval window of the cochlea. While the stapes is present in all tetrapods, the malleus and incus evolved from lower and upper jaw bones present in reptiles. The ossicles are classically supposed to mechanically convert the vibrations of the eardrum into amplified pressure waves in the fluid of the cochlea (or inner ear), with a lever arm factor of 1.3. 
Since the effective vibratory area of the eardrum is about 14 fold larger than that of the oval window, the sound pressure is concentrated, leading to a pressure gain of at least 18.1. The eardrum is merged to the malleus, which connects to the incus, which in turn connects to the stapes. Vibrations of the stapes footplate introduce pressure waves in the inner ear. There is a steadily increasing body of evidence that shows that the lever arm ratio is actually variable, depending on frequency. Betwe Document 2::: A middle ear implant is a hearing device that is surgically implanted into the middle ear. They help people with conductive, sensorineural or mixed hearing loss to hear.   Middle ear implants work by improving the conduction of sound vibrations from the middle ear to the inner ear. There are two types of middle ear devices: active and passive. Active middle ear implants (AMEI) consist of an external audio processor and an internal implant, which actively vibrates the structures of the middle ear. Passive middle ear implants (PMEIs) are sometimes known as ossicular replacement prostheses, TORPs or PORPs. They replace damaged or missing parts of the middle ear, creating a bridge between the outer ear and the inner ear, so that sound vibrations can be conducted through the middle ear and on to the cochlea. Unlike AMEIs, PMEIs contain no electronics and are not powered by an external source. PMEIs are the usual first-line surgical treatment for conductive hearing loss, due to their lack of external components and cost-effectiveness. However, each patient is assessed individually as to whether an AMEI or PMEI would bring more benefit. This is especially true if the patient has already had several surgeries with PMEIs. Active middle ear implant Parts An active middle ear implant (AMEI) has two parts: an internal implant and an external audio processor. The microphone of the audio processor picks up sounds from the environment. The processor then converts these acoustic signals into digital signals and sends them to the implant through the skin. The implant sends the signals to the Floating Mass Transducer (FMT): a small vibratory part that is surgically fixed either on one of the three ossicles or against the round window of the cochlea. The FMT vibrates and sends sound vibrations to the cochlea. The cochlea converts these vibrations into nerve signals and sends them to the brain, where they are interpreted as sound. Indications AMEIs are intended for patients wit Document 3::: The endocochlear potential (EP; also called endolymphatic potential) is the positive voltage of 80-100mV seen in the cochlear endolymphatic spaces. Within the cochlea the EP varies in the magnitude all along its length. When a sound is presented, the endocochlear potential changes either positive or negative in the endolymph, depending on the stimulus. The change in the potential is called the summating potential. With the movement of the basilar membrane, a shear force is created and a small potential is generated due to a difference in potential between the endolymph (scala media, +80 mV) and the perilymph (vestibular and tympanic ducts, 0 mV). EP is highest in the basal turn of the cochlea (95 mV in mice) and decreases in the magnitude towards the apex (87 mV). In saccule and utricle, endolymphatic potential is about +9 mV and +3mV in the semicircular canal. EP is highly dependent on the metabolism and ionic transport. 
An acoustic stimulus produces a simultaneous change in conductance at the membrane of the receptor cell. Because there is a steep gradient (150 mV), changes in membrane conductance are accompanied by rapid influx and efflux of ions which in turn produce the receptor potential. This is known as the Battery Hypothesis. The receptor potential for each hair cell causes a release of neurotransmitter at its basal pole, which elicits excitation of the afferent nerve fibres. Anatomy Document 4::: Earwax, also known by the medical term cerumen, is a waxy substance secreted in the ear canal of humans and other mammals. Earwax can be many colors, including brown, orange, red, yellowish, and gray. Earwax protects the skin of the human ear canal, assists in cleaning and lubrication, and provides protection against bacteria, fungi, particulate matter, and water. Major components of earwax include cerumen, produced by a type of modified sweat gland, and sebum, an oily substance. Both components are made by glands located in the outer ear canal. The chemical composition of earwax includes long chain fatty acids, both saturated and unsaturated, alcohols, squalene, and cholesterol. Earwax also contains dead skin cells and hair. Excess or compacted cerumen is the buildup of ear wax causing a blockage in the ear canal and it can press against the eardrum or block the outside ear canal or hearing aids, potentially causing hearing loss. Physiology Cerumen is produced in the cartilaginous outer third portion of the ear canal. It is a mixture of secretions from sebaceous glands and less-viscous ones from modified apocrine sweat glands. The primary components of both wet and dry earwax are shed layers of skin, with, on average, 60% of the earwax consisting of keratin, 12–20% saturated and unsaturated long-chain fatty acids, alcohols, squalene and 6–9% cholesterol. Wet or dry There are two genetically-determined types of earwax: the wet type, which is dominant, and the dry type, which is recessive. This distinction is caused by a single base change in the "ATP-binding cassette C11 gene". Dry-type individuals are homozygous for adenine (AA) whereas wet-type requires at least one guanine (AG or GG). Dry earwax is gray or tan and brittle, and is about 20% lipid. It has a smaller concentration of lipid and pigment granules than wet earwax. Wet earwax is light brown or dark brown and has a viscous and sticky consistency, and is about 50% lipid. Wet-type earwax is associated The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which part of the ear amplifies the sound waves? A. ear canal B. eardrum C. middle ear D. cochlea Answer:
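A quick consistency check on the figures quoted in the middle-ear passage (Document 1) above, using its ~14-fold area ratio and 1.3 lever arm factor (the decibel conversion is our addition, not from the source):

\[
G \;\approx\; \frac{A_{\text{eardrum}}}{A_{\text{oval window}}} \times \text{lever ratio} \;\approx\; 14 \times 1.3 \;\approx\; 18,
\]

which is in line with the "pressure gain of at least 18.1" stated in the passage; expressed in decibels this is roughly $20\log_{10}(18) \approx 25\ \text{dB}$.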
sciq-9599
multiple_choice
As the organism grows more sophisticated, what happens to the process of gene regulation?
[ "degenerates", "becomes fractured", "becomes simplified", "becomes more complex" ]
D
Relavent Documents: Document 0::: Developmental systems theory (DST) is an overarching theoretical perspective on biological development, heredity, and evolution. It emphasizes the shared contributions of genes, environment, and epigenetic factors on developmental processes. DST, unlike conventional scientific theories, is not directly used to help make predictions for testing experimental results; instead, it is seen as a collection of philosophical, psychological, and scientific models of development and evolution. As a whole, these models argue the inadequacy of the modern evolutionary synthesis on the roles of genes and natural selection as the principal explanation of living structures. Developmental systems theory embraces a large range of positions that expand biological explanations of organismal development and hold modern evolutionary theory as a misconception of the nature of living processes. Overview All versions of developmental systems theory espouse the view that: All biological processes (including both evolution and development) operate by continually assembling new structures. Each such structure transcends the structures from which it arose and has its own systematic characteristics, information, functions and laws. Conversely, each such structure is ultimately irreducible to any lower (or higher) level of structure, and can be described and explained only on its own terms. Furthermore, the major processes through which life as a whole operates, including evolution, heredity and the development of particular organisms, can only be accounted for by incorporating many more layers of structure and process than the conventional concepts of ‘gene’ and ‘environment’ normally allow for. In other words, although it does not claim that all structures are equal, development systems theory is fundamentally opposed to reductionism of all kinds. In short, developmental systems theory intends to formulate a perspective which does not presume the causal (or ontological) priority of any p Document 1::: A gene (or genetic) regulatory network (GRN) is a collection of molecular regulators that interact with each other and with other substances in the cell to govern the gene expression levels of mRNA and proteins which, in turn, determine the function of the cell. GRN also play a central role in morphogenesis, the creation of body structures, which in turn is central to evolutionary developmental biology (evo-devo). The regulator can be DNA, RNA, protein or any combination of two or more of these three that form a complex, such as a specific sequence of DNA and a transcription factor to activate that sequence. The interaction can be direct or indirect (through transcribed RNA or translated protein). In general, each mRNA molecule goes on to make a specific protein (or set of proteins). In some cases this protein will be structural, and will accumulate at the cell membrane or within the cell to give it particular structural properties. In other cases the protein will be an enzyme, i.e., a micro-machine that catalyses a certain reaction, such as the breakdown of a food source or toxin. Some proteins though serve only to activate other genes, and these are the transcription factors that are the main players in regulatory networks or cascades. By binding to the promoter region at the start of other genes they turn them on, initiating the production of another protein, and so on. Some transcription factors are inhibitory. 
In single-celled organisms, regulatory networks respond to the external environment, optimising the cell at a given time for survival in this environment. Thus a yeast cell, finding itself in a sugar solution, will turn on genes to make enzymes that process the sugar to alcohol. This process, which we associate with wine-making, is how the yeast cell makes its living, gaining energy to multiply, which under normal circumstances would enhance its survival prospects. In multicellular animals the same principle has been put in the service of gene cascades Document 2::: In biology, constructive development refers to the hypothesis that organisms shape their own developmental trajectory by constantly responding to, and causing, changes in both their internal state and their external environment. Constructive development can be contrasted with programmed development, the hypothesis that organisms develop according to a genetic program or blueprint. The constructivist perspective is found in philosophy, most notably developmental systems theory, and in the biological and social sciences, including developmental psychobiology and key themes of the extended evolutionary synthesis. Constructive development may be important to evolution because it enables organisms to produce functional phenotypes in response to genetic or environmental perturbation, and thereby contributes to adaptation and diversification. Key themes of constructive development Responsiveness and flexibility At any point in time, an organism's development depends on both the current state of the organism and the state of the environment. The developmental system, including the genome and its epigenetic regulation, responds flexibly to internal and external inputs. One example is condition-dependent gene expression, but regulatory systems also rely on physical properties of cells and tissues and exploratory behavior among microtubular, neural, muscular and vascular systems. Multiple modes of inheritance Organisms inherit (i.e., receive from their predecessors) a diverse set of developmental resources, including DNA, epigenetic marks, organelles, enzymes, hormones, antibodies, transcription factors, symbionts, socially transmitted knowledge and environmental conditions modified by parents. Developmental environments are constructed In the course of development, organisms help shape their internal and external environment, and in this way, influence their own development. Organisms also construct developmental environments for their offspring through various forms of Document 3::: Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology. 
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis. Subfields Evolution is the central unifying concept in biology. Biology can be divided into various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution Document 4::: Modularity refers to the ability of a system to organize discrete, individual units that can overall increase the efficiency of network activity and, in a biological sense, facilitates selective forces upon the network. Modularity is observed in all model systems, and can be studied at nearly every scale of biological organization, from molecular interactions all the way up to the whole organism. Evolution of Modularity The exact evolutionary origins of biological modularity has been debated since the 1990s. In the mid 1990s, Günter Wagner argued that modularity could have arisen and been maintained through the interaction of four evolutionary modes of action: [1] Selection for the rate of adaptation: If different complexes evolve at different rates, then those evolving more quickly reach fixation in a population faster than other complexes. Thus, common evolutionary rates could be forcing the genes for certain proteins to evolve together while preventing other genes from being co-opted unless there is a shift in evolutionary rate. [2] Constructional selection: When a gene exists in many duplicated copies, it may be maintained because of the many connections it has (also termed pleiotropy). There is evidence that this is so following whole genome duplication, or duplication at a single locus. However, the direct relationship that duplication processes have with modularity has yet to be directly examined. [3] Stabilizing selection: While seeming antithetical to forming novel modules, Wagner maintains that it is important to consider the effects of stabilizing selection as it may be "an important counter force against the evolution of modularity". Stabilizing selection, if ubiquitously spread across the network, could then be a "wall" that makes the formation of novel interactions more difficult and maintains previously established interactions. Against such strong positive selection, other evolutionary forces acting on the network must exist, with gaps of relaxed The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. As the organism grows more sophisticated, what happens to the process of gene regulation? A. degenerates B. becomes fractured C. becomes simplified D. becomes more complex Answer:
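To make the gene-regulatory-network idea in Document 1 above concrete, here is a minimal Boolean-network sketch in Python (an illustrative toy model of our own; the gene names and update rules are invented, not taken from the source):

# Toy Boolean gene regulatory network: each gene is ON (True) or OFF (False),
# and its next state is a simple logical function of the current states.
def step(state):
    return {
        # external input: whether sugar is present (held fixed in this toy run)
        "sugar_present": state["sugar_present"],
        # a hypothetical sugar-sensing transcription factor turns on when sugar is present
        "sugar_sensor": state["sugar_present"],
        # the metabolic enzyme is transcribed only when the sensor is active
        "enzyme": state["sugar_sensor"],
    }

state = {"sugar_present": True, "sugar_sensor": False, "enzyme": False}
for t in range(3):
    print(t, state)
    state = step(state)

Run over a few steps, the enzyme switches on one step after the sensor does, which is the cascade behaviour the passage describes for a yeast cell encountering sugar.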
sciq-1424
multiple_choice
What long molecules are composed of chains of units called monomers?
[ "microbes", "complexes", "polymers", "drummers" ]
C
Relavent Documents: Document 0::: A macromolecule is a very large molecule important to biological processes, such as a protein or nucleic acid. It is composed of thousands of covalently bonded atoms. Many macromolecules are polymers of smaller molecules called monomers. The most common macromolecules in biochemistry are biopolymers (nucleic acids, proteins, and carbohydrates) and large non-polymeric molecules such as lipids, nanogels and macrocycles. Synthetic fibers and experimental materials such as carbon nanotubes are also examples of macromolecules. Definition The term macromolecule (macro- + molecule) was coined by Nobel laureate Hermann Staudinger in the 1920s, although his first relevant publication on this field only mentions high molecular compounds (in excess of 1,000 atoms). At that time the term polymer, as introduced by Berzelius in 1832, had a different meaning from that of today: it simply was another form of isomerism for example with benzene and acetylene and had little to do with size. Usage of the term to describe large molecules varies among the disciplines. For example, while biology refers to macromolecules as the four large molecules comprising living things, in chemistry, the term may refer to aggregates of two or more molecules held together by intermolecular forces rather than covalent bonds but which do not readily dissociate. According to the standard IUPAC definition, the term macromolecule as used in polymer science refers only to a single molecule. For example, a single polymeric molecule is appropriately described as a "macromolecule" or "polymer molecule" rather than a "polymer," which suggests a substance composed of macromolecules. Because of their size, macromolecules are not conveniently described in terms of stoichiometry alone. The structure of simple macromolecules, such as homopolymers, may be described in terms of the individual monomer subunit and total molecular mass. Complicated biomacromolecules, on the other hand, require multi-faceted structura Document 1::: The term macromolecular assembly (MA) refers to massive chemical structures such as viruses and non-biologic nanoparticles, cellular organelles and membranes and ribosomes, etc. that are complex mixtures of polypeptide, polynucleotide, polysaccharide or other polymeric macromolecules. They are generally of more than one of these types, and the mixtures are defined spatially (i.e., with regard to their chemical shape), and with regard to their underlying chemical composition and structure. Macromolecules are found in living and nonliving things, and are composed of many hundreds or thousands of atoms held together by covalent bonds; they are often characterized by repeating units (i.e., they are polymers). Assemblies of these can likewise be biologic or non-biologic, though the MA term is more commonly applied in biology, and the term supramolecular assembly is more often applied in non-biologic contexts (e.g., in supramolecular chemistry and nanotechnology). MAs of macromolecules are held in their defined forms by non-covalent intermolecular interactions (rather than covalent bonds), and can be in either non-repeating structures (e.g., as in the ribosome (image) and cell membrane architectures), or in repeating linear, circular, spiral, or other patterns (e.g., as in actin filaments and the flagellar motor, image). The process by which MAs are formed has been termed molecular self-assembly, a term especially applied in non-biologic contexts. 
A wide variety of physical/biophysical, chemical/biochemical, and computational methods exist for the study of MA; given the scale (molecular dimensions) of MAs, efforts to elaborate their composition and structure and discern mechanisms underlying their functions are at the forefront of modern structure science. Biomolecular complex A biomolecular complex, also called a biomacromolecular complex, is any biological complex made of more than one biopolymer (protein, RNA, DNA, carbohydrate) or large non-polymeric biomolecules Document 2::: Biopolymers are natural polymers produced by the cells of living organisms. Like other polymers, biopolymers consist of monomeric units that are covalently bonded in chains to form larger molecules. There are three main classes of biopolymers, classified according to the monomers used and the structure of the biopolymer formed: polynucleotides, polypeptides, and polysaccharides. The Polynucleotides, RNA and DNA, are long polymers of nucleotides. Polypeptides include proteins and shorter polymers of amino acids; some major examples include collagen, actin, and fibrin. Polysaccharides are linear or branched chains of sugar carbohydrates; examples include starch, cellulose, and alginate. Other examples of biopolymers include natural rubbers (polymers of isoprene), suberin and lignin (complex polyphenolic polymers), cutin and cutan (complex polymers of long-chain fatty acids), melanin, and polyhydroxyalkanoates (PHAs). In addition to their many essential roles in living organisms, biopolymers have applications in many fields including the food industry, manufacturing, packaging, and biomedical engineering. Biopolymers versus synthetic polymers A major defining difference between biopolymers and synthetic polymers can be found in their structures. All polymers are made of repetitive units called monomers. Biopolymers often have a well-defined structure, though this is not a defining characteristic (example: lignocellulose): The exact chemical composition and the sequence in which these units are arranged is called the primary structure, in the case of proteins. Many biopolymers spontaneously fold into characteristic compact shapes (see also "protein folding" as well as secondary structure and tertiary structure), which determine their biological functions and depend in a complicated way on their primary structures. Structural biology is the study of the structural properties of biopolymers. In contrast, most synthetic polymers have much simpler and more random (or st Document 3::: This is a list of topics in molecular biology. See also index of biochemistry articles. Document 4::: This is an index of lists of molecules (i.e. by year, number of atoms, etc.). Millions of molecules have existed in the universe since before the formation of Earth. Three of them, carbon dioxide, water and oxygen were necessary for the growth of life. Although humanity had always been surrounded by these substances, it has not always known what they were composed of. 
By century The following is an index of list of molecules organized by time of discovery of their molecular formula or their specific molecule in case of isomers: List of compounds By number of carbon atoms in the molecule List of compounds with carbon number 1 List of compounds with carbon number 2 List of compounds with carbon number 3 List of compounds with carbon number 4 List of compounds with carbon number 5 List of compounds with carbon number 6 List of compounds with carbon number 7 List of compounds with carbon number 8 List of compounds with carbon number 9 List of compounds with carbon number 10 List of compounds with carbon number 11 List of compounds with carbon number 12 List of compounds with carbon number 13 List of compounds with carbon number 14 List of compounds with carbon number 15 List of compounds with carbon number 16 List of compounds with carbon number 17 List of compounds with carbon number 18 List of compounds with carbon number 19 List of compounds with carbon number 20 List of compounds with carbon number 21 List of compounds with carbon number 22 List of compounds with carbon number 23 List of compounds with carbon number 24 List of compounds with carbon numbers 25-29 List of compounds with carbon numbers 30-39 List of compounds with carbon numbers 40-49 List of compounds with carbon numbers 50+ Other lists List of interstellar and circumstellar molecules List of gases List of molecules with unusual names See also Molecule Empirical formula Chemical formula Chemical structure Chemical compound Chemical bond Coordination complex L The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What long molecules are composed of chains of units called monomers? A. microbes B. complexes C. polymers D. drummers Answer:
sciq-11267
multiple_choice
What substances digest the food in the vacuole of an ingestive protist?
[ "hormones", "carbohydrates", "enzymes", "lipids" ]
C
Relavent Documents: Document 0::: ' is the process of absorption of vitamins, minerals, and other chemicals from food as part of the nutrition of an organism. In humans, this is always done with a chemical breakdown (enzymes and acids) and physical breakdown (oral mastication and stomach churning).chemical alteration of substances in the bloodstream by the liver or cellular secretions. Although a few similar compounds can be absorbed in digestion bio assimilation, the bioavailability of many compounds is dictated by this second process since both the liver and cellular secretions can be very specific in their metabolic action (see chirality). This second process is where the absorbed food reaches the cells via the liver. Most foods are composed of largely indigestible components depending on the enzymes and effectiveness of an animal's digestive tract. The most well-known of these indigestible compounds is cellulose; the basic chemical polymer in the makeup of plant cell walls. Most animals, however, do not produce cellulase; the enzyme needed to digest cellulose. However some animal and species have developed symbiotic relationships with cellulase-producing bacteria (see termites and metamonads.) This allows termites to use the energy-dense cellulose carbohydrate. Other such enzymes are known to significantly improve bio-assimilation of nutrients. Because of the use of bacterial derivatives, enzymatic dietary supplements now contain such enzymes as amylase, glucoamylase, protease, invertase, peptidase, lipase, lactase, phytase, and cellulase. Examples of biological assimilation Photosynthesis, a process whereby carbon dioxide and water are transformed into a number of organic molecules in plant cells. Nitrogen fixation from the soil into organic molecules by symbiotic bacteria which live in the roots of certain plants, such as Leguminosae. Magnesium supplements orotate, oxide, sulfate, citrate, and glycerate are all structurally similar. However, oxide and sulfate are not water-soluble Document 1::: Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use. In the human digestive system, food enters the mouth and mechanical digestion of the food starts by the action of mastication (chewing), a form of mechanical digestion, and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food; the saliva also contains mucus, which lubricates the food, and hydrogen carbonate, which provides the ideal conditions of pH (alkaline) for amylase to work, and electrolytes (Na+, K+, Cl−, HCO−3). About 30% of starch is hydrolyzed into disaccharide in the oral cavity (mouth). 
After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damag Document 2::: The food vacuole, or digestive vacuole, is an organelle found in simple eukaryotes such as protists. This organelle is essentially a lysosome. During the stage of the symbiont parasites' lifecycle where it resides within a human (or other mammalian) red blood cell, it is the site of haemoglobin digestion and the formation of the large haemozoin crystals that can be seen under a light microscope. See also Protists Eukaryote Amoeba Lysosome Enzymes Euglenids Paramecia Document 3::: Holozoic nutrition (Greek: holo-whole ; zoikos-of animals) is a type of heterotrophic nutrition that is characterized by the internalization (ingestion) and internal processing of liquids or solid food particles. Protozoa, such as amoebas, and most of the free living animals, such as humans, exhibit this type of nutrition where food is taken into the body as a liquid or solid and then further broken down is known as holozoic nutrition. Most animals exhibit this kind of nutrition. In Holozoic nutrition, the energy and organic building blocks are obtained by ingesting and then digesting other organisms or pieces of other organisms, including blood and decaying organic matter. This contrasts with holophytic nutrition, in which energy and organic building blocks are obtained through photosynthesis or chemosynthesis, and with saprozoic nutrition, in which digestive enzymes are released externally and the resulting monomers (small organic molecules) are absorbed directly from the environment. There are several stages of holozoic nutrition, which often occur in separate compartments within an organism (such as the stomach and intestines): Ingestion: In animals, this is merely taking food in through the mouth. In protozoa, this most commonly occurs through phagocytosis. Digestion: The physical breakdown of complex large and organic food particles and the enzymatic breakdown of complex organic compounds into small, simple molecules. Absorption: The active and passive transport of the chemical products of digestion out of the food-containing compartment and into the body 4. Assimilation: The chemical products used up for various metabolic processes Document 4::: Every organism requires energy to be active. However, to obtain energy from its outside environment, cells must not only retrieve molecules from their surroundings but also break them down. This process is known as intracellular digestion. In its broadest sense, intracellular digestion is the breakdown of substances within the cytoplasm of a cell. In detail, a phagocyte's duty is obtaining food particles and digesting it in a vacuole. For example, following phagocytosis, the ingested particle (or phagosome) fuses with a lysosome containing hydrolytic enzymes to form a phagolysosome; the pathogens or food particles within the phagosome are then digested by the lysosome's enzymes. 
Intracellular digestion can also refer to the process in which animals that lack a digestive tract bring food items into the cell for the purposes of digestion for nutritional needs. This kind of intracellular digestion occurs in many unicellular protozoans, in Pycnogonida, in some molluscs, Cnidaria and Porifera. There is another type of digestion, called extracellular digestion. In amphioxus, digestion is both extracellular and intracellular. Function Intracellular digestion is divided into heterophagic digestion and autophagic digestion. These two types take place in the lysosome and they both have very specific functions. Heterophagic intracellular digestion has an important job which is to break down all molecules that are brought into a cell by endocytosis. The degraded molecules need to be delivered to the cytoplasm; however, this will not be possible if the molecules are not hydrolyzed in the lysosome. Autophagic intracellular digestion is processed in the cell, which means it digests the internal molecules. Autophagy Generally, autophagy includes three small branches, which are macroautophagy, microautophagy, and chaperone-mediated autophagy. Occurrence Most organisms that use intracellular digestion belong to Kingdom Protista, such as amoeba and paramecium. Amoeba Amoeba u The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What substances digest the food in the vacuole of an ingestive protist? A. hormones B. carbohydrates C. enzymes D. lipids Answer:
sciq-10114
multiple_choice
What is composed of very long strands of glucose monomers, is largely indigestible and comprises the cell walls of plants?
[ "cellulose", "vascular cambrium", "tree bark", "chlorophyll" ]
A
Relavent Documents: Document 0::: Glucuronoxylans are the primary components of hemicellulose as found in hardwood trees, for example birch. They are hemicellulosic plant cell wall polysaccharides, containing glucuronic acid and xylose as its main constituents. They are linear polymers of β-D-xylopyranosyl units linked by (1→4) glycosidic bonds, with many of the xylose units substituted with 2, 3 or 2,3-linked glucuronate residue, which are often methylated at position 4. Most of the glucuronoxylans have single 4-O-methyl-α-D-glucopyranosyl uronate residues (MeGlcA) attached at position 2. This structural type is usually named as 4-O-methyl-D-glucurono-D-xylan (MGX). Angiosperm (hardwood) glucuronoxylans also have a high rate of substitution (70-80%) by acetyl groups, at position 2 and/or 3 of the β-D-xylopyranosyl, conferring on the xylan its partial solubility in water. Document 1::: Xylan (; ) (CAS number: 9014-63-5) is a type of hemicellulose, a polysaccharide consisting mainly of xylose residues. It is found in plants, in the secondary cell walls of dicots and all cell walls of grasses. Xylan is the third most abundant biopolymer on Earth, after cellulose and chitin. Composition Xylans are polysaccharides made up of β-1,4-linked xylose (a pentose sugar) residues with side branches of α-arabinofuranose and/or α-glucuronic acids. On the basis of substituted groups xylan can be categorized into three classes i) glucuronoxylan (GX) ii) neutral arabinoxylan (AX) and iii) glucuronoarabinoxylan (GAX). In some cases contribute to cross-linking of cellulose microfibrils and lignin through ferulic acid residues. Occurrence Plant cell structure Xylans play an important role in the integrity of the plant cell wall and increase cell wall recalcitrance to enzymatic digestion; thus, they help plants to defend against herbivores and pathogens (biotic stress). Xylan also plays a significant role in plant growth and development. Typically, xylans content in hardwoods is 10-35%, whereas they are 10-15% in softwoods. The main xylan component in hardwoods is O-acetyl-4-O-methylglucuronoxylan, whereas arabino-4-O-methylglucuronoxylans are a major component in softwoods. In general, softwood xylans differ from hardwood xylans by the lack of acetyl groups and the presence of arabinose units linked by α-(1,3)-glycosidic bonds to the xylan backbone. Algae Some macrophytic green algae contain xylan (specifically homoxylan) especially those within the Codium and Bryopsis genera where it replaces cellulose in the cell wall matrix. Similarly, it replaces the inner fibrillar cell-wall layer of cellulose in some red algae. Document 2::: Ergastic substances are non-protoplasmic materials found in cells. The living protoplasm of a cell is sometimes called the bioplasm and distinct from the ergastic substances of the cell. The latter are usually organic or inorganic substances that are products of metabolism, and include crystals, oil drops, gums, tannins, resins and other compounds that can aid the organism in defense, maintenance of cellular structure, or just substance storage. Ergastic substances may appear in the protoplasm, in vacuoles, or in the cell wall. Carbohydrates Reserve carbohydrate of plants are the derivatives of the end products of photosynthesis. Cellulose and starch are the main ergastic substances of plant cells. Cellulose is the chief component of the cell wall, and starch occurs as a reserve material in the protoplasm. 
Starch, as starch grains, arise almost exclusively in plastids, especially leucoplasts and amyloplasts. Proteins Although proteins are the main component of living protoplasm, proteins can occur as inactive, ergastic bodies—in an amorphous or crystalline (or crystalloid) form. A well-known amorphous ergastic protein is gluten. Fats and oils Fats (lipids) and oils are widely distributed in plant tissues. Substances related to fats—waxes, suberin, and cutin—occur as protective layers in or on the cell wall. Crystals Animals eliminate excess inorganic materials; plants mostly deposit such material in their tissues. Such mineral matter is mostly salts of calcium and anhydrides of silica. Raphides are a type of elongated crystalline form of calcium oxalate aggregated in bundles within a plant cell. Because of the needle-like form, large numbers in the tissue of, say, a leaf can render the leaf unpalatable to herbivores (see Dieffenbachia and taro). Druse Cystolith Document 3::: Suberin, cutin and lignins are complex, higher plant epidermis and periderm cell-wall macromolecules, forming a protective barrier. Suberin, a complex polyester biopolymer, is lipophilic, and composed of long chain fatty acids called suberin acids, and glycerol. Suberins and lignins are considered covalently linked to lipids and carbohydrates, respectively, and lignin is covalently linked to suberin, and to a lesser extent, to cutin. Suberin is a major constituent of cork, and is named after the cork oak, Quercus suber. Its main function is as a barrier to movement of water and solutes. Anatomy and physiology Suberin is highly hydrophobic and a somewhat 'rubbery' material. In roots, suberin is deposited in the radial and transverse/tangential cell walls of the endodermal cells. This structure, known as the Casparian strip or Casparian band, functions to prevent water and nutrients taken up by the root from entering the stele through the apoplast. Instead, water must bypass the endodermis via the symplast. This allows the plant to select the solutes that pass further into the plant. It thus forms an important barrier to harmful solutes. For example, mangroves use suberin to minimize salt intake from their littoral habitat. Suberin is found in the phellem layer of the periderm (or cork). This is outermost layer of the bark. The cells in this layer are dead and abundant in suberin, preventing water loss from the tissues below. Suberin can also be found in various other plant structures. For example, they are present in the lenticels on the stems of many plants and the net structure in the rind of a netted melon is composed of suberised cells. Structure and biosynthesis Suberin consists of two domains, a polyaromatic and a polyaliphatic domain. The polyaromatics are predominantly located within the primary cell wall, and the polyaliphatics are located between the primary cell wall and the cell membrane. The two domains are supposed to be cross-linked. The exact quali Document 4::: A microfibril is a very fine fibril, or fiber-like strand, consisting of glycoproteins and cellulose. It is usually, but not always, used as a general term in describing the structure of protein fiber, e.g. hair and sperm tail. Its most frequently observed structural pattern is the 9+2 pattern in which two central protofibrils are surrounded by nine other pairs. Cellulose inside plants is one of the examples of non-protein compounds that are using this term with the same purpose. 
Cellulose microfibrils are laid down in the inner surface of the primary cell wall. As the cell absorbs water, its volume increases and the existing microfibrils separate and new ones are formed to help increase cell strength. Synthesis and function Cellulose is synthesized by cellulose synthase or Rosette terminal complexes which reside on a cells membrane. As cellulose fibrils are synthesized and grow extracellularly they push up against neighboring cells. Since the neighboring cell can not move easily the Rosette complex is instead pushed around the cell through the fluid phospholipid membrane. Eventually this results in the cell becoming wrapped in a microfibril layer. This layer becomes the cell wall. The organization of microfibrils forming the primary cell wall is rather disorganized. However, another mechanism is used in secondary cell walls leading to its organization. Essentially, lanes on the secondary cell wall are built with microtubules. These lanes force microfibrils to remain in a certain area while they wrap. During this process microtubules can spontaneously depolymerize and repolymerize in a different orientation. This leads to a different direction in which the cell continues getting wrapped. Fibrillin microfibrils are found in connective tissues, which mainly makes up fibrillin-1 and provides elasticity. During the assembly, mirofibrils exhibit a repeating stringed-beads arrangement produced by the cross-linking of molecules forming a striated pattern with a given The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is composed of very long strands of glucose monomers, is largely indigestible and comprises the cell walls of plants? A. cellulose B. vascular cambrium C. tree bark D. chlorophyll Answer:
sciq-5946
multiple_choice
Vertebrates have tissues which are organized into organs which in turn are organized into what?
[ "artificial systems", "organ systems", "information systems", "maturation systems" ]
B
Relavent Documents: Document 0::: In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from same type cells to act together in a function. Tissues of different types combine to form an organ which has a specific function. The intestinal wall for example is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system. An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs. The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body. Animals Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam Document 1::: A cell type is a classification used to identify cells that share morphological or phenotypical features. A multicellular organism may contain cells of a number of widely differing and specialized cell types, such as muscle cells and skin cells, that differ both in appearance and function yet have identical genomic sequences. Cells may have the same genotype, but belong to different cell types due to the differential regulation of the genes they contain. Classification of a specific cell type is often done through the use of microscopy (such as those from the cluster of differentiation family that are commonly used for this purpose in immunology). Recent developments in single cell RNA sequencing facilitated classification of cell types based on shared gene expression patterns. This has led to the discovery of many new cell types in e.g. mouse cortex, hippocampus, dorsal root ganglion and spinal cord. Animals have evolved a greater diversity of cell types in a multicellular body (100–150 different cell types), compared with 10–20 in plants, fungi, and protists. The exact number of cell types is, however, undefined, and the Cell Ontology, as of 2021, lists over 2,300 different cell types. Multicellular organisms All higher multicellular organisms contain cells specialised for different functions. Most distinct cell types arise from a single totipotent cell that differentiates into hundreds of different cell types during the course of development. 
Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division). Multicellular organisms are composed of cells that fall into two fundamental types: germ cells and somatic cells. During development, somatic cells will become more specialized and form the three primary germ layers: ectoderm, mesoderm, and endoderm. After formation of the three germ layers, cells will continue to special Document 2::: This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines. Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry) other on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mindneuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, it has provided information on certain diseases which has overall aided in the understanding of human health. Basic life science branches Biology – scientific study of life Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans Astrobiology – the study of the formation and presence of life in the universe Bacteriology – study of bacteria Biotechnology – study of combination of both the living organism and technology Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge Biolinguistics – the study of the biology and evolution of language. Biological anthropology – the study of humans, non-hum Document 3::: A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and are determined based different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism. Organ and tissue systems These specific systems are widely studied in human anatomy and are also present in many other animals. Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm. 
Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus. Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels. Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine. Integumentary system: skin, hair, fat, and nails. Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons. Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands. Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. The lymphatic system includes functions including immune responses and development of antibodies. Immune system: protects the organism from Document 4::: H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue H2.00.05.2.00001: Striated muscle tissue H2.00.06.0.00001: Nerve tissue H2.00.06.1.00001: Neuron H2.00.06.2.00001: Synapse H2.00.06.2.00001: Neuroglia h3.01: Bones h3.02: Joints h3.03: Muscles h3.04: Alimentary system h3.05: Respiratory system h3.06: Urinary system h3.07: Genital system h3.08: The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Vertebrates have tissues which are organized into organs which in turn are organized into what? A. artificial systems B. organ systems C. information systems D. maturation systems Answer:
sciq-7290
multiple_choice
DNA segments cross over to form what kind of chromosome?
[ "mutated", "recombinant", "resistant", "autosome" ]
B
Relavent Documents: Document 0::: Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices". This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions. Topic outline The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area: The course is based on and tests six skills, called scientific practices which include: In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions. Exam Students are allowed to use a four-function, scientific, or graphing calculator. The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score. Score distribution Commonly used textbooks Biology, AP Edition by Sylvia Mader (2012, hardcover ) Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, ) Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson ) See also Glossary of biology A.P Bio (TV Show) Document 1::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. 
There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 2::: In addition to the normal karyotype, wild populations of many animal, plant, and fungi species contain B chromosomes (also known as supernumerary, accessory, (conditionally-)dispensable, or lineage-specific chromosomes). By definition, these chromosomes are not essential for the life of a species, and are lacking in some (usually most) of the individuals. Thus a population would consist of individuals with 0, 1, 2, 3 (etc.) B chromosomes. B chromosomes are distinct from marker chromosomes or additional copies of normal chromosomes as they occur in trisomies. Origin The evolutionary origin of supernumerary chromosomes is obscure, but presumably, they must have been derived from heterochromatic segments of normal chromosomes in the remote past. In general "we may regard supernumeraries as a very special category of genetic polymorphism which, because of manifold types of accumulation mechanisms, does not obey the ordinary Mendelian laws of inheritance." (White 1973 p173) Next generation sequencing has shown that the B chromosomes from rye are amalgamations of the rye A chromosomes. Similarly, B chromosomes of the cichlid fish Haplochromis latifasciatus also have been shown to arise from rearrangements of normal A chromosomes. Function Most B chromosomes are mainly or entirely heterochromatic (i.e. largely non-coding), but some contain sizeable euchromatic segments (e.g. such as the B chromosomes of maize). In some cases, B chromosomes act as selfish genetic elements. In other cases, B chromosomes provide some positive adaptive advantage. For instance, the British grasshopper Myrmeleotettix maculatus has two structural types of B chromosomes: metacentrics and submetacentric. The supernumeraries, which have a satellite DNA, occur in warm, dry environments, and are scarce or absent in humid, cooler localities. There is evidence of deleterious effects of supernumeraries on pollen fertility, and favourable effects or associations with particular habitats are also kno Document 3::: Genetics (from Ancient Greek , “genite” and that from , “origin”), a discipline of biology, is the science of heredity and variation in living organisms. Articles (arranged alphabetically) related to genetics include: # A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Document 4::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. 
Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Dna segments cross over to form what kind of chromosome? A. mutated B. recombinant C. resistant D. autosome Answer:
sciq-793
multiple_choice
A chemical property describes the ability of a substance to undergo a specific what?
[ "weight change", "radiation change", "chemical change", "liquid change" ]
C
Relavent Documents: Document 0::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation. Magnetism Ferro-magnetic materials can become magnetic. The process is reve Document 1::: Physical biochemistry is a branch of biochemistry that deals with the theory, techniques, and methodology used to study the physical chemistry of biomolecules. It also deals with the mathematical approaches for the analysis of biochemical reaction and the modelling of biological systems. It provides insight into the structure of macromolecules, and how chemical structure influences the physical properties of a biological substance. It involves the use of physics, physical chemistry principles, and methodology to study biological systems. It employs various physical chemistry techniques such as chromatography, spectroscopy, Electrophoresis, X-ray crystallography, electron microscopy, and hydrodynamics. See also Physical chemistry Document 2::: A material property is an intensive property of a material, i.e., a physical property or chemical property that does not depend on the amount of the material. These quantitative properties may be used as a metric by which the benefits of one material versus another can be compared, thereby aiding in materials selection. A property having a fixed value for a given material or substance is called material constant or constant of matter. (Material constants should not be confused with physical constants, that have a universal character.) A material property may also be a function of one or more independent variables, such as temperature. 
Materials properties often vary to some degree according to the direction in the material in which they are measured, a condition referred to as anisotropy. Materials properties that relate to different physical phenomena often behave linearly (or approximately so) in a given operating range . Modeling them as linear functions can significantly simplify the differential constitutive equations that are used to describe the property. Equations describing relevant materials properties are often used to predict the attributes of a system. The properties are measured by standardized test methods. Many such methods have been documented by their respective user communities and published through the Internet; see ASTM International. Acoustical properties Acoustical absorption Speed of sound Sound reflection Sound transfer Third order elasticity (Acoustoelastic effect) Atomic properties Atomic mass: (applies to each element) the average mass of the atoms of an element, in daltons (Da), a.k.a. atomic mass units (amu). Atomic number: (applies to individual atoms or pure elements) the number of protons in each nucleus Relative atomic mass, a.k.a. atomic weight: (applies to individual isotopes or specific mixtures of isotopes of a given element) (no units) Standard atomic weight: the average relative atomic mass of a typical sample of the ele Document 3::: A generalized compound is a mixture of chemical compounds of constant composition, despite possible changes in the total amount. The concept is used in the Dynamic Energy Budget theory, where biomass is partitioned into a limited set of generalised compounds, which contain a high percentage of organic compounds. The amount of generalized compound can be quantified in terms of weight, but more conveniently in terms of C-moles. The concept of strong homeostasis has an intimate relationship with that of generalised compound. Document 4::: In chemistry, yield, also referred to as reaction yield, is a measure of the quantity of moles of a product formed in relation to the reactant consumed, obtained in a chemical reaction, usually expressed as a percentage. Yield is one of the primary factors that scientists must consider in organic and inorganic chemical synthesis processes. In chemical reaction engineering, "yield", "conversion" and "selectivity" are terms used to describe ratios of how much of a reactant was consumed (conversion), how much desired product was formed (yield) in relation to the undesired product (selectivity), represented as X, Y, and S. Definitions In chemical reaction engineering, "yield", "conversion" and "selectivity" are terms used to describe ratios of how much of a reactant has reacted—conversion, how much of a desired product was formed—yield, and how much desired product was formed in ratio to the undesired product—selectivity, represented as X,S, and Y. According to the Elements of Chemical Reaction Engineering manual, yield refers to the amount of a specific product formed per mole of reactant consumed. In chemistry, mole is used to describe quantities of reactants and products in chemical reactions. The Compendium of Chemical Terminology defined yield as the "ratio expressing the efficiency of a mass conversion process. The yield coefficient is defined as the amount of cell mass (kg) or product formed (kg,mol) related to the consumed substrate (carbon or nitrogen source or oxygen in kg or moles) or to the intracellular ATP production (moles)." 
In the section "Calculations of yields in the monitoring of reactions" in the 1996 4th edition of Vogel's Textbook of Practical Organic Chemistry (1978), the authors write that, "theoretical yield in an organic reaction is the weight of product which would be obtained if the reaction has proceeded to completion according to the chemical equation. The yield is the weight of the pure product which is isolated from the react The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. A chemical property describes the ability of a substance to undergo a specific what? A. weight Change B. radiation change C. chemical change D. liquid change Answer:
sciq-1302
multiple_choice
Minerals that are not compounds consist of a single what?
[ "proton", "gas", "element", "electron" ]
C
Relavent Documents: Document 0::: In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects. Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting. Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete. Study Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the Document 1::: See also List of minerals Document 2::: A nonmetal is a chemical element that mostly lacks metallic properties. Seventeen elements are generally considered nonmetals, though some authors recognize more or fewer depending on the properties considered most representative of metallic or nonmetallic character. Some borderline elements further complicate the situation. Nonmetals tend to have low density and high electronegativity (the ability of an atom in a molecule to attract electrons to itself). They range from colorless gases like hydrogen to shiny solids like the graphite form of carbon. Nonmetals are often poor conductors of heat and electricity, and when solid tend to be brittle or crumbly. In contrast, metals are good conductors and most are pliable. While compounds of metals tend to be basic, those of nonmetals tend to be acidic. The two lightest nonmetals, hydrogen and helium, together make up about 98% of the observable ordinary matter in the universe by mass. Five nonmetallic elements—hydrogen, carbon, nitrogen, oxygen, and silicon—make up the overwhelming majority of the Earth's crust, atmosphere, oceans and biosphere. The distinct properties of nonmetallic elements allow for specific uses that metals often cannot achieve. Elements like hydrogen, oxygen, carbon, and nitrogen are essential building blocks for life itself. Moreover, nonmetallic elements are integral to industries such as electronics, energy storage, agriculture, and chemical production. 
Most nonmetallic elements were not identified until the 18th and 19th centuries. While a distinction between metals and other minerals had existed since antiquity, a basic classification of chemical elements as metallic or nonmetallic emerged only in the late 18th century. Since then nigh on two dozen properties have been suggested as single criteria for distinguishing nonmetals from metals. Definition and applicable elements Properties mentioned hereafter refer to the elements in their most stable forms in ambient conditions unless otherwise Document 3::: Mineral tests are several methods which can help identify the mineral type. This is used widely in mineralogy, hydrocarbon exploration and general mapping. There are over 4000 types of minerals known with each one with different sub-classes. Elements make minerals and minerals make rocks so actually testing minerals in the lab and in the field is essential to understand the history of the rock which aids data, zonation, metamorphic history, processes involved and other minerals. The following tests are used on specimen and thin sections through polarizing microscope. Color Color of the mineral. This is not mineral specific. For example quartz can be almost any color, shape and within many rock types. Streak Color of the mineral's powder. This can be found by rubbing the mineral onto a concrete. This is more accurate but not always mineral specific. Lustre This is the way light reflects from the mineral's surface. A mineral can be metallic (shiny) or non-metallic (not shiny). Transparency The way light travels through minerals. The mineral can be transparent (clear), translucent (cloudy) or opaque (none). Specific gravity Ratio between the weight of the mineral relative to an equal volume of water. Mineral habitat The shape of the crystal and habitat. Magnetism Magnetic or nonmagnetic. Can be tested by using a magnet or a compass. This does not apply to all ion minerals (for example, pyrite). Cleavage Number, behaviour, size and way cracks fracture in the mineral. UV fluorescence Many minerals glow when put under a UV light. Radioactivity Is the mineral radioactive or non-radioactive? This is measured by a Geiger counter. Taste This is not recommended. Is the mineral salty, bitter or does it have no taste? Bite Test This is not recommended. This involves biting a mineral to see if its generally soft or hard. This was used in early gold exploration to tell the difference between pyrite (fools gold, hard) and gold (soft). Hardness The Mohs Hardn Document 4::: In crystallography, the term polysome is used to describe overall mineral structures which have structurally and compositionally different framework structures. A general example is amphiboles, in which cutting along the {010} plane yields alternating layers of pyroxene and trioctahedral mica. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Minerals that are not compounds consist of a single what? A. proton B. gas C. element D. electron Answer:
sciq-4948
multiple_choice
What sound can be heard when sound waves bounce back from a hard object?
[ "loop", "eerie", "echo", "boom" ]
C
Relavent Documents: Document 0::: A sonic boom is a sound associated with shock waves created when an object travels through the air faster than the speed of sound. Sonic booms generate enormous amounts of sound energy, sounding similar to an explosion or a thunderclap to the human ear. The crack of a supersonic bullet passing overhead or the crack of a bullwhip are examples of a sonic boom in miniature. Sonic booms due to large supersonic aircraft can be particularly loud and startling, tend to awaken people, and may cause minor damage to some structures. This led to prohibition of routine supersonic flight overland. Although they cannot be completely prevented, research suggests that with careful shaping of the vehicle, the nuisance due to the sonic booms may be reduced to the point that overland supersonic flight may become a feasible option. A sonic boom does not occur only at the moment an object crosses the sound barrier and neither is it heard in all directions emanating from the supersonic object. Rather, the boom is a continuous effect that occurs while the object is travelling at supersonic speeds and affects only observers that are positioned at a point that intersects a region in the shape of a geometrical cone behind the object. As the object moves, this conical region also moves behind it and when the cone passes over the observer, they will briefly experience the "boom". Causes When an aircraft passes through the air, it creates a series of pressure waves in front of the aircraft and behind it, similar to the bow and stern waves created by a boat. These waves travel at the speed of sound and, as the speed of the object increases, the waves are forced together, or compressed, because they cannot get out of each other's way quickly enough. Eventually they merge into a single shock wave, which travels at the speed of sound, a critical speed known as Mach 1, and is approximately at sea level and . In smooth flight, the shock wave starts at the nose of the aircraft and ends at the ta Document 1::: This is a list of wave topics. 
0–9 21 cm line A Abbe prism Absorption spectroscopy Absorption spectrum Absorption wavemeter Acoustic wave Acoustic wave equation Acoustics Acousto-optic effect Acousto-optic modulator Acousto-optics Airy disc Airy wave theory Alfvén wave Alpha waves Amphidromic point Amplitude Amplitude modulation Animal echolocation Antarctic Circumpolar Wave Antiphase Aquamarine Power Arrayed waveguide grating Artificial wave Atmospheric diffraction Atmospheric wave Atmospheric waveguide Atom laser Atomic clock Atomic mirror Audience wave Autowave Averaged Lagrangian B Babinet's principle Backward wave oscillator Bandwidth-limited pulse beat Berry phase Bessel beam Beta wave Black hole Blazar Bloch's theorem Blueshift Boussinesq approximation (water waves) Bow wave Bragg diffraction Bragg's law Breaking wave Bremsstrahlung, Electromagnetic radiation Brillouin scattering Bullet bow shockwave Burgers' equation Business cycle C Capillary wave Carrier wave Cherenkov radiation Chirp Ernst Chladni Circular polarization Clapotis Closed waveguide Cnoidal wave Coherence (physics) Coherence length Coherence time Cold wave Collimated light Collimator Compton effect Comparison of analog and digital recording Computation of radiowave attenuation in the atmosphere Continuous phase modulation Continuous wave Convective heat transfer Coriolis frequency Coronal mass ejection Cosmic microwave background radiation Coulomb wave function Cutoff frequency Cutoff wavelength Cymatics D Damped wave Decollimation Delta wave Dielectric waveguide Diffraction Direction finding Dispersion (optics) Dispersion (water waves) Dispersion relation Dominant wavelength Doppler effect Doppler radar Douglas Sea Scale Draupner wave Droplet-shaped wave Duhamel's principle E E-skip Earthquake Echo (phenomenon) Echo sounding Echolocation (animal) Echolocation (human) Eddy (fluid dynamics) Edge wave Eikonal equation Ekman layer Ekman spiral Ekman transport El Niño–Southern Oscillation El Document 2::: In physics, a mechanical wave is a wave that is an oscillation of matter, and therefore transfers energy through a medium. While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial equilibrium position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves. Some of the most common examples of mechanical waves are water waves, sound waves, and seismic waves. Like all waves, mechanical waves transport energy. This energy propagates in the same direction as the wave. A wave requires an initial energy input; once this initial energy is added, the wave travels through the medium until all its energy is transferred. In contrast, electromagnetic waves require no medium, but can still travel through one. Transverse wave A transverse wave is the form of a wave in which particles of medium vibrate about their mean position perpendicular to the direction of the motion of the wave. To see an example, move an end of a Slinky (whose other end is fixed) to the left-and-right of the Slinky, as opposed to to-and-fro. Light also has properties of a transverse wave, although it is an electromagnetic wave. Longitudinal wave Longitudinal waves cause the medium to vibrate parallel to the direction of the wave. It consists of multiple compressions and rarefactions. 
The rarefaction is the farthest distance apart in the longitudinal wave and the compression is the closest distance together. The speed of the longitudinal wave is increased in higher index of refraction, due to the closer proximity of the atoms in the medium that is being compressed. Sound is a longitudinal wave. Surface waves This type of wave travels along the surface or interface between two media. An example of a surface wave would be waves in a pool, or in an ocean Document 3::: Singing sand, also called whistling sand, barking sand or singing dune, is sand that produces sound. The sound emission may be caused by wind passing over dunes or by walking on the sand. Certain conditions have to come together to create singing sand: The sand grains have to be round and between 0.1 and 0.5 mm in diameter. The sand has to contain silica. The sand needs to be at a certain humidity. The most common frequency emitted seems to be close to 450 Hz. There are various theories about the singing sand mechanism. It has been proposed that the sound frequency is controlled by the shear rate. Others have suggested that the frequency of vibration is related to the thickness of the dry surface layer of sand. The sound waves bounce back and forth between the surface of the dune and the surface of the moist layer, creating a resonance that increases the sound's volume. The noise may be generated by friction between the grains or by the compression of air between them. Other sounds that can be emitted by sand have been described as "roaring" or "booming". In dunes Singing sand dunes, an example of the phenomenon of singing sand, produce a sound described as roaring, booming, squeaking, or the "Song of Dunes". This is a natural sound phenomenon of up to 105 decibels, lasting as long as several minutes, that occurs in about 35 desert locations around the world. The sound is similar to a loud low-pitch rumble. It emanates from crescent-shaped dunes, or barchans. The sound emission accompanies a slumping or avalanching movement of sand, usually triggered by wind passing over the dune or by someone walking near the crest. Examples of singing sand dunes include California's Kelso Dunes and Eureka Dunes; AuTrain Beach in Northern Michigan; sugar sand beaches and Warren Dunes in southwestern Michigan; Sand Mountain in Nevada; the Booming Dunes in the Namib Desert, Africa; Porth Oer (also known as Whistling Sands) near Aberdaron in Wales; Indiana Dunes in Indiana; Document 4::: Long delayed echoes (LDEs) are radio echoes which return to the sender several seconds after a radio transmission has occurred. Delays of longer than 2.7 seconds are considered LDEs. LDEs have a number of proposed scientific origins. History These echoes were first observed in 1927 by civil engineer and amateur radio operator Jørgen Hals from his home near Oslo, Norway. Hals had repeatedly observed an unexpected second radio echo with a significant time delay after the primary radio echo ended. Unable to account for this strange phenomenon, he wrote a letter to Norwegian physicist Carl Størmer, explaining the event: At the end of the summer of 1927 I repeatedly heard signals from the Dutch short-wave transmitting station PCJJ at Eindhoven. At the same time as I heard these I also heard echoes. I heard the usual echo which goes round the Earth with an interval of about 1/7 of a second as well as a weaker echo about three seconds after the principal echo had gone. 
When the principal signal was especially strong, I suppose the amplitude for the last echo three seconds later, lay between 1/10 and 1/20 of the principal signal in strength. From where this echo comes I cannot say for the present, I can only confirm that I really heard it. Physicist Balthasar van der Pol helped Hals and Stormer investigate the echoes, but due to the sporadic nature of the echo events and variations in time-delay, did not find a suitable explanation. Long delayed echoes have been heard sporadically from the first observations in 1927 and up to the present day. Five hypotheses Shlionskiy lists 15 possible natural explanations in two groups: reflections in outer space, and reflections within the Earth's magnetosphere. Vidmar and Crawford suggest five of them are the most likely. Sverre Holm, professor of signal processing at the University of Oslo details those five; in summary, Ducting in the Earth's magnetosphere and ionosphere at low HF frequencies (1–4 MHz). Some similarities with The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What sound can be heard when sound waves bounce back from a hard object? A. loop B. eerie C. echo D. boom Answer:
sciq-147
multiple_choice
What type of reproduction usually occurs during times of environmental stress?
[ "sexual reproduction", "internal reproduction", "hysterical reproduction", "asexual reproduction" ]
A
Relavent Documents: Document 0::: Reproductive biology includes both sexual and asexual reproduction. Reproductive biology includes a wide number of fields: Reproductive systems Endocrinology Sexual development (Puberty) Sexual maturity Reproduction Fertility Human reproductive biology Endocrinology Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands. Reproductive systems Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring. Female reproductive system The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth. These structures include: Ovaries Oviducts Uterus Vagina Mammary Glands Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female. Male reproductive system The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia. Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract. Animal Reproductive Biology Animal reproduction oc Document 1::: Apicomplexans, a group of intracellular parasites, have life cycle stages that allow them to survive the wide variety of environments they are exposed to during their complex life cycle. Each stage in the life cycle of an apicomplexan organism is typified by a cellular variety with a distinct morphology and biochemistry. Not all apicomplexa develop all the following cellular varieties and division methods. This presentation is intended as an outline of a hypothetical generalised apicomplexan organism. Methods of asexual replication Apicomplexans (sporozoans) replicate via ways of multiple fission (also known as schizogony). These ways include , and , although the latter is sometimes referred to as schizogony, despite its general meaning. Merogony is an asexually reproductive process of apicomplexa. After infecting a host cell, a trophozoite (see glossary below) increases in size while repeatedly replicating its nucleus and other organelles. During this process, the organism is known as a or . Cytokinesis next subdivides the multinucleated schizont into numerous identical daughter cells called merozoites (see glossary below), which are released into the blood when the host cell ruptures. Organisms whose life cycles rely on this process include Theileria, Babesia, Plasmodium, and Toxoplasma gondii. Sporogony is a type of sexual and asexual reproduction. It involves karyogamy, the formation of a zygote, which is followed by meiosis and multiple fission. This results in the production of sporozoites. 
Other forms of replication include and . Endodyogeny is a process of asexual reproduction, favoured by parasites such as Toxoplasma gondii. It involves an unusual process in which two daughter cells are produced inside a mother cell, which is then consumed by the offspring prior to their separation. Endopolygeny is the division into several organisms at once by internal budding. Glossary of cell types Infectious stages A (ancient Greek , seed + , animal) is th Document 2::: In biology, a biological life cycle (or just life cycle when the biological context is clear) is a series of stages of the life of an organism, that begins as a zygote, often in an egg, and concludes as an adult that reproduces, producing an offspring in the form of a new zygote which then itself goes through the same series of stages, the process repeating in a cyclic fashion. "The concept is closely related to those of the life history, development and ontogeny, but differs from them in stressing renewal." Transitions of form may involve growth, asexual reproduction, or sexual reproduction. In some organisms, different "generations" of the species succeed each other during the life cycle. For plants and many algae, there are two multicellular stages, and the life cycle is referred to as alternation of generations. The term life history is often used, particularly for organisms such as the red algae which have three multicellular stages (or more), rather than two. Life cycles that include sexual reproduction involve alternating haploid (n) and diploid (2n) stages, i.e., a change of ploidy is involved. To return from a diploid stage to a haploid stage, meiosis must occur. In regard to changes of ploidy, there are three types of cycles: haplontic life cycle — the haploid stage is multicellular and the diploid stage is a single cell, meiosis is "zygotic". diplontic life cycle — the diploid stage is multicellular and haploid gametes are formed, meiosis is "gametic". haplodiplontic life cycle (also referred to as diplohaplontic, diplobiontic, or dibiontic life cycle) — multicellular diploid and haploid stages occur, meiosis is "sporic". The cycles differ in when mitosis (growth) occurs. Zygotic meiosis and gametic meiosis have one mitotic stage: mitosis occurs during the n phase in zygotic meiosis and during the 2n phase in gametic meiosis. Therefore, zygotic and gametic meiosis are collectively termed "haplobiontic" (single mitotic phase, not to be confused with ha Document 3::: Paratomy is a form of asexual reproduction in animals where the organism splits in a plane perpendicular to the antero-posterior axis and the split is preceded by the "pregeneration" of the anterior structures in the posterior portion. The developing organisms have their body axis aligned, i.e., they develop in a head to tail fashion. Budding can be considered to be similar to paratomy except that the body axes need not be aligned: the new head may grow toward the side or even point backward (e.g. Convolutriloba retrogemma an acoel flat worm). In animals that undergo fast paratomy a chain of zooids packed in a head to tail formation may develop. Many oligochaete annelids, acoelous turbellarians, echinoderm larvae and coelenterates reproduce by this method. See also External resources This paper has a detailed description of the changes during paratomy. Document 4::: The "Vicar of Bray" hypothesis (or Fisher-Muller Model) attempts to explain why sexual reproduction might have advantages over asexual reproduction. 
Reproduction is the process by which organisms give rise to offspring. Asexual reproduction involves a single parent and results in offspring that are genetically identical to each other and to the parent. In contrast to asexual reproduction, sexual reproduction involves two parents. Both the parents produce gametes through meiosis, a special type of cell division that reduces the chromosome number by half. During an early stage of meiosis, before the chromosomes are separated in the two daughter cells, the chromosomes undergo genetic recombination. This allows them to exchange some of their genetic information. Therefore, the gametes from a single organism are all genetically different from each other. The process in which the two gametes from the two parents unite is called fertilization. Half of the genetic information from both parents is combined. This results in offspring that are genetically different from each other and from the parents. In short, sexual reproduction allows a continuous rearrangement of genes. Therefore, the offspring of a population of sexually reproducing individuals will show a more varied selection of phenotypes. Due to faster attainment of favorable genetic combinations, sexually reproducing populations evolve more rapidly in response to environmental changes. Under the Vicar of Bray hypothesis, sex benefits a population as a whole, but not individuals within it, making it a case of group selection. Disadvantage of sexual reproduction Sexual reproduction often takes a lot of effort. Finding a mate can sometimes be an expensive, risky and time consuming process. Courtship, copulation and taking care of the new born offspring may also take up a lot of time and energy. From this point of view, asexual reproduction may seem a lot easier and more efficient. But another important thing to co The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of reproduction usually occur during times of environmental stress? A. sexual reproduction B. internal reproduction C. hysterical reproduction D. asexual reproduction Answer:
sciq-9318
multiple_choice
A crucial function of the cranial nerves is to keep visual stimuli centered on the fovea of what eye structure?
[ "retina", "pupil", "sclera", "iris" ]
A
Relavent Documents: Document 0::: The fixation reflex is that concerned with attracting the eye on a peripheral object. For example, when a light shines in the periphery, the eyes shift gaze on it. It is controlled by the occipital lobe of the cerebral cortex, corroborated by three main tests: Removal of cortex causes shutdown of this reflex Drawing a figure on the cortex surface will cause eye movements in the direction traveled Detecting an image by recording the actual signals from the eyes Older research declares that a motor pathway from the occipital cortex to the brainstem motor neurons was via the superior colliculi. This is the case in lower animals, but in humans, the theory that eye-muscle nuclei aside from the superior colliculi of the midbrain is now generally held. When an object is focused directly at an object but the eyes drift off their target, the fixation reflex keeps the eyes focused on the original object, albeit moving itself. See also Nystagmus Saccade Bibliography "eye, human."Encyclopædia Britannica. 2008. Encyclopædia Britannica 2006 Ultimate Reference Suite DVD Reflexes Vision Document 1::: The oculomotor nerve, also known as the third cranial nerve, cranial nerve III, or simply CN III, is a cranial nerve that enters the orbit through the superior orbital fissure and innervates extraocular muscles that enable most movements of the eye and that raise the eyelid. The nerve also contains fibers that innervate the intrinsic eye muscles that enable pupillary constriction and accommodation (ability to focus on near objects as in reading). The oculomotor nerve is derived from the basal plate of the embryonic midbrain. Cranial nerves IV and VI also participate in control of eye movement. Structure The oculomotor nerve originates from the third nerve nucleus at the level of the superior colliculus in the midbrain. The third nerve nucleus is located ventral to the cerebral aqueduct, on the pre-aqueductal grey matter. The fibers from the two third nerve nuclei located laterally on either side of the cerebral aqueduct then pass through the red nucleus. From the red nucleus fibers then pass via the substantia nigra to emerge from the substance of the brainstem at the oculomotor sulcus (a groove on the lateral wall of the interpeduncular fossa). On emerging from the brainstem, the nerve is invested with a sheath of pia mater, and enclosed in a prolongation from the arachnoid. It passes between the superior cerebellar (below) and posterior cerebral arteries (above), and then pierces the dura mater anterior and lateral to the posterior clinoid process, passing between the free and attached borders of the tentorium cerebelli. It traverses the cavernous sinus, above the other orbital nerves receiving in its course one or two filaments from the cavernous plexus of the sympathetic nervous system, and a communicating branch from the ophthalmic division of the trigeminal nerve. As the oculomotor nerve enters the orbit via the superior orbital fissure it then divides into a superior and an inferior branch. Superior branch The superior branch of the oculomotor nerve or Document 2::: The ovarian cortex is the outer portion of the ovary. The ovarian follicles are located within the ovarian cortex. The ovarian cortex is made up of connective tissue. Ovarian cortex tissue transplant has been performed to treat infertility. Document 3::: The middle cranial fossa is formed by the sphenoid bones, and the temporal bones. It lodges the temporal lobes, and the pituitary gland. 
It is deeper than the anterior cranial fossa, is narrow medially and widens laterally to the sides of the skull. It is separated from the posterior cranial fossa by the clivus and the petrous crest. It is bounded in front by the posterior margins of the lesser wings of the sphenoid bone, the anterior clinoid processes, and the ridge forming the anterior margin of the chiasmatic groove; behind, by the superior angles of the petrous portions of the temporal bones and the dorsum sellae; laterally by the temporal squamae, sphenoidal angles of the parietals, and greater wings of the sphenoid. It is traversed by the squamosal, sphenoparietal, sphenosquamosal, and sphenopetrosal sutures. Anatomy Features Middle part The middle part of the fossa presents, in front, the chiasmatic groove and tuberculum sellae; the chiasmatic groove ends on either side at the optic foramen, which transmits the optic nerve and ophthalmic artery to the orbital cavity. Behind the optic foramen the anterior clinoid process is directed backward and medialward and gives attachment to the cerebellar tentorium . Behind the tuberculum sellae is a deep depression, the sella turcica, containing the fossa hypophyseos, which lodges the pituitary gland, and presents on its anterior wall the middle clinoid processes. The sella turcica is bounded posteriorly by a quadrilateral plate of bone, the dorsum sellae, the upper angles of which are surmounted by the posterior clinoid processes: these afford attachment to the cerebellar tentorium, and below each is a notch for the abducent nerve. On either side of the sella turcica is the carotid groove, which is broad, shallow, and curved somewhat like the italic letter f. It begins behind at the foramen lacerum, and ends on the medial side of the anterior clinoid process, where it is sometimes converted into a foramen (ca Document 4::: In anatomy and zoology, the cortex (: cortices) is the outermost (or superficial) layer of an organ. Organs with well-defined cortical layers include kidneys, adrenal glands, ovaries, the thymus, and portions of the brain, including the cerebral cortex, the best-known of all cortices. Etymology The word is of Latin origin and means bark, rind, shell or husk. Notable examples The renal cortex, between the renal capsule and the renal medulla; assists in ultrafiltration The adrenal cortex, situated along the perimeter of the adrenal gland; mediates the stress response through the production of various hormones The thymic cortex, mainly composed of lymphocytes; functions as a site for somatic recombination of T cell receptors, and positive selection The cerebral cortex, the outer layer of the cerebrum, plays a key role in memory, attention, perceptual awareness, thought, language, and consciousness. Cortical bone is the hard outer layer of bone; distinct from the spongy, inner cancellous bone tissue Ovarian cortex is the outer layer of the ovary and contains the follicles. The lymph node cortex is the outer layer of the lymph node. Cerebral cortex The cerebral cortex is typically described as comprising three parts: the sensory, motor, and association areas. These sensory areas receive and process information from the senses. The senses of vision, audition, and touch are served by the primary visual cortex, the primary auditory cortex, and primary somatosensory cortex. 
The cerebellar cortex is the thin gray surface layer of the cerebellum, consisting of an outer molecular layer or stratum moleculare, a single layer of Purkinje cells (the ganglionic layer), and an inner granular layer or stratum granulosum. The cortex is the outer surface of the cerebrum and is composed of gray matter. The motor areas are located in both hemispheres of the cerebral cortex. Two areas of the cortex are commonly referred to as motor: the primary motor cortex, which executes v The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. A crucial function of the cranial nerves is to keep visual stimuli centered on the fovea of what eye structure? A. retina B. pupil C. sclera D. iris Answer:
sciq-7067
multiple_choice
What instruments used in guidance systems to indicate directions in space must have an angular momentum that does not change in direction?
[ "gyroscopes", "actuators", "magnets", "elevators" ]
A
Relavent Documents: Document 0::: An aircraft in flight is free to rotate in three dimensions: yaw, nose left or right about an axis running up and down; pitch, nose up or down about an axis running from wing to wing; and roll, rotation about an axis running from nose to tail. The axes are alternatively designated as vertical, lateral (or transverse), and longitudinal respectively. These axes move with the vehicle and rotate relative to the Earth along with the craft. These definitions were analogously applied to spacecraft when the first crewed spacecraft were designed in the late 1950s. These rotations are produced by torques (or moments) about the principal axes. On an aircraft, these are intentionally produced by means of moving control surfaces, which vary the distribution of the net aerodynamic force about the vehicle's center of gravity. Elevators (moving flaps on the horizontal tail) produce pitch, a rudder on the vertical tail produces yaw, and ailerons (flaps on the wings that move in opposing directions) produce roll. On a spacecraft, the movements are usually produced by a reaction control system consisting of small rocket thrusters used to apply asymmetrical thrust on the vehicle. Principal axes Normal axis, or yaw axis — an axis drawn from top to bottom, and perpendicular to the other two axes, parallel to the fuselage station. Transverse axis, lateral axis, or pitch axis — an axis running from the pilot's left to right in piloted aircraft, and parallel to the wings of a winged aircraft, parallel to the buttock line. Longitudinal axis, or roll axis — an axis drawn through the body of the vehicle from tail to nose in the normal direction of flight, or the direction the pilot faces, similar to a ship's waterline. Normally, these axes are represented by the letters X, Y and Z in order to compare them with some reference frame, usually named x, y, z. Normally, this is made in such a way that the X is used for the longitudinal axis, but there are other possibilities to do it. Vertical Document 1::: The Euler angles are three angles introduced by Leonhard Euler to describe the orientation of a rigid body with respect to a fixed coordinate system. They can also represent the orientation of a mobile frame of reference in physics or the orientation of a general basis in 3-dimensional linear algebra. Classic Euler angles usually take the inclination angle in such a way that zero degrees represent the vertical orientation. Alternative forms were later introduced by Peter Guthrie Tait and George H. Bryan intended for use in aeronautics and engineering in which zero degrees represent the horizontal position. Chained rotations equivalence Euler angles can be defined by elemental geometry or by composition of rotations. The geometrical definition demonstrates that three composed elemental rotations (rotations about the axes of a coordinate system) are always sufficient to reach any target frame. The three elemental rotations may be extrinsic (rotations about the axes xyz of the original coordinate system, which is assumed to remain motionless), or intrinsic (rotations about the axes of the rotating coordinate system XYZ, solidary with the moving body, which changes its orientation with respect to the extrinsic frame after each elemental rotation). In the sections below, an axis designation with a prime mark superscript (e.g., z″) denotes the new axis after an elemental rotation. Euler angles are typically denoted as α, β, γ, or ψ, θ, φ. 
Different authors may use different sets of rotation axes to define Euler angles, or different names for the same angles. Therefore, any discussion employing Euler angles should always be preceded by their definition. Without considering the possibility of using two different conventions for the definition of the rotation axes (intrinsic or extrinsic), there exist twelve possible sequences of rotation axes, divided in two groups: Proper Euler angles Tait–Bryan angles . Tait–Bryan angles are also called Cardan angles; nautica Document 2::: In ballistics and flight dynamics, axes conventions are standardized ways of establishing the location and orientation of coordinate axes for use as a frame of reference. Mobile objects are normally tracked from an external frame considered fixed. Other frames can be defined on those mobile objects to deal with relative positions for other objects. Finally, attitudes or orientations can be described by a relationship between the external frame and the one defined over the mobile object. The orientation of a vehicle is normally referred to as attitude. It is described normally by the orientation of a frame fixed in the body relative to a fixed reference frame. The attitude is described by attitude coordinates, and consists of at least three coordinates. While from a geometrical point of view the different methods to describe orientations are defined using only some reference frames, in engineering applications it is important also to describe how these frames are attached to the lab and the body in motion. Due to the special importance of international conventions in air vehicles, several organizations have published standards to be followed. For example, German DIN has published the DIN 9300 norm for aircraft (adopted by ISO as ISO 1151–2:1985). Earth bounded axes conventions World reference frames: ENU and NED Basically, as lab frame or reference frame, there are two kinds of conventions for the frames: East, North, Up (ENU), used in geography North, East, Down (NED), used specially in aerospace This frame referenced w.r.t. Global Reference frames like Earth Center Earth Fixed (ECEF) non-inertial system. World reference frames for attitude description To establish a standard convention to describe attitudes, it is required to establish at least the axes of the reference system and the axes of the rigid body or vehicle. When an ambiguous notation system is used (such as Euler angles) the convention used should also be stated. Nevertheless, most used notatio Document 3::: The angular momentum problem is a problem in astrophysics identified by Leon Mestel in 1965. It was found that the angular momentum of a protoplanetary disk is misappropriated when compared to models during stellar birth. The Sun and other stars are predicted by models to be rotating considerably faster than they actually are. The Sun, for example, only accounts for about 0.3 percent of the total angular momentum of the Solar System while about 60% is attributed to Jupiter. See also History of Solar System formation and evolution hypotheses Document 4::: In astronomy, a transit instrument is a small telescope with extremely precisely graduated mount used for the precise observation of star positions. 
They were previously widely used in astronomical observatories and naval observatories to measure star positions in order to compile nautical almanacs for use by mariners for celestial navigation, and observe star transits to set extremely accurate clocks (astronomical regulators) which were used to set marine chronometers carried on ships to determine longitude, and as primary time standards before atomic clocks. The instruments can be divided into three groups: meridian, zenith, and universal instruments. Types Meridian instruments For observation of star transits in the exact direction of South or North: Meridian circles, Mural quadrants etc. Passage instruments (transportable, also for prime vertical transits) Zenith instruments Zenith telescope Photozenith tube (PZT) zenith cameras Danjon astrolabe, Zeiss Ni2 astrolabe, Circumzenital Universal instruments Allow transit measurements in any direction Theodolite (Describing a theodolite as a transit may refer to the ability to turn the telescope a full rotation on the horizontal axis, which provides a convenient way to reverse the direction of view, or to sight the same object with the yoke in opposite directions, which causes some instrumental errors to cancel.) Altaz telescopes with graduated eyepieces (also for satellite transits) Cinetheodolites Observation techniques and accuracy Depending on the type of instrument, the measurements are carried out visually and manual time registration (stopwatch, Auge-Ohr-Methode, chronograph) visually by impersonal micrometer (moving thread with automatic registration) photographic registration CCD or other electro optic sensors. The accuracy reaches from 0.2" (theodolites, small astrolabes) to 0.01" (modern meridian circles, Danjon). Early instruments (like the mural quadrants of Tycho Brahe) had no te The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What instruments used in guidance systems to indicate directions in space must have an angular momentum that does not change in direction? A. gyroscopes B. actuators C. magnets D. elevators Answer:
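The Euler-angles excerpt above notes that three elemental rotations suffice to reach any target orientation and that the result depends on which of the twelve possible axis sequences is used, which is why the convention must always be stated. The short Python sketch below illustrates that point; numpy, the intrinsic Z-Y'-X'' (yaw-pitch-roll) Tait-Bryan sequence, and the example angles are assumptions chosen for illustration rather than details taken from the excerpt.

# Sketch: compose three elemental rotations into one orientation matrix using
# the intrinsic Z-Y'-X'' (yaw-pitch-roll) Tait-Bryan sequence as an example.
# A different choice among the twelve axis sequences gives a different matrix.
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def tait_bryan_zyx(yaw, pitch, roll):
    # Intrinsic z-y'-x'' rotation: R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    return rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)

if __name__ == "__main__":
    R = tait_bryan_zyx(np.radians(30), np.radians(10), np.radians(5))
    print(np.round(R, 3))        # orientation matrix for these example angles
    print(np.round(R @ R.T, 3))  # ~identity: rotation matrices are orthogonal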
sciq-10203
multiple_choice
What do you call a device that produces a very focused beam of visible light of just one wavelength and color?
[ "optical diffuser", "fusion relay", "light meter", "laser" ]
D
Relavent Documents: Document 0::: The National Center for Optics and Photonics Education, known as OP-TEC for short, was a joint effort by educational institutions and other groups to develop curriculum materials for photonics. Headquartered in Waco, Texas, it was funded by the National Science Foundation. OP-TEC held workshops at various institutions around the United States to promote the use of optics and photonics in secondary and post-secondary curricula. Document 1::: Optical engineering is the field of science and engineering encompassing the physical phenomena and technologies associated with the generation, transmission, manipulation, detection, and utilization of light. Optical engineers use optics to solve problems and to design and build devices that make light do something useful. They design and operate optical equipment that uses the properties of light using physics and chemistry, such as lenses, microscopes, telescopes, lasers, sensors, fiber optic communication systems and optical disc systems (e.g. CD, DVD). Optical engineering metrology uses optical methods to measure either micro-vibrations with instruments like the laser speckle interferometer, or properties of masses with instruments that measure refraction Nano-measuring and nano-positioning machines are devices designed by optical engineers. These machines, for example microphotolithographic steppers, have nanometer precision, and consequently are used in the fabrication of goods at this scale. See also Optical lens design Optical physics Optician Document 2::: In physics, monochromatic radiation is electromagnetic radiation with a single constant frequency. When that frequency is part of the visible spectrum (or near it) the term monochromatic light is often used. Monochromatic light is perceived by the human eye as a spectral color. When monochromatic radiation propagates through vacuum or a homogeneous transparent medium, it has a single constant wavelength. Practical monochromaticity No radiation can be totally monochromatic, since that would require a wave of infinite duration as a consequence of the Fourier transform's localization property (cf. spectral coherence). In practice, "monochromatic" radiation — even from lasers or spectral lines — always consists of components with a range of frequencies of non-zero width. Generation Monochromatic radiation can be produced by a number of methods. Isaac Newton observed that a beam of light from the sun could be spread out by refraction into a fan of light with varying colors; and that if a beam of any particular color was isolated from that fan, it behaved as "pure" light that could not be decomposed further. When atoms of a chemical element in gaseous state are subjected to an electric current, to suitable radiation, or to high enough temperature, they emit a light spectrum with a set of discrete spectral lines (monochromatic components), that are characteristic of the element. This phenomenon is the basis of the science of spectroscopy, and is exploited in fluorescent lamps and the so-called neon signs. A laser is a device that generates monochromatic and coherent radiation through a process of stimulated emission. Properties and uses When monochromatic radiation is made to interfere with itself, the result can be visible and stable interference fringes that can be used to measure very small distances, or large distances with very high accuracy. The current definition of the metre is based on this technique. 
In the technique of spectroscopic analysis, a mat Document 3::: A laser Doppler vibrometer (LDV) is a scientific instrument that is used to make non-contact vibration measurements of a surface. The laser beam from the LDV is directed at the surface of interest, and the vibration amplitude and frequency are extracted from the Doppler shift of the reflected laser beam frequency due to the motion of the surface. The output of an LDV is generally a continuous analog voltage that is directly proportional to the target velocity component along the direction of the laser beam. Some advantages of an LDV over similar measurement devices such as an accelerometer are that the LDV can be directed at targets that are difficult to access, or that may be too small or too hot to attach a physical transducer. Also, the LDV makes the vibration measurement without mass-loading the target, which is especially important for MEMS devices. Principles of operation A vibrometer is generally a two beam laser interferometer that measures the frequency (or phase) difference between an internal reference beam and a test beam. The most common type of laser in an LDV is the helium–neon laser, although laser diodes, fiber lasers, and Nd:YAG lasers are also used. The test beam is directed to the target, and scattered light from the target is collected and interfered with the reference beam on a photodetector, typically a photodiode. Most commercial vibrometers work in a heterodyne regime by adding a known frequency shift (typically 30–40 MHz) to one of the beams. This frequency shift is usually generated by a Bragg cell, or acousto-optic modulator. A schematic of a typical laser vibrometer is shown above. The beam from the laser, which has a frequency fo, is divided into a reference beam and a test beam with a beamsplitter. The test beam then passes through the Bragg cell, which adds a frequency shift fb. This frequency shifted beam then is directed to the target. The motion of the target adds a Doppler shift to the beam given by fd = 2*v(t)*cos(α)/λ, where Document 4::: Laser medicine is the use of lasers in medical diagnosis, treatments, or therapies, such as laser photodynamic therapy, photorejuvenation, and laser surgery. The word laser stands for "light amplification by stimulated emission of radiation". History The laser was invented in 1960 by Theodore Maiman, and its potential uses in medicine were subsequently explored. Lasers benefit from three interesting characteristics: directivity (multiple directional functions), impulse (possibility of operating in very short pulses), and monochromaticity. Several medical applications were found for this new instrument. In 1961, just one year after the laser's invention, Dr. Charles J. Campbell successfully used a ruby laser to destroy an angiomatous retinal tumor with a single pulse. In 1963, Dr. Leon Goldman used the ruby laser to treat pigmented skin cells and reported on his findings. The argon-ionized laser (wavelength: 488–514 nm) has since become the preferred laser for the treatment of retinal detachment. The carbon dioxide laser was developed by Kumar Patel and others in the early 1960s and is now a common and versatile tool not only for medicinal purposes but also for welding and drilling, among other uses. The possibility of using optical fiber (over a short distance in the operating room) since 1970 has opened many laser applications, in particular endocavitary, thanks to the possibility of introducing the fiber into the channel of an endoscope. 
During this time, the argon laser began to be used in gastroenterology and pneumology. Dr. Peter Kiefhaber was the first to "successfully perform endoscopic argon laser photocoagulation for gastrointestinal bleeding in humans". Kiefhaber is also considered a pioneer in using the Nd:YAG laser in medicine, also using it to control gastrointestinal bleeding. In 1976, Dr. Hofstetter employed lasers for the first time in urology. The late 1970s saw the rise of photodynamic therapy, thanks to laser dye. (Dougherty, 1972) Since The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do you call a device that produces a very focused beam of visible light of just one wavelength and color? A. optical diffuser B. fusion relay C. light meter D. laser Answer:
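The laser Doppler vibrometer excerpt above gives the Doppler shift of the reflected beam as fd = 2*v(t)*cos(α)/λ. The short Python sketch below evaluates that relation; the 633 nm helium-neon wavelength and the example target velocity are illustrative assumptions, not values taken from the excerpt.

# Sketch: evaluate the Doppler-shift relation f_d = 2 * v(t) * cos(alpha) / lambda
# quoted in the laser Doppler vibrometer excerpt above.
import math

def doppler_shift_hz(velocity_m_s: float, wavelength_m: float, alpha_rad: float = 0.0) -> float:
    # Frequency shift of the test beam due to target motion along the beam direction.
    return 2.0 * velocity_m_s * math.cos(alpha_rad) / wavelength_m

if __name__ == "__main__":
    wavelength = 633e-9   # m, typical helium-neon laser line (assumed example value)
    velocity = 0.01       # m/s, example target vibration velocity (assumed)
    print(f"Doppler shift: {doppler_shift_hz(velocity, wavelength):.3e} Hz")
    # ~3.2e4 Hz, small compared with the 30-40 MHz Bragg-cell offset mentioned above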
sciq-2691
multiple_choice
What is the cutting and burning trees to clear land for farming called?
[ "slash-and-burn agriculture", "drop-and-blaze agriculture", "cut-and-smoke farming", "reduce-and-ignite agriculture" ]
A
Relavent Documents: Document 0::: Arboriculture () is the cultivation, management, and study of individual trees, shrubs, vines, and other perennial woody plants. The science of arboriculture studies how these plants grow and respond to cultural practices and to their environment. The practice of arboriculture includes cultural techniques such as selection, planting, training, fertilization, pest and pathogen control, pruning, shaping, and removal. Overview A person who practices or studies arboriculture can be termed an arborist or an arboriculturist. A tree surgeon is more typically someone who is trained in the physical maintenance and manipulation of trees and therefore more a part of the arboriculture process rather than an arborist. Risk management, legal issues, and aesthetic considerations have come to play prominent roles in the practice of arboriculture. Businesses often need to hire arboriculturists to complete "tree hazard surveys" and generally manage the trees on-site to fulfill occupational safety and health obligations. Arboriculture is primarily focused on individual woody plants and trees maintained for permanent landscape and amenity purposes, usually in gardens, parks or other populated settings, by arborists, for the enjoyment, protection, and benefit of people. Arboricultural matters are also considered to be within the practice of urban forestry yet the clear and separate divisions are not distinct or discreet. Tree Benefits Tree benefits are the economic, ecological, social and aesthetic use, function purpose, or services of a tree (or group of trees), in its situational context in the landscape. Environmental tree benefits Erosion control and soil retention Improved water infiltration and percolation Protection from exposure: windbreak, shade, impact from hail/rainfall Humidification of the air Food for decomposers, consumers, and pollinators Soil health: organic matter accumulation from leaf litter and root exudates (symbiotic microbes) Ecological habitat Mod Document 1::: A controlled or prescribed (Rx) burn, which can include hazard reduction burning, backfire, swailing or a burn-off, is a fire set intentionally for purposes of forest management, fire suppression, farming, prairie restoration or greenhouse gas abatement. A controlled burn may also refer to the intentional burning of slash and fuels through burn piles. Fire is a natural part of both forest and grassland ecology and controlled fire can be a tool for foresters. Hazard reduction or controlled burning is conducted during the cooler months to reduce fuel buildup and decrease the likelihood of serious hotter fires. Controlled burning stimulates the germination of some desirable forest trees, and reveals soil mineral layers which increases seedling vitality, thus renewing the forest. Some cones, such as those of lodgepole pine, sequoia and many chaparral shrubs are pyriscent, meaning heat from fire opens cones to disperse seeds. In industrialized countries, controlled burning is usually overseen by fire control authorities for regulations and permits. History There are two basic causes of wildfires. One is natural, mainly through lightning, and the other is human activity. Controlled burns have a long history in wildland management. Pre-agricultural societies used fire to regulate both plant and animal life. Fire history studies have documented periodic wildland fires ignited by indigenous peoples in North America and Australia. 
Native Americans frequently used fire to manage natural environments in a way that benefited humans and wildlife, starting low-intensity fires that released nutrients for plants, reduced competition, and consumed excess flammable material that otherwise would eventually fuel high-intensity, catastrophic fires. Fires, both naturally caused and prescribed, were once part of natural landscapes in many areas. In the US, these practices ended in the early 20th century, when federal fire policies were enacted with the goal of suppressing all fires. S Document 2::: Assisted migration is "the intentional establishment of populations or meta-populations beyond the boundary of a species' historic range for the purpose of tracking suitable habitats through a period of changing climate...." It is therefore a nature conservation tactic by which plants or animals are intentionally moved to geographic locations better suited to their present or future habitat needs and climate tolerances — and to which they are unable to migrate or disperse on their own. In conservation biology, the term first appeared in publications in 2004. It signified a type of species translocation intended to reduce biodiversity losses owing to climate change. In the context of endangered species management, assisted colonization (2007) and managed relocation (2009) were soon offered as synonyms — the latter in a paper entailing 22 coauthors. In forestry science and management, assisted migration is discussed in its own journals and from perspectives different from those of conservation biologists. This is, in part, because paleoecologists had already concluded that there were significant lags in northward movement of even the dominant canopy trees in North America during the thousands of years since the final glacial retreat. In the 1990s, forestry researchers had begun applying climate change projections to their own tree species distribution modelling efforts, and some results on the probable distances of future range shifts prompted attention. As well, translocation terminology was not controversial among forestry researchers because migration was the standard term used in paleoecology for natural movements of tree species recorded in the geological record. Another key difference between forestry practices and conservation biology is that the former, necessarily, was guided by "seed transfer guidelines" whenever a timber or pulp harvest was followed up by reforestation plantings. The provincial government of British Columbia in Canada was the first to upd Document 3::: The Eastern Agricultural Complex in the woodlands of eastern North America was one of about 10 independent centers of plant domestication in the pre-historic world. Incipient agriculture dates back to about 5300 BCE. By about 1800 BCE the Native Americans of the woodlands were cultivating several species of food plants, thus beginning a transition from a hunter-gatherer economy to agriculture. After 200 BCE when maize from Mexico was introduced to the Eastern Woodlands, the Native Americans of the eastern United States and adjacent Canada slowly changed from growing local indigenous plants to a maize-based agricultural economy. The cultivation of local indigenous plants other than squash and sunflower declined and was eventually abandoned. The formerly domesticated plants returned to their wild forms. 
The first four plants known to have been domesticated at the Riverton Site in Illinois in 1800 BCE were goosefoot (Chenopodium berlandieri), sunflower (Helianthus annuus var. macrocarpus), marsh elder (Iva annua var. macrocarpa), and squash (Cucurbita pepo ssp. ovifera). Several other species of plants were later domesticated. Origin of name and concept The term Eastern Agricultural Complex (EAC) was popularized by anthropologist Ralph Linton in the 1940s. Linton suggested that the Eastern Woodland tribes integrated maize cultivation from Mayans and Aztecs in Mexico into their own pre-existing agricultural subsistence practices. Ethnobotanists Volney H. Jones and Melvin R. Gilmore built upon Ralph Linton's understanding of Eastern Woodland agriculture with their work in cave and bluff dwellings in Kentucky and the Ozark Mountains in Arkansas. George Quimby also popularized the term "Eastern complex" in the 1940s. Authors Guy Gibbons and Kenneth Ames suggested that "indigenous seed crops" is a more appropriate term than "complex". Until the 1970s and 1980s most archaeologists believed that agriculture by Eastern Woodland peoples had been imported from Mexico, along Document 4::: Weed science is a scientific discipline concerned with plants that may be considered weeds, their effects on human activities, and their management "a branch of applied ecology that attempts to modify the environment against natural evolutionary trends.". History Weeds have existed since humans began settled agriculture have existed since the advent of settled agriculture around 10,000 years ago it has been suggested that the most common characteristic of the ancestors of our presently dominant crop plants is their willingness—their tendency to be successful, to thrive, in disturbed habitats, mostly those around human dwellings. Farmers have likely always been aware of weeds in their crops, although the evidence for their awareness and concern is nearly all anecdotal. Unlike other agricultural sciences like entomology or plant pathology, the emergence of weed science is comparatively recent, occurring largely within the 20th century and coinciding with the development of herbicides. Weeds are controlled in much of the world by hand (roguing) or with crude hoes. The size of a farmer's holding and yield per unit area are limited by several things and paramount among them is the rapidity with which a family can weed its crops. More human labor may be expended to weed crops than on any other single human enterprise, and most of that labor is expended by women. Weed control in the Western world and other developed areas of the world is done by sophisticated machines and by substituting chemical energy (herbicides) for mechanical and human energy. There is a relationship between the way farmers control weeds and the ability of a nation to feed its people. Successful weed management is one of the essential ingredients to maintain and increase food production. In 1923, Clark and Fletcher suggested that the "annual losses due to the occurrence of pernicious weeds on farm land in Canada, although acknowledged in a general way, are far greater than is realized." They The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the cutting and burning trees to clear land for farming called? A. slash-and-burn agriculture B. drop-and-blaze agriculture C. cut-and-smoke farming D. reduce-and-ignite agriculture Answer:
sciq-8990
multiple_choice
What property of carbon and other elements can be used to date fossils and rocks, among other things?
[ "mass", "full-life", "half-life", "magnetic force" ]
C
Relavent Documents: Document 0::: Isotope analysis has many applications in archaeology, from dating sites and artefacts, determination of past diets and migration patterns and for environmental reconstruction. Information is determined by assessing the ratio of different isotopes of a particular element in a sample. The most widely studied and used isotopes in archaeology are carbon, oxygen, nitrogen, strontium and calcium. An isotope is an atom of an element with an abnormal number of neutrons, changing their atomic mass. Isotopes can be subdivided into stable and unstable or radioactive. Unstable isotopes decay at a predictable rate over time. The first stable isotope was discovered in 1913, and most were identified by the 1930’s. Archaeology was relatively slow to adopt the study of isotopes. Whereas chemistry, biology and physics, saw a rapid uptake in applications of isotope analysis in the 1950’s and 60’s, following the commercialisation of the mass spectrometer. It wasn't until the 1970’s, with the publication of works by Vogel and Van Der Merwe (1977) and DeNiro and Epstein (1978; 1981)  that isotopic analysis became a mainstay of archaeological study. Isotopes Carbon Carbon is present in all biological material including skeletal remains, charcoal and food residues and plays an integral role in the dating of materials, through radiocarbon dating. The ratio of different carbon isotopes naturally fluctuates over time, and, by analysing the composition of carbon dioxide (CO2) in ancient air bubbles trapped in ice cores, a chronological record of these fluctuations can be constructed. Primary producers (such as grasses) absorb and sequester CO2 during photosynthesis, these plants are then eaten by consumers (such as cows, and later humans) which inherit this same CO2 signature. Therefore, by matching the carbon isotope ratios from a sample to ratios from the ice core record, the sample can be assigned to a broad period. After death, an organism no longer absorbs CO2, 14C's instability Document 1::: Radiocarbon is a scientific journal devoted to the topic of radiocarbon dating. It was founded in 1959 as a supplement to the American Journal of Science, and is an important source of data and information about radiocarbon dating. It publishes many radiocarbon results, and since 1979 it has published the proceedings of the international conferences on radiocarbon dating. The journal is published six times per year. it is published by Cambridge University Press. See also Carbon-14 Document 2::: Dendroarchaeology is a term used for the study of vegetation remains, old buildings, artifacts, furniture, art and musical instruments using the techniques of dendrochronology (tree-ring dating). It refers to dendrochronological research of wood from the past regardless of its current physical context (in or above the soil). This form of dating is the most accurate and precise absolute dating method available to archaeologists, as the last ring that grew is the first year the tree could have been incorporated into an archaeological structure. Tree-ring dating is useful in that it can contribute to chronometric, environmental, and behavioral archaeological research. The utility of tree-ring dating in an environmental sense is the most applicable of the three in today's world. 
Tree rings can be used to reconstruct numerous environmental variables such as temperature, precipitation, stream flow, drought society, fire frequency and intensity, insect infestation, atmospheric circulation patterns, among others. History At the beginning of the twentieth century, astronomer Andrew Ellicott Douglass first applied tree ring dating to prehistoric North American artifacts.  Through applying dendrochronology (tree-ring dating), Douglass hoped for more expansive climate studies. Douglass theorized organic materials (trees and plant remains) could assist in visualizing past climates. Despite Dr. Douglass’s contributions, archaeology as a discipline did not begin applying tree-ring dating until 1970s with Dr. Edward Cook and Dr. Gordon Jacoby. In 1929, American Southwestern archaeologists had charted a non continuous historic and prehistoric chronologies for the Chaco Canyon Region. Tree ring laboratory scientists from Columbia University were some of the first to apply tree-ring dating to the colonial period, specifically architectural timbers in the eastern United States. For agencies like the National Park Service and other historical societies, Dr. Jacoby and Cook began dat Document 3::: Chronology (from Latin chronologia, from Ancient Greek , chrónos, "time"; and , -logia) is the science of arranging events in their order of occurrence in time. Consider, for example, the use of a timeline or sequence of events. It is also "the determination of the actual temporal sequence of past events". Chronology is a part of periodization. It is also a part of the discipline of history including earth history, the earth sciences, and study of the geologic time scale. Related fields Chronology is the science of locating historical events in time. It relies upon chronometry, which is also known as timekeeping, and historiography, which examines the writing of history and the use of historical methods. Radiocarbon dating estimates the age of formerly living things by measuring the proportion of carbon-14 isotope in their carbon content. Dendrochronology estimates the age of trees by correlation of the various growth rings in their wood to known year-by-year reference sequences in the region to reflect year-to-year climatic variation. Dendrochronology is used in turn as a calibration reference for radiocarbon dating curves. Calendar and era The familiar terms calendar and era (within the meaning of a coherent system of numbered calendar years) concern two complementary fundamental concepts of chronology. For example, during eight centuries the calendar belonging to the Christian era, which era was taken in use in the 8th century by Bede, was the Julian calendar, but after the year 1582 it was the Gregorian calendar. Dionysius Exiguus (about the year 500) was the founder of that era, which is nowadays the most widespread dating system on earth. An epoch is the date (year usually) when an era begins. Ab Urbe condita era Ab Urbe condita is Latin for "from the founding of the City (Rome)", traditionally set in 753 BC. It was used to identify the Roman year by a few Roman historians. Modern historians use it much more frequently than the Romans themselves did; the Document 4::: A biosignature (sometimes called chemical fossil or molecular fossil) is any substance – such as an element, isotope, molecule, or phenomenon that provides scientific evidence of past or present life. 
Measurable attributes of life include its complex physical or chemical structures and its use of free energy and the production of biomass and wastes. A biosignature can provide evidence for living organisms outside the Earth and can be directly or indirectly detected by searching for their unique byproducts. Types In general, biosignatures can be grouped into ten broad categories: Isotope patterns: Isotopic evidence or patterns that require biological processes. Chemistry: Chemical features that require biological activity. Organic matter: Organics formed by biological processes. Minerals: Minerals or biomineral-phases whose composition and/or morphology indicate biological activity (e.g., biomagnetite). Microscopic structures and textures: Biologically formed cements, microtextures, microfossils, and films. Macroscopic physical structures and textures: Structures that indicate microbial ecosystems, biofilms (e.g., stromatolites), or fossils of larger organisms. Temporal variability: Variations in time of atmospheric gases, reflectivity, or macroscopic appearance that indicates life's presence. Surface reflectance features: Large-scale reflectance features due to biological pigments could be detected remotely. Atmospheric gases: Gases formed by metabolic and/or aqueous processes, which may be present on a planet-wide scale. Technosignatures: Signatures that indicate a technologically advanced civilization. Viability Determining whether a potential biosignature is worth investigating is a fundamentally complicated process. Scientists must consider any and every possible alternate explanation before concluding that something is a true biosignature. This includes investigating the minute details that make other planets unique and understanding when there is a deviat The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What property of carbon and other elements can be used to date fossils and rocks, among other things? A. mass B. full-life C. half-life D. magnetic force Answer:
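The excerpts above describe dating material from the decaying fraction of an unstable isotope such as carbon-14. The short Python sketch below shows the underlying half-life arithmetic; the commonly quoted 5,730-year half-life of carbon-14 and the example fractions are assumptions used for illustration, not figures taken from the excerpts.

# Sketch: half-life bookkeeping behind radiocarbon dating. After death an organism
# stops taking up 14C, so the remaining fraction falls as (1/2)**(t / t_half).
import math

T_HALF_C14_YEARS = 5730.0  # commonly quoted 14C half-life; assumed, not from the source

def remaining_fraction(age_years: float, t_half: float = T_HALF_C14_YEARS) -> float:
    return 0.5 ** (age_years / t_half)

def age_from_fraction(fraction: float, t_half: float = T_HALF_C14_YEARS) -> float:
    # Invert fraction = (1/2)**(t / t_half)  ->  t = t_half * log2(1 / fraction)
    return t_half * math.log2(1.0 / fraction)

if __name__ == "__main__":
    print(round(remaining_fraction(11460), 3))  # two half-lives -> 0.25
    print(round(age_from_fraction(0.25)))       # -> 11460 years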
sciq-5417
multiple_choice
All living things maintain a stable internal environment through what process?
[ "alertness", "consciousness", "homeostasis", "maintenance" ]
C
Relavent Documents: Document 0::: This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines. Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry) other on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mindneuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, it has provided information on certain diseases which has overall aided in the understanding of human health. Basic life science branches Biology – scientific study of life Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans Astrobiology – the study of the formation and presence of life in the universe Bacteriology – study of bacteria Biotechnology – study of combination of both the living organism and technology Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge Biolinguistics – the study of the biology and evolution of language. Biological anthropology – the study of humans, non-hum Document 1::: Controlled (or closed) ecological life-support systems (acronym CELSS) are a self-supporting life support system for space stations and colonies typically through controlled closed ecological systems, such as the BioHome, BIOS-3, Biosphere 2, Mars Desert Research Station, and Yuegong-1. Original concept CELSS was first pioneered by the Soviet Union during the famed "Space Race" in the 1950s–60s. Originated by Konstantin Tsiolkovsky and furthered by V.I. Vernadsky, the first forays into this science were the use of closed, unmanned ecosystems, expanding into the research facility known as the BIOS-3. Then in 1965, manned experiments began in the BIOS-3. Rationale Human presence in space, thus far, has been limited to our own Earth–Moon system. Also, everything that astronauts would need in the way of life support (air, water, and food) has been brought with them. This may be economical for short missions of spacecraft, but it is not the most viable solution when dealing with the life support systems of a long-term craft (such as a generation ship) or a settlement. The aim of CELSS is to create a regenerative environment that can support and maintain human life via agricultural means. Components of CELSS Air revitalization In non-CELSS environments, air replenishment and processing typically consists of stored air tanks and scrubbers. 
The drawback to this method lies in the fact that upon depletion the tanks would have to be refilled; the scrubbers would also require replacement after they become ineffective. There is also the issue of processing toxic fumes, which come from the synthetic materials used in the construction of habitats. Therefore, the issue of how air quality is maintained requires attention; in experiments, it was found that the plants also removed volatile organic compounds offgassed by synthetic materials used thus far to build and maintain all man-made habitats. In CELSS, air is initially supplied by external supply, but is maintained by Document 2::: The Seven Pillars of Life are the essential principles of life described by Daniel E. Koshland in 2002 in order to create a universal definition of life. One stated goal of this universal definition is to aid in understanding and identifying artificial and extraterrestrial life. The seven pillars are Program, Improvisation, Compartmentalization, Energy, Regeneration, Adaptability, and Seclusion. These can be abbreviated as PICERAS. The Seven Pillars Program Koshland defines "Program" as an "organized plan that describes both the ingredients themselves and the kinetics of the interactions among ingredients as the living system persists through time." In natural life as it is known on Earth, the program operates through the mechanisms of nucleic acids and amino acids, but the concept of program can apply to other imagined or undiscovered mechanisms. Improvisation "Improvisation" refers to the living system's ability to change its program in response to the larger environment in which it exists. An example of improvisation on earth is natural selection. Compartmentalization "Compartmentalization" refers to the separation of spaces in the living system that allow for separate environments for necessary chemical processes. Compartmentalization is necessary to protect the concentration of the ingredients for a reaction from outside environments. Energy Because living systems involve net movement in terms of chemical movement or body movement, and lose energy in those movements through entropy, energy is required for a living system to exist. The main source of energy on Earth is the sun, but other sources of energy exist for life on Earth, such as hydrogen gas or methane, used in chemosynthesis. Regeneration "Regeneration" in a living system refers to the general compensation for losses and degradation in the various components and processes in the system. This covers the thermodynamic loss in chemical reactions, the wear and tear of larger parts, and the large Document 3::: Ecological competence is a term that has several different meanings that are dependent on the context it is used. The term "Ecological competence" can be used in a microbial sense, and it can be used in a sociological sense. Microbiology Ecological competence is the ability of an organism, often a pathogen, to survive and compete in new habitats. In the case of plant pathogens, it is also their ability to survive between growing seasons. For example, peanut clump virus can survive in the spores of its fungal vector until a new growing season begins and it can proceed to infect its primary host again. If a pathogen does not have ecological competence it is likely to become extinct. Bacteria and other pathogens can increase their ecological competence by creating a micro-niche, or a highly specialized environment that only they can survive in. This in turn will increase plasmid stability. 
Increased plasmid stability leads to a higher ecological competence due to added spatial organization and regulated cell protection. Sociology Ecological competence in a sociological sense is based around the relationship that humans have formed with the environment. It is often important in certain careers that will have a drastic impact on the surrounding ecosystem. A specific example is engineers working around and planning mining operations, due to the possible negative effects it can have on the surrounding environment. Ecological competence is especially important at the managerial level so that managers may understand society's risk to nature. These risks are learned through specific ecological knowledge so that the environment can be better protected in the future. See also Cultural ecology Environmental education Sustainable development Ecological relationship Document 4::: Earth system science (ESS) is the application of systems science to the Earth. In particular, it considers interactions and 'feedbacks', through material and energy fluxes, between the Earth's sub-systems' cycles, processes and "spheres"—atmosphere, hydrosphere, cryosphere, geosphere, pedosphere, lithosphere, biosphere, and even the magnetosphere—as well as the impact of human societies on these components. At its broadest scale, Earth system science brings together researchers across both the natural and social sciences, from fields including ecology, economics, geography, geology, glaciology, meteorology, oceanography, climatology, paleontology, sociology, and space science. Like the broader subject of systems science, Earth system science assumes a holistic view of the dynamic interaction between the Earth's spheres and their many constituent subsystems fluxes and processes, the resulting spatial organization and time evolution of these systems, and their variability, stability and instability. Subsets of Earth System science include systems geology and systems ecology, and many aspects of Earth System science are fundamental to the subjects of physical geography and climate science. Definition The Science Education Resource Center, Carleton College, offers the following description: "Earth System science embraces chemistry, physics, biology, mathematics and applied sciences in transcending disciplinary boundaries to treat the Earth as an integrated system. It seeks a deeper understanding of the physical, chemical, biological and human interactions that determine the past, current and future states of the Earth. Earth System science provides a physical basis for understanding the world in which we live and upon which humankind seeks to achieve sustainability". Earth System science has articulated four overarching, definitive and critically important features of the Earth System, which include: Variability: Many of the Earth System's natural 'modes' and variab The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. All living things maintain a stable internal environment through what process? A. alertness B. consciousness C. homeostasis D. maintenance Answer:
sciq-4076
multiple_choice
What is the mass of threadlike filaments on the body of a fungus called?
[ "hyphae", "cuticle", "cytoplasm", "dendrites" ]
A
Relavent Documents: Document 0::: Plectenchyma (from Greek πλέκω pleko 'I weave' and ἔγχυμα enchyma 'infusion', i.e., 'a woven tissue') is the general term employed to designate all types of fungal tissues. The two most common types of tissues are prosenchyma and pseudoparenchyma. The hyphae specifically become fused together. Notes Fungal morphology and anatomy Document 1::: Pycniospores are a type of spore found in certain species of rust fungi. They are produced in special cup-like structures called pycnia or pynidia. Almost all fungi reproduce asexually with the production of spores. Spores may be colorless, green, yellow, orange, red, brown or black. Other types of spore Sporangiospores Sporangiospores (spore:spore, angion:sac) are spores formed inside the sporangium which is a spore sac. Conidia Conidia (singular: conidium) are spores produced at the tip of special branches called conidiophores. Oidia Oidia (singular: oidium). In several fungi, the hyphae is often divided into a large number of short pieces by transverse walls. Each piece is able to germinate into a new body. These pieces are called oidia (small egg). Chlamydospores Chlamydospores (chlymus: mantle) are produced like oidia but differ from oidia in being thick walled. They are either terminal or intercalary. Document 2::: A hypha (; : hyphae) is a long, branching, filamentous structure of a fungus, oomycete, or actinobacterium. In most fungi, hyphae are the main mode of vegetative growth, and are collectively called a mycelium. Structure A hypha consists of one or more cells surrounded by a tubular cell wall. In most fungi, hyphae are divided into cells by internal cross-walls called "septa" (singular septum). Septa are usually perforated by pores large enough for ribosomes, mitochondria, and sometimes nuclei to flow between cells. The major structural polymer in fungal cell walls is typically chitin, in contrast to plants and oomycetes that have cellulosic cell walls. Some fungi have aseptate hyphae, meaning their hyphae are not partitioned by septa. Hyphae have an average diameter of 4–6 µm. Growth Hyphae grow at their tips. During tip growth, cell walls are extended by the external assembly and polymerization of cell wall components, and the internal production of new cell membrane. The Spitzenkörper is an intracellular organelle associated with tip growth. It is composed of an aggregation of membrane-bound vesicles containing cell wall components. The Spitzenkörper is part of the endomembrane system of fungi, holding and releasing vesicles it receives from the Golgi apparatus. These vesicles travel to the cell membrane via the cytoskeleton and release their contents (including various cysteine-rich proteins including cerato-platanins and hydrophobins) outside the cell by the process of exocytosis, where they can then be transported to where they are needed. Vesicle membranes contribute to growth of the cell membrane while their contents form new cell wall. The Spitzenkörper moves along the apex of the hyphal strand and generates apical growth and branching; the apical growth rate of the hyphal strand parallels and is regulated by the movement of the Spitzenkörper. As a hypha extends, septa may be formed behind the growing tip to partition each hypha into individual cells. Document 3::: The Spitzenkörper (German for 'pointed body', SPK) is a structure found in fungal hyphae that is the organizing center for hyphal growth and morphogenesis. 
It consists of many small vesicles and is present in growing hyphal tips, during spore germination, and where branch formation occurs. Its position in the hyphal tip correlates with the direction of hyphal growth. The Spitzenkörper is a part of the endomembrane system in fungi. The vesicles are organized around a central area that contains a dense meshwork of microfilaments. Polysomes are often found close to the posterior boundary of the Spitzenkörper core within the Ascomycota; microtubules extend into and often through the Spitzenkörper, and within the Ascomycota, Woronin bodies are found in the apical region near the Spitzenkörper. The cytoplasm of the extreme apex is occupied almost exclusively by secretory vesicles. In the higher fungi (Ascomycota and Basidiomycota), secretory vesicles are arranged into a dense, spherical aggregation called the Spitzenkörper or ‘apical body’. The Spitzenkörper may be seen in growing hyphae even with a light microscope. Hyphae of the Oomycota and some lower Eumycota (notably the Zygomycota) do not contain a recognizable Spitzenkörper, and the vesicles are instead distributed more loosely, often in a crescent-shaped arrangement beneath the apical plasma membrane. This structure is most commonly found in Dikarya and was at first thought to only occur among them. Vargas et al. (1993), however, were the first to find a Spitzenkörper in another clade, specifically the Allomyces (Blastocladiomycota); subsequently Basidiobolus ranarum, which has been placed in several different phyla, was also found to have an SPK. These and the Blastocladiella (also in Blastocladiomycota) are the only known taxa to bear this structure. Document 4::: A synnema (plural synnemata, also coremia; derivation: "Threads together") is a large, erect reproductive structure borne by some fungi, bearing compact conidiophores, which fuse together to form a strand resembling a stalk of wheat, with conidia at the end or on the edges. Fungal genera which bear synnemata include Doratomyces. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the mass of threadlike filaments on the body of a fungus called? A. hyphae B. cuticle C. cytoplasm D. dendrites Answer:
sciq-8136
multiple_choice
Timber, obtained from trees that can be replanted to replace those that are cut down, is an example of what type of resource?
[ "nonrenewable", "renewable", "fossil fuel", "mineral" ]
B
Relavent Documents: Document 0::: Forest reproductive material is a part of a tree that can be used for reproduction such as seed, cutting or seedling. Artificial regeneration, carried out through seeding or planting, typically involves transferring forest reproductive material to a particular site from other locations while natural regeneration relies on genetic material that is already available on the site. Technical opportunities and challenges to ensure quality and quantity of forest reproductive material can be found in the activities of identification, selection, procurement, propagation, conservation, improvement and sustained production of reproductive material. The use of low quality or poorly adapted forest reproductive material can have very negative impact on the vitality and resilience of a forest. In Europe, much of the material used for artificial regeneration is produced and transferred within a single country. However, forest reproductive material, usually in the form of seeds or cuttings, is increasingly traded across national borders, especially within the European Union. Forest reproductive material and climate changes As a result of climate changes, leading to increasing temperatures, some parts of the current distribution ranges of forest trees are expected to become unsuitable while new areas may become suitable for many species in higher latitudes or altitudes. This will most likely increase the future demand for imported forest reproductive material as forest managers and owners try to identify tree species and provenances that will be able to grow in their land under new climatic conditions. Especially, forest reproductive material with high plasticity will be increasingly useful for this purpose. Document 1::: Wood science, commonly referred to as wood sciences, is a scientific discipline that predominantly investigates elements associated with the formation, composition and macro- and microstructure of wood. It additionally delves into the biological, chemical, physical, and mechanical properties and characteristics of wood, as a natural lignocellulosic material. A deep understanding of wood plays a pivotal role in various endeavors, such as the processing of wood, the production of wood-based materials like particleboard, fiberboard, OSB, plywood and other materials, as well as the utilization of wood and wood-based materials in construction and a wide array of products, including pulpwood, furniture, engineered wood products such as glued laminated timber, CLT, LVL, PSL, as well as pellets, briquettes, and numerous other products. History Initial comprehensive investigations in the field of wood science emerged at the start of the 20th century. The advent of contemporary wood research commenced in 1910, when the Forest Products Laboratory (FPL) was established in Madison, Wisconsin, USA. The Forest Products Laboratory played a fundamental role in wood science providing scientific research on wood and wood products in partnership with academia, industry, local and other institutions in North and South America and worldwide. In the following years, many wood research institutes came into existence across almost all industrialized nations. 
A general overview of these institutes and laboratories is shown below: 1913: Institute of Wood and Pulp Chemistry Eberswalde (today's Eberswalde University for Sustainable Development), Germany 1913: Forest Products Laboratory Montreal, Canada 1918: Forest Products Laboratory Vancouver, Canada 1919: Forest Products Laboratory Melbourne, Australia 1923: Forest Products Research Laboratory, Princes Risborough, Great Britain 1929: Institute for Wood Science and Technology, Leningrant, St. Petersburg, USSR 1933: Centre Technique Document 2::: In forestry, the optimal rotation age is the growth period required to derive maximum value from a stand of timber. The calculation of this period is specific to each stand and to the economic and sustainability goals of the harvester. Economically optimum rotation age In forestry rotation analysis, economically optimum rotation can be defined as “that age of rotation when the harvest of stumpage will generate the maximum revenue or economic yield”. In an economically optimum forest rotation analysis, the decision regarding optimum rotation age is undertake by calculating the maximum net present value. It can be shown as follows: Revenue (R) = Volume × Price Cost (C) = Cost of harvesting + handling. Hence, Profit = Revenue − Cost. Since the benefit is generated over multiple years, it is necessary to calculate that particular age of harvesting which will generate the maximum revenue. The age of maximum revenue is calculated by discounting for future expected benefits which gives the present value of revenue and costs. From this net present value (NPV) of profit is calculated. This can be done as follows: NPV = PVR – PVC Where PVR is the present value of revenue and PVC is the present value of cost. Rotation will be undertaken where NPV is maximum. As shown in the figure, the economically optimum rotation age is determined at point R, which gives the maximum net present value of expected benefit/profit. Rotation at any age before or after R will cause the expected benefit/profit to fall. Biologically optimum rotation age Biologists use the concept of maximum sustainable yield (MSY) or mean annual increment (MAI), to determine the optimal harvest age of timber. MSY can be defined as “the largest yield that can be harvested which does not deplete the resource (timber) irreparably and which leaves the resource in good shape for future uses”. MAI can be defined as “the average annual increase in volume of individual trees or stands up to the specified point in t Document 3::: Biorefining is the process of "building" multiple products from biomass as a feedstock or raw material much like a petroleum refinery that is currently in use. The process of biorefining can be characterized as the sustainable processing of biomass, which eventually yields: biobased products, such as food, feed, chemicals or other materials, and/or bioenergy, such as biofuels, power or heat. A biorefinery is a facility like a petroleum refinery that comprises the various process steps or unit operations and related equipment to produce various bioproducts including fuels, power, materials and chemicals from biomass. Industrial biorefineries have been identified as the most promising route to the creation of a new domestic biobased industry producing entire spectrum of bioproducts or bio-based products. Biomass has various components such as lignin, cellulose, hemicelluloses, extractives, etc. 
Biorefinery can take advantage of the unique properties of each of biomass components enabling the production of various products. The various bioproducts can include fiber, fuels, chemicals, plastics etc. Processes Biorefining processes can be categorized into four groups: Mechanical Biochemical Chemical Thermochemical See also Biomass Biomass (ecology) Forest Agriculture Biogas Bioenergy Biofuels Biochemicals Bioproducts Document 4::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Timber, obtained from trees that can be replanted to replace those that are cut down, is an example of what type of resource? A. nonrenewable B. renewable C. fossil fuel D. mineral Answer:
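The optimal-rotation document above reduces the harvest decision to maximising net present value, NPV = PVR - PVC, with revenue and cost discounted back to the present and the rotation age chosen where NPV peaks. A minimal sketch of that calculation follows; the growth curve, timber price, harvest cost and discount rate are illustrative assumptions, not figures from the passage.

import math

# Sketch: economically optimum rotation age as the NPV-maximising harvest year.
# The volume curve and all prices/rates below are assumed for illustration only.

def volume_at_age(age):
    # assumed logistic growth of merchantable volume, m^3 per hectare
    return 600.0 / (1.0 + 50.0 * math.exp(-0.12 * age))

def npv_of_rotation(age, price, harvest_cost, r):
    # NPV = PVR - PVC for a single harvest at the given rotation age
    revenue = volume_at_age(age) * price            # Revenue = volume x price
    pvr = revenue / (1.0 + r) ** age                # present value of revenue
    pvc = harvest_cost / (1.0 + r) ** age           # present value of harvesting + handling
    return pvr - pvc

best_age = max(range(10, 81),
               key=lambda a: npv_of_rotation(a, price=40.0, harvest_cost=2000.0, r=0.04))
print("NPV-maximising rotation age (years):", best_age)

Rotation ages shorter or longer than the maximiser give a lower NPV, which is the point R described in the passage.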
sciq-11560
multiple_choice
What is it called when the nucleus of an atom splits into two smaller nuclei?
[ "nuclear fusion", "cell division", "nuclear fission", "complex fission" ]
C
Relavent Documents: Document 0::: Nuclear binding energy in experimental physics is the minimum energy that is required to disassemble the nucleus of an atom into its constituent protons and neutrons, known collectively as nucleons. The binding energy for stable nuclei is always a positive number, as the nucleus must gain energy for the nucleons to move apart from each other. Nucleons are attracted to each other by the strong nuclear force. In theoretical nuclear physics, the nuclear binding energy is considered a negative number. In this context it represents the energy of the nucleus relative to the energy of the constituent nucleons when they are infinitely far apart. Both the experimental and theoretical views are equivalent, with slightly different emphasis on what the binding energy means. The mass of an atomic nucleus is less than the sum of the individual masses of the free constituent protons and neutrons. The difference in mass can be calculated by the Einstein equation, E = mc², where E is the nuclear binding energy, c is the speed of light, and m is the difference in mass. This 'missing mass' is known as the mass defect, and represents the energy that was released when the nucleus was formed. The term "nuclear binding energy" may also refer to the energy balance in processes in which the nucleus splits into fragments composed of more than one nucleon. If new binding energy is available when light nuclei fuse (nuclear fusion), or when heavy nuclei split (nuclear fission), either process can result in release of this binding energy. This energy may be made available as nuclear energy and can be used to produce electricity, as in nuclear power, or in a nuclear weapon. When a large nucleus splits into pieces, excess energy is emitted as gamma rays and the kinetic energy of various ejected particles (nuclear fission products). These nuclear binding energies and forces are on the order of one million times greater than the electron binding energies of light atoms like hydrogen. Introduction Nucl Document 1::: Atomic energy or energy of atoms is energy carried by atoms. The term originated in 1903 when Ernest Rutherford began to speak of the possibility of atomic energy. H. G. Wells popularized the phrase "splitting the atom", before discovery of the atomic nucleus. Atomic energy includes: Nuclear binding energy, the energy required to split a nucleus of an atom. Nuclear potential energy, the potential energy of the particles inside an atomic nucleus. Nuclear reaction, a process in which nuclei or nuclear particles interact, resulting in products different from the initial ones; see also nuclear fission and nuclear fusion. Radioactive decay, the set of various processes by which unstable atomic nuclei (nuclides) emit subatomic particles. The energy of inter-atomic or chemical bonds, which holds atoms together in compounds. Atomic energy is the source of nuclear power, which uses sustained nuclear fission to generate heat and electricity. It is also the source of the explosive force of an atomic bomb. Document 2::: In nuclear physics, separation energy is the energy needed to remove one nucleon (or other specified particle or particles) from an atomic nucleus. The separation energy is different for each nuclide and particle to be removed. Values are stated as "neutron separation energy", "two-neutron separation energy", "proton separation energy", "deuteron separation energy", "alpha separation energy", and so on.
The lowest separation energy among stable nuclides is 1.67 MeV, to remove a neutron from beryllium-9. The energy can be added to the nucleus by an incident high-energy gamma ray. If the energy of the incident photon exceeds the separation energy, a photodisintegration might occur. Energy in excess of the threshold value becomes kinetic energy of the ejected particle. By contrast, nuclear binding energy is the energy needed to completely disassemble a nucleus, or the energy released when a nucleus is assembled from nucleons. It is the sum of multiple separation energies, which should add to the same total regardless of the order of assembly or disassembly. Physics and chemistry Electron separation energy or electron binding energy, the energy required to remove one electron from a neutral atom or molecule (or cation) is called ionization energy. The reaction leads to photoionization, photodissociation, the photoelectric effect, photovoltaics, etc. Bond-dissociation energy is the energy required to break one bond of a molecule or ion, usually separating an atom or atoms. See also Binding energy External links Nucleon separation energies charts of nuclides showing separation energies Binding energy Nuclear physics Document 3::: In nuclear physics and nuclear chemistry, the fission barrier is the activation energy required for a nucleus of an atom to undergo fission. This barrier may also be defined as the minimum amount of energy required to deform the nucleus to the point where it is irretrievably committed to the fission process. The energy to overcome this barrier can come from either neutron bombardment of the nucleus, where the additional energy from the neutron brings the nucleus to an excited state and undergoes deformation, or through spontaneous fission, where the nucleus is already in an excited and deformed state. It is important to note that efforts to understand fission processes are still an ongoing and have been a very difficult problem to solve since fission was first discovered by Lise Meitner, Otto Hahn, and Fritz Strassmann in 1938. While nuclear physicists understand many aspects of the fission process, there is currently no encompassing theoretical framework that gives a satisfactory account of the basic observations. Scission The fission process can be understood when a nucleus with some equilibrium deformation absorbs energy (through neutron capture, for example), becomes excited and deforms to a configuration known as the "transition state" or "saddle point" configuration. As the nucleus deforms, the nuclear Coulomb energy decreases while the nuclear surface energy increases. At the saddle point, the rate of change of the Coulomb energy is equal to the rate of change of the nuclear surface energy. The formation and eventual decay of this transition state nucleus is the rate-determining step in the fission process and corresponds to the passage over an activation energy barrier to the fission reaction. When this occurs, the neck between the nascent fragments disappears and the nucleus divides into two fragments. The point at which this occurs is called the "scission point". Liquid drop model From the description of the beginning of the fission process to the "scis Document 4::: Nuclear fission was discovered in December 1938 by chemists Otto Hahn and Fritz Strassmann and physicists Lise Meitner and Otto Robert Frisch. 
Fission is a nuclear reaction or radioactive decay process in which the nucleus of an atom splits into two or more smaller, lighter nuclei and often other particles. The fission process often produces gamma rays and releases a very large amount of energy, even by the energetic standards of radioactive decay. Scientists already knew about alpha decay and beta decay, but fission assumed great importance because the discovery that a nuclear chain reaction was possible led to the development of nuclear power and nuclear weapons. Hahn was awarded the 1944 Nobel Prize in Chemistry for the discovery of nuclear fission. Hahn and Strassmann at the Kaiser Wilhelm Institute for Chemistry in Berlin bombarded uranium with slow neutrons and discovered that barium had been produced. Hahn suggested a bursting of the nucleus, but he was unsure of what the physical basis for the results were. They reported their findings by mail to Meitner in Sweden, who a few months earlier had fled Nazi Germany. Meitner and her nephew Frisch theorised, and then proved, that the uranium nucleus had been split and published their findings in Nature. Meitner calculated that the energy released by each disintegration was approximately 200 megaelectronvolts, and Frisch observed this. By analogy with the division of biological cells, he named the process "fission". The discovery came after forty years of investigation into the nature and properties of radioactivity and radioactive substances. The discovery of the neutron by James Chadwick in 1932 created a new means of nuclear transmutation. Enrico Fermi and his colleagues in Rome studied the results of bombarding uranium with neutrons, and Fermi concluded that his experiments had created new elements with 93 and 94 protons, which his group dubbed ausenium and hesperium. Fermi won the 1938 Nobel Prize in Physics The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is it called when the nucleus of an atom splits into two smaller nuclei? A. nuclear fusion B. cell division C. nuclear fission D. complex fission Answer:
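The binding-energy document above gives the Einstein relation E = mc² for converting a nucleus's mass defect into energy, and the fission document quotes Meitner's estimate of roughly 200 MeV per disintegration. The sketch below repeats that back-of-the-envelope check for one common fission channel; the atomic masses are approximate textbook values rather than numbers taken from the passages.

# Sketch: energy released in one representative fission channel, E = (delta m) * c^2.
# Atomic masses in unified atomic mass units (u); approximate values, not from the passage.
U235, NEUTRON = 235.043930, 1.008665
BA141, KR92 = 140.914411, 91.926156
U_TO_MEV = 931.494              # energy equivalent of 1 u, in MeV

# n + U-235 -> Ba-141 + Kr-92 + 3 n
mass_before = U235 + NEUTRON
mass_after = BA141 + KR92 + 3 * NEUTRON
delta_m = mass_before - mass_after          # mass defect, in u
energy_mev = delta_m * U_TO_MEV             # E = delta_m * c^2, expressed in MeV

print(f"mass defect   : {delta_m:.6f} u")
print(f"energy release: {energy_mev:.0f} MeV")

The result is on the order of 170 MeV of prompt energy for this channel, consistent with the roughly 200 MeV per fission (including later decays of the fragments) quoted in the passage.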
sciq-5298
multiple_choice
What is the largest mineral group, comprising over 90% of earth's crust?
[ "soils", "oxides", "carbonates", "silicates" ]
D
Relavent Documents: Document 0::: See also List of minerals Document 1::: In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects. Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting. Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete. Study Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the Document 2::: The German Mineralogical Society (Deutsche Mineralogische Gesellschaft, or DMG, in German) is a non-profit German society for the promotion of mineralogy. It has about 1400 members (2021) and belongs to the International Mineralogical Association and the umbrella organization for geosciences. It was founded at the meeting of German natural scientists and physicians in Cologne in 1908 based on a proposal by Friedrich Martin Berwerth at the 1907 meeting in Dresden. The current chairman (2021-2022) is the geochemist Friedhelm von Blanckenburg. 
Organization structure The DMG has the sections: Applied mineralogy: systematics, properties of minerals; Organic, clay mineralogy, gemology Crystallography: Research into the atomic structure and properties of inorganic and organic crystals (structural research, crystal chemistry, crystal physics, crystal growth and growth) Geochemistry: distribution laws, frequency and mobility of chemical elements in the Earth, the seas, the atmosphere and in space (analytical, experimental, theoretical, applied, environmental geochemistry) Petrology and petrophysics: Formation, origin and transformation of rocks; Investigations and syntheses under simulated conditions of the Earth's interior (experimental petrology), structural investigations Besides, the DMG has the working groups Archaeometry and Monument Preservation, Raw Materials Research, Mineralogical Museums and Collections and Mineralogy in Schools and Universities. Awards and prizes The DMG awards prizes Abraham Gottlob Werner Medal in silver and gold Victor Moritz Goldschmidt Prize for young scientists Georg Agricola Medal in applied mineralogy Paul Ramdohr Prize for young scientists Beate Mocek Prize for young female scientists The DMG publishes multiple journals with other societies, including European Journal of Mineralogy along with the Italian and French mineralogical societies, and the magazine Elements, along with 18 other geochemical, cosmochemical and mine Document 3::: Ringwoodite is a high-pressure phase of Mg2SiO4 (magnesium silicate) formed at high temperatures and pressures of the Earth's mantle between depth. It may also contain iron and hydrogen. It is polymorphous with the olivine phase forsterite (a magnesium iron silicate). Ringwoodite is notable for being able to contain hydroxide ions (oxygen and hydrogen atoms bound together) within its structure. In this case two hydroxide ions usually take the place of a magnesium ion and two oxide ions. Combined with evidence of its occurrence deep in the Earth's mantle, this suggests that there is from one to three times the world ocean's equivalent of water in the mantle transition zone from 410 to 660 km deep. This mineral was first identified in the Tenham meteorite in 1969, and is inferred to be present in large quantities in the Earth's mantle. Olivine, wadsleyite, and ringwoodite are polymorphs found in the upper mantle of the earth. At depths greater than about , other minerals, including some with the perovskite structure, are stable. The properties of these minerals determine many of the properties of the mantle. Ringwoodite was named after the Australian earth scientist Ted Ringwood (1930–1993), who studied polymorphic phase transitions in the common mantle minerals olivine and pyroxene at pressures equivalent to depths as great as about 600 km. Characteristics Ringwoodite is polymorphous with forsterite, Mg2SiO4, and has a spinel structure. Spinel group minerals crystallize in the isometric system with an octahedral habit. Olivine is most abundant in the upper mantle, above about ; the olivine polymorphs wadsleyite and ringwoodite are thought to dominate the transition zone of the mantle, a zone present from about 410 to 660 km depth. Ringwoodite is thought to be the most abundant mineral phase in the lower part of Earth's transition zone. The physical and chemical property of this mineral partly determine properties of the mantle at those depths. 
The pressure r Document 4::: Automated mineralogy is a generic term describing a range of analytical solutions, areas of commercial enterprise, and a growing field of scientific research and engineering applications involving largely automated and quantitative analysis of minerals, rocks and man-made materials. Technology Automated mineralogy analytical solutions are characterised by integrating largely automated measurement techniques based on Scanning Electron Microscopy (SEM) and Energy-dispersive X-ray spectroscopy (EDS). Commercially available lab-based solutions include QEMSCAN and Mineral Liberation Analyzer (MLA) from FEI Company, Mineralogic from Zeiss, AZtecMineral from Oxford Instruments, the TIMA (Tescan integrated mineral analyzer) from TESCAN, AMICS from Bruker, and MaipSCAN from Rock Scientific. The first oil & gas wellsite solution was launched jointly by Zeiss and CGG Veritas in 2011 called RoqSCAN. This was followed approximately 6 months later by the release of QEMSCAN Wellsite by FEI Company. More recently in 2016, a ruggedized mine site solution for mining and mineral processing was launched by Zeiss called MinSCAN. Business The business of automated mineralogy is concerned with the commercialisation of the technology and software in terms of development and marketing of integrated solutions. This includes all aspects of: service; maintenance; customer support; R&D; marketing and sales. Customers of automated mineralogy solutions include: laboratory facilities; mine sites, well sites, and research institutions. Applications Automated mineralogy solutions are applied in a variety of fields requiring statistically reliable, quantitative mineralogical information. These include the following sectors: mining; O&G; coal; environmental sciences; forensic geosciences; archaeology;agribusiness; built environment and planetary geology. History of the use of the term The first recorded use of the term automated mineralogy in technical journals can be traced back to seminal pape The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the largest mineral group, comprising over 90% of earth's crust? A. soils B. oxides C. carbonates D. silicates Answer:
sciq-8760
multiple_choice
What type of energy travels in waves across space as well as through matter?
[ "thermal energy", "electromagnetic radiation", "kinetic energy", "mechanical radiation" ]
B
Relavent Documents: Document 0::: This is a list of topics that are included in high school physics curricula or textbooks. Mathematical Background SI Units Scalar (physics) Euclidean vector Motion graphs and derivatives Pythagorean theorem Trigonometry Motion and forces Motion Force Linear motion Linear motion Displacement Speed Velocity Acceleration Center of mass Mass Momentum Newton's laws of motion Work (physics) Free body diagram Rotational motion Angular momentum (Introduction) Angular velocity Centrifugal force Centripetal force Circular motion Tangential velocity Torque Conservation of energy and momentum Energy Conservation of energy Elastic collision Inelastic collision Inertia Moment of inertia Momentum Kinetic energy Potential energy Rotational energy Electricity and magnetism Ampère's circuital law Capacitor Coulomb's law Diode Direct current Electric charge Electric current Alternating current Electric field Electric potential energy Electron Faraday's law of induction Ion Inductor Joule heating Lenz's law Magnetic field Ohm's law Resistor Transistor Transformer Voltage Heat Entropy First law of thermodynamics Heat Heat transfer Second law of thermodynamics Temperature Thermal energy Thermodynamic cycle Volume (thermodynamics) Work (thermodynamics) Waves Wave Longitudinal wave Transverse waves Transverse wave Standing Waves Wavelength Frequency Light Light ray Speed of light Sound Speed of sound Radio waves Harmonic oscillator Hooke's law Reflection Refraction Snell's law Refractive index Total internal reflection Diffraction Interference (wave propagation) Polarization (waves) Vibrating string Doppler effect Gravity Gravitational potential Newton's law of universal gravitation Newtonian constant of gravitation See also Outline of physics Physics education Document 1::: Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering. "Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology. 
Examples of research and development areas Accelerator physics Acoustics Atmospheric physics Biophysics Brain–computer interfacing Chemistry Chemical physics Differentiable programming Artificial intelligence Scientific computing Engineering physics Chemical engineering Electrical engineering Electronics Sensors Transistors Materials science and engineering Metamaterials Nanotechnology Semiconductors Thin films Mechanical engineering Aerospace engineering Astrodynamics Electromagnetic propulsion Fluid mechanics Military engineering Lidar Radar Sonar Stealth technology Nuclear engineering Fission reactors Fusion reactors Optical engineering Photonics Cavity optomechanics Lasers Photonic crystals Geophysics Materials physics Medical physics Health physics Radiation dosimetry Medical imaging Magnetic resonance imaging Radiation therapy Microscopy Scanning probe microscopy Atomic force microscopy Scanning tunneling microscopy Scanning electron microscopy Transmission electron microscopy Nuclear physics Fission Fusion Optical physics Nonlinear optics Quantum optics Plasma physics Quantum technology Quantum computing Quantum cryptography Renewable energy Space physics Spectroscopy See also Applied science Applied mathematics Engineering Engineering Physics High Technology Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: Thermal radiation is electromagnetic radiation generated by the thermal motion of particles in matter. 
Thermal radiation is generated when heat from the movement of charges in the material (electrons and protons in common forms of matter) is converted to electromagnetic radiation. All matter with a temperature greater than absolute zero emits thermal radiation. At room temperature, most of the emission is in the infrared (IR) spectrum. Particle motion results in charge-acceleration or dipole oscillation which produces electromagnetic radiation. Infrared radiation emitted by animals (detectable with an infrared camera) and cosmic microwave background radiation are examples of thermal radiation. If a radiation object meets the physical characteristics of a black body in thermodynamic equilibrium, the radiation is called blackbody radiation. Planck's law describes the spectrum of blackbody radiation, which depends solely on the object's temperature. Wien's displacement law determines the most likely frequency of the emitted radiation, and the Stefan–Boltzmann law gives the radiant intensity. Thermal radiation is also one of the fundamental mechanisms of heat transfer. Overview Thermal radiation is the emission of electromagnetic waves from all matter that has a temperature greater than absolute zero. Thermal radiation reflects the conversion of thermal energy into electromagnetic energy. Thermal energy is the kinetic energy of random movements of atoms and molecules in matter. All matter with a nonzero temperature is composed of particles with kinetic energy. These atoms and molecules are composed of charged particles, i.e., protons and electrons. The kinetic interactions among matter particles result in charge acceleration and dipole oscillation. This results in the electrodynamic generation of coupled electric and magnetic fields, resulting in the emission of photons, radiating energy away from the body. Electromagnetic radiation, including visible light, will pr Document 4::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. 
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of energy travels in waves across space as well as through matter? A. thermal energy B. electromagnetic radiation C. kinetic energy D. mechanical radiation Answer:
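The thermal-radiation document above names Wien's displacement law and the Stefan–Boltzmann law as the rules that fix, respectively, the most likely emission wavelength and the radiated power of a blackbody at temperature T. A short numerical sketch is below; the constants are standard values and the 300 K example temperature is an assumption chosen to match the passage's room-temperature remark.

# Sketch: blackbody emission at temperature T (kelvin). Constants are standard
# physical constants; the example temperature is assumed for illustration.
SIGMA = 5.670374419e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
WIEN_B = 2.897771955e-3    # Wien displacement constant, m K

def peak_wavelength(T):
    # Wien's displacement law: lambda_max = b / T
    return WIEN_B / T

def radiant_exitance(T):
    # Stefan-Boltzmann law for an ideal blackbody: M = sigma * T^4
    return SIGMA * T ** 4

T = 300.0  # roughly room temperature
print(f"peak wavelength : {peak_wavelength(T) * 1e6:.1f} micrometres")
print(f"radiant exitance: {radiant_exitance(T):.0f} W per square metre")

The peak near 10 micrometres is why the passage says that emission at room temperature falls mostly in the infrared.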
sciq-9751
multiple_choice
What is the name of the line used in topographic maps that shows different elevations?
[ "a contour line", "a crater line", "a patch line", "a curve line" ]
A
Relavent Documents: Document 0::: In topography, the line of greatest slope is a curve following the steepest slope. In mountain biking and skiing, the line of greatest slope is sometimes called the fall line. Definition Mathematically, the line (or path) of greatest slope from a point is determined by the gradient of height, taken as a potential field with respect to an acceleration from the force of gravity. Lines of greatest slope are analogous to lines of force acting to accelerate an object downward at that point. These lines are orthogonal to contour lines. Discounting inertial forces and terrain roughness, a ball rolling down a slope, or water flowing down, will accelerate in the direction of greatest slope. Applications Mountain biking In mountain biking the line of greatest slope defines the fall line, which is the path a trail will follow to descend a hill or mountain with the shortest path, and will also cause the rider to gain the most velocity (assuming brakes are not used, and other factors such as rolling resistance are equal). Mountain climbing In mountain climbing, the line of greatest slope defines the fall line, which is the path a climber will take to gain the most elevation with the shortest possible path. Map reading The line of greatest slope has practical significance in map reading. On the terrain it is often far more discernible, even intuitively obvious, rather than accurately picking out the consistent height level on what is likely the undulating uneven ground along the ground represented on the contour line. But knowing that a greatest slope vector is orthogonal to the contour line, one can readily deduce the direction of the contour lines from the line of greatest slope. The extent and overall direction of the contour line to a map scale can only be found on the topographic map. By noting the corresponding compass vector, walking along the contour one can line up a hand held compass aligning the expected direction, and eye-balling the line of contour's estim Document 1::: A spot height is an exact point on a map with an elevation recorded beside it that represents its height above a given datum. In the UK this is the Ordnance Datum. Unlike a bench-mark, which is marked by a disc or plate, there is no official indication of a spot height on the ground although, in open country, spot heights may sometimes be marked by cairns. In geoscience, it can be used for showing elevations on a map, alongside contours, bench marks, etc. See also Surveying Benchmark (surveying) Triangulation station Document 2::: The grade (also called slope, incline, gradient, mainfall, pitch or rise) of a physical feature, landform or constructed line refers to the tangent of the angle of that surface to the horizontal. It is a special case of the slope, where zero indicates horizontality. A larger number indicates higher or steeper degree of "tilt". Often slope is calculated as a ratio of "rise" to "run", or as a fraction ("rise over run") in which run is the horizontal distance (not the distance along the slope) and rise is the vertical distance. Slopes of existing physical features such as canyons and hillsides, stream and river banks and beds are often described as grades, but typically grades are used for human-made surfaces such as roads, landscape grading, roof pitches, railroads, aqueducts, and pedestrian or bicycle routes. The grade may refer to the longitudinal slope or the perpendicular cross slope. 
Nomenclature There are several ways to express slope: as an angle of inclination to the horizontal. (This is the angle opposite the "rise" side of a triangle with a right angle between vertical rise and horizontal run.) as a percentage, the formula for which is 100 × (rise/run), which is equivalent to the tangent of the angle of inclination times 100. In Europe and the U.S. percentage "grade" is the most commonly used figure for describing slopes. as a per mille figure (‰), the formula for which is 1000 × (rise/run), which could also be expressed as the tangent of the angle of inclination times 1000. This is commonly used in Europe to denote the incline of a railway. It is sometimes written as mm/m instead of the ‰ symbol. as a ratio of one part rise to so many parts run. For example, a slope that has a rise of 5 feet for every 1000 feet of run would have a slope ratio of 1 in 200. (The word "in" is normally used rather than the mathematical ratio notation of "1:200".) This is generally the method used to describe railway grades in Australia and the UK. It is used for roads in Hong Kong, and was used for roa
Linear graduation of a scale occurs mainly (but not exclusively) on straight measuring devices, such as a rule or measuring tape, using units such as inches or millimetres. Graduations can also be spaced at varying spatial intervals, such as when using a logarithmic, for instance on a measuring cup, can vary in scale due to the container's non-cylindrical shape. Graduations along a curve Circular graduations of a scale occur on a circular arc or limb of an instrument. In some cases, non-circular curves are graduated in instruments. A typical circular arc graduation is the division into angular measurements, such as degrees, minutes and seconds. These types of graduated markings are traditionally seen on devices ranging from compasses and clock faces to alidades found on such instruments as telescopes, theodolites, inclinometers, astrolabes, armillary spheres, and celestial spheres. There can also be non-uniform graduations such as logarithmic or other scales such as seen on circular slide rules and graduated cylinders. Manufacture of graduations Graduations can be placed on an instrument by etching, scribing or engraving, painting, printing or other means. For durability and accuracy, etched or scribed marks are usually preferable to surface coatings such as paints and inks. Markings can be a combination of both physical marks such as a scribed line and a paint or other marking material. For example, it is common for black ink or paint to fill the grooves cut in a scribed ru The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the name of the line used in topographic maps that shows different elevations? A. a contour line B. a crater line C. a patch line D. a curve line Answer:
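The grade document above expresses the same slope as an angle, a percentage (100 × rise/run, equivalently 100 times the tangent of the angle), a per-mille figure, and a ratio such as "1 in 200". A small conversion sketch follows; the 5 ft rise over a 1000 ft run is the passage's own example, while the function names are illustrative.

import math

# Sketch: the same slope expressed four ways, as in the grade passage above.
def grade_percent(rise, run):
    return 100.0 * rise / run          # percentage grade = 100 * rise/run

def grade_permille(rise, run):
    return 1000.0 * rise / run         # per-mille grade = 1000 * rise/run

def grade_ratio(rise, run):
    return run / rise                  # "1 in N" form used for railway grades

def grade_angle_deg(rise, run):
    return math.degrees(math.atan2(rise, run))   # angle of inclination to the horizontal

rise, run = 5.0, 1000.0                # the passage's example: 5 ft rise per 1000 ft run
print(f"{grade_percent(rise, run):.1f} %  |  "
      f"{grade_permille(rise, run):.0f} per mille  |  "
      f"1 in {grade_ratio(rise, run):.0f}  |  "
      f"{grade_angle_deg(rise, run):.2f} degrees")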
sciq-11400
multiple_choice
The desire to understand how and why things happen is shared by all branches of what?
[ "population", "society", "science", "government" ]
C
Relavent Documents: Document 0::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 1::: Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women. The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development. Current status of girls and women in STEM education Overall trends in STEM education Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. 
Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle. Learning achievement in STEM education Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and Document 2::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 3::: Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers. There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. 
In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM. Terminology History Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu Document 4::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. these come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments. See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The desire to understand how and why things happen is shared by all branches of what? A. population B. society C. science D. government Answer:
sciq-2803
multiple_choice
Certain characteristics are frequently inherited together because of what?
[ "correlation", "genetic combination", "linkage", "mitosis" ]
C
Relavent Documents: Document 0::: The Generalist Genes hypothesis of learning abilities and disabilities was originally coined in an article by Plomin & Kovas (2005). The Generalist Genes hypothesis suggests that most genes associated with common learning disabilities and abilities are generalist in three ways. Firstly, the same genes that influence common learning abilities (e.g., high reading aptitude) are also responsible for common learning disabilities (e.g., reading disability): they are strongly genetically correlated. Secondly, many of the genes associated with one aspect of a learning disability (e.g., vocabulary problems) also influence other aspects of this learning disability (e.g., grammar problems). Thirdly, genes that influence one learning disability (e.g., reading disability) are largely the same as those that influence other learning disabilities (e.g., mathematics disability). The Generalist Genes hypothesis has important implications for education, cognitive sciences and molecular genetics. Document 1::: Research on the heritability of IQ inquires into the degree of variation in IQ within a population that is due to genetic variation between individuals in that population. There has been significant controversy in the academic community about the heritability of IQ since research on the issue began in the late nineteenth century. Intelligence in the normal range is a polygenic trait, meaning that it is influenced by more than one gene, and in the case of intelligence at least 500 genes. Further, explaining the similarity in IQ of closely related persons requires careful study because environmental factors may be correlated with genetic factors. Early twin studies of adult individuals have found a heritability of IQ between 57% and 73%, with some recent studies showing heritability for IQ as high as 80%. IQ goes from being weakly correlated with genetics for children, to being strongly correlated with genetics for late teens and adults. The heritability of IQ increases with the child's age and reaches a plateau at 14-16 years old, continuing at that level well into adulthood. However, poor prenatal environment, malnutrition and disease are known to have lifelong deleterious effects. Although IQ differences between individuals have been shown to have a large hereditary component, it does not follow that disparities in IQ between groups have a genetic basis. The scientific consensus is that genetics does not explain average differences in IQ test performance between racial groups. Heritability and caveats Heritability is a statistic used in the fields of breeding and genetics that estimates the degree of variation in a phenotypic trait in a population that is due to genetic variation between individuals in that population. The concept of heritability can be expressed in the form of the following question: "What is the proportion of the variation in a given trait within a population that is not explained by the environment or random chance?" Estimates of heritabi Document 2::: Hard inheritance was a model of heredity that explicitly excludes any acquired characteristics, such as of Lamarckism. It is the exact opposite of soft inheritance, coined by Ernst Mayr to contrast ideas about inheritance. Hard inheritance states that characteristics of an organism's offspring (passed on through DNA) will not be affected by the actions that the parental organism performs during its lifetime. 
For example: a medieval blacksmith who uses only his right arm to forge steel will not sire a son with a stronger right arm than left because the blacksmith's actions do not alter his genetic code. Inheritance due to usage and non-usage is excluded. Inheritance works as described in the modern synthesis of evolutionary biology. The existence of inherited epigenetic variants has led to renewed interest in soft inheritance. Document 3::: Genetics is the study of genes and tries to explain what they are and how they work. Genes are how living organisms inherit features or traits from their ancestors; for example, children usually look like their parents because they have inherited their parents' genes. Genetics tries to identify which traits are inherited and to explain how these traits are passed from generation to generation. Some traits are part of an organism's physical appearance, such as eye color, height or weight. Other sorts of traits are not easily seen and include blood types or resistance to diseases. Some traits are inherited through genes, which is the reason why tall and thin people tend to have tall and thin children. Other traits come from interactions between genes and the environment, so a child who inherited the tendency of being tall will still be short if poorly nourished. The way our genes and environment interact to produce a trait can be complicated. For example, the chances of somebody dying of cancer or heart disease seems to depend on both their genes and their lifestyle. Genes are made from a long molecule called DNA, which is copied and inherited across generations. DNA is made of simple units that line up in a particular order within it, carrying genetic information. The language used by DNA is called genetic code, which lets organisms read the information in the genes. This information is the instructions for the construction and operation of a living organism. The information within a particular gene is not always exactly the same between one organism and another, so different copies of a gene do not always give exactly the same instructions. Each unique form of a single gene is called an allele. As an example, one allele for the gene for hair color could instruct the body to produce much pigment, producing black hair, while a different allele of the same gene might give garbled instructions that fail to produce any pigment, giving white hair. Mutations are random Document 4::: Quantitative genetics deals with quantitative traits, which are phenotypes that vary continuously (such as height or mass)—as opposed to discretely identifiable phenotypes and gene-products (such as eye-colour, or the presence of a particular biochemical). Both branches use the frequencies of different alleles of a gene in breeding populations (gamodemes), and combine them with concepts from simple Mendelian inheritance to analyze inheritance patterns across generations and descendant lines. While population genetics can focus on particular genes and their subsequent metabolic products, quantitative genetics focuses more on the outward phenotypes, and makes only summaries of the underlying genetics. Due to the continuous distribution of phenotypic values, quantitative genetics must employ many other statistical methods (such as the effect size, the mean and the variance) to link phenotypes (attributes) to genotypes. 
Some phenotypes may be analyzed either as discrete categories or as continuous phenotypes, depending on the definition of cut-off points, or on the metric used to quantify them. Mendel himself had to discuss this matter in his famous paper, especially with respect to his peas' attribute tall/dwarf, which actually was "length of stem". Analysis of quantitative trait loci, or QTL, is a more recent addition to quantitative genetics, linking it more directly to molecular genetics. Gene effects In diploid organisms, the average genotypic "value" (locus value) may be defined by the allele "effect" together with a dominance effect, and also by how genes interact with genes at other loci (epistasis). The founder of quantitative genetics - Sir Ronald Fisher - perceived much of this when he proposed the first mathematics of this branch of genetics. Being a statistician, he defined the gene effects as deviations from a central value—enabling the use of statistical concepts such as mean and variance, which use this idea. The central value he chose for the ge The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Certain characteristics are frequently inherited together because of what? A. correlation B. genetic combination C. linkage D. mitosis Answer:
sciq-10930
multiple_choice
What is thought to be the oldest eukaryotes?
[ "arthropods", "protists", "amoebas", "prokaryotes" ]
B
Relavent Documents: Document 0::: The eukaryotes () constitute the domain of Eukarya, organisms whose cells have a membrane-bound nucleus. All animals, plants, fungi, and many unicellular organisms are eukaryotes. They constitute a major group of life forms alongside the two groups of prokaryotes: the Bacteria and the Archaea. Eukaryotes represent a small minority of the number of organisms, but due to their generally much larger size, their collective global biomass is much larger than that of prokaryotes. The eukaryotes seemingly emerged in the Archaea, within the Asgard archaea. This implies that there are only two domains of life, Bacteria and Archaea, with eukaryotes incorporated among the Archaea. Eukaryotes emerged approximately 2.2 billion years ago, during the Proterozoic eon, likely as flagellated cells. The leading evolutionary theory is they were created by symbiogenesis between an anaerobic Asgard archaean and an aerobic proteobacterium, which formed the mitochondria. A second episode of symbiogenesis with a cyanobacterium created the plants, with chloroplasts. The oldest-known eukaryote fossils, multicellular planktonic organisms belonging to the Gabonionta, were discovered in Gabon in 2023, dating back to 2.1 billion years ago. Eukaryotic cells contain membrane-bound organelles such as the nucleus, the endoplasmic reticulum, and the Golgi apparatus. Eukaryotes may be either unicellular or multicellular. In comparison, prokaryotes are typically unicellular. Unicellular eukaryotes are sometimes called protists. Eukaryotes can reproduce both asexually through mitosis and sexually through meiosis and gamete fusion (fertilization). Diversity Eukaryotes are organisms that range from microscopic single cells, such as picozoans under 3 micrometres across, to animals like the blue whale, weighing up to 190 tonnes and measuring up to long, or plants like the coast redwood, up to tall. Many eukaryotes are unicellular; the informal grouping called protists includes many of these, with some Document 1::: Eukaryogenesis, the process which created the eukaryotic cell and lineage, is a milestone in the evolution of life, since eukaryotes include all complex cells and almost all multicellular organisms. The process is widely agreed to have involved symbiogenesis, in which archaea and bacteria came together to create the first eukaryotic common ancestor (FECA). This cell had a new level of complexity and capability, with a nucleus, at least one centriole and cilium, facultatively aerobic mitochondria, sex (meiosis and syngamy), a dormant cyst with a cell wall of chitin and/or cellulose and peroxisomes. It evolved into a population of single-celled organisms that included the last eukaryotic common ancestor (LECA), gaining capabilities along the way, though the sequence of the steps involved has been disputed, and may not have started with symbiogenesis. In turn, the LECA gave rise to the eukaryotes' crown group, containing the ancestors of animals, fungi, plants, and a diverse range of single-celled organisms. Context Life arose on Earth once it had cooled enough for oceans to form. The last universal common ancestor (LUCA) was an organism which had ribosomes and the genetic code; it lived some 4 billion years ago. It gave rise to two main branches of prokaryotic life, the bacteria and the archaea. From among these small-celled, rapidly-dividing ancestors arose the Eukaryotes, with much larger cells, nuclei, and distinctive biochemistry. 
The eukaryotes form a domain that contains all complex cells and most types of multicellular organism, including the animals, plants, and fungi. Symbiogenesis According to the theory of symbiogenesis (also known as the endosymbiotic theory) championed by Lynn Margulis, a member of the archaea gained a bacterial cell as a component. The archaeal cell was a member of the Asgard group. The bacterium was one of the Alphaproteobacteria, which had the ability to use oxygen in its respiration. This enabled it – and the archaeal cells that Document 2::: Scientists trying to reconstruct evolutionary history have been challenged by the fact that genes can sometimes transfer between distant branches on the tree of life. This movement of genes can occur through horizontal gene transfer (HGT), scrambling the information on which biologists relied to reconstruct the phylogeny of organisms. Conversely, HGT can also help scientists to reconstruct and date the tree of life. Indeed, a gene transfer can be used as a phylogenetic marker, or as the proof of contemporaneity of the donor and recipient organisms, and as a trace of extinct biodiversity. HGT happens very infrequently – at the individual organism level, it is highly improbable for any such event to take place. However, on the grander scale of evolutionary history, these events occur with some regularity. On one hand, this forces biologists to abandon the use of individual genes as good markers for the history of life. On the other hand, this provides an almost unexploited large source of information about the past. Three domains of life The three main early branches of the tree of life have been intensively studied by microbiologists because the first organisms were microorganisms. Microbiologists (led by Carl Woese) have introduced the term domain for the three main branches of this tree, where domain is a phylogenetic term similar in meaning to biological kingdom. To reconstruct this tree of life, the gene sequence encoding the small subunit of ribosomal RNA (SSU rRNA, 16s rRNA) has proven useful, and the tree (as shown in the picture) relies heavily on information from this single gene. These three domains of life represent the main evolutionary lineages of early cellular life and currently include Bacteria, Archaea (single-celled organisms superficially similar to bacteria), and Eukarya. Eukarya includes only organisms having a well-defined nucleus, such as fungi, protists, and all organisms in the plant and animals kingdoms (see figure). The gene most com Document 3::: The eocyte hypothesis in evolutionary biology proposes that the eukaryotes originated from a group of prokaryotes called eocytes (later classified as Thermoproteota, a group of archaea). After his team at the University of California, Los Angeles discovered eocytes in 1984, James A. Lake formulated the hypothesis as "eocyte tree" that proposed eukaryotes as part of archaea. Lake hypothesised the tree of life as having only two primary branches: Parkaryoates that include Bacteria and Archaea, and karyotes that comprise Eukaryotes and eocytes. Parts of this early hypothesis were revived in a newer two-domain system of biological classification which named the primary domains as Archaea and Bacteria. Lake's hypothesis was based on an analysis of the structural components of ribosomes. It was largely ignored, being overshadowed by the three-domain system which relied on more precise genetic analysis. 
In 1990, Carl Woese and his colleagues proposed that cellular life consists of three domains – Eucarya, Bacteria, and Archaea – based on the ribosomal RNA sequences. The three-domain concept was widely accepted in genetics, and became the presumptive classification system for high-level taxonomy, and was promulgated in many textbooks. Resurgence of archaea research after the 2000s, using advanced genetic techniques, and later discoveries of new groups of archaea revived the eocyte hypothesis; consequently, the two-domain system has found wider acceptance. Description In 1984, James A. Lake, Michael W. Clark, Eric Henderson, and Melanie Oakes of the University of California, Los Angeles described a new group of prokaryotic organisms designated as "a group of sulfur-dependent bacteria." Based on the structure and composition of their ribosomal subunits, they found that these organisms were different from other prokaryotes, bacteria and archaea, known at the time. They named them eocytes (for "dawn cells") and proposed a new biological kingdom Eocyta. According to this disc Document 4::: The smallest organisms found on Earth can be determined according to various aspects of organism size, including volume, mass, height, length, or genome size. Given the incomplete nature of scientific knowledge, it is possible that the smallest organism is undiscovered. Furthermore, there is some debate over the definition of life, and what entities qualify as organisms; consequently the smallest known organism (microorganism) is debatable. Microorganisms Obligate endosymbiotic bacteria The genome of Nasuia deltocephalinicola, a symbiont of the European pest leafhopper, Macrosteles quadripunctulatus, consists of a circular chromosome of 112,031 base pairs. The genome of Nanoarchaeum equitans is 491 Kbp nucleotides long. Pelagibacter ubique Pelagibacter ubique is one of the smallest known free-living bacteria, with a length of and an average cell diameter of . They also have the smallest free-living bacterium genome: 1.3 Mbp, 1354 protein genes, 35 RNA genes. They are one of the most common and smallest organisms in the ocean, with their total weight exceeding that of all fish in the sea. Mycoplasma genitalium Mycoplasma genitalium, a parasitic bacterium which lives in the primate bladder, waste disposal organs, genital, and respiratory tracts, is thought to be the smallest known organism capable of independent growth and reproduction. With a size of approximately 200 to 300 nm, M. genitalium is an ultramicrobacterium, smaller than other small bacteria, including rickettsia and chlamydia. However, the vast majority of bacterial strains have not been studied, and the marine ultramicrobacterium Sphingomonas sp. strain RB2256 is reported to have passed through a ultrafilter. A complicating factor is nutrient-downsized bacteria, bacteria that become much smaller due to a lack of available nutrients. Nanoarchaeum Nanoarchaeum equitans is a species of microbe in diameter. It was discovered in 2002 in a hydrothermal vent off the coast of Iceland by Karl Stet The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is thought to be the oldest eukaryotes? A. arthropods B. protists C. amoebas D. prokaryotes Answer:
sciq-5683
multiple_choice
What is the common name of mixtures of hydrocarbons that formed over millions of years from the remains of dead organisms?
[ "non-renewable fuel", "fossil record", "fossil fuels", "renewable resources" ]
C
Relavent Documents: Document 0::: Biotic material or biological derived material is any material that originates from living organisms. Most such materials contain carbon and are capable of decay. The earliest life on Earth arose at least 3.5 billion years ago. Earlier physical evidences of life include graphite, a biogenic substance, in 3.7 billion-year-old metasedimentary rocks discovered in southwestern Greenland, as well as, "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Earth's biodiversity has expanded continually except when interrupted by mass extinctions. Although scholars estimate that over 99 percent of all species of life (over five billion) that ever lived on Earth are extinct, there are still an estimated 10–14 million extant species, of which about 1.2 million have been documented and over 86% have not yet been described. Examples of biotic materials are wood, straw, humus, manure, bark, crude oil, cotton, spider silk, chitin, fibrin, and bone. The use of biotic materials, and processed biotic materials (bio-based material) as alternative natural materials, over synthetics is popular with those who are environmentally conscious because such materials are usually biodegradable, renewable, and the processing is commonly understood and has minimal environmental impact. However, not all biotic materials are used in an environmentally friendly way, such as those that require high levels of processing, are harvested unsustainably, or are used to produce carbon emissions. When the source of the recently living material has little importance to the product produced, such as in the production of biofuels, biotic material is simply called biomass. Many fuel sources may have biological sources, and may be divided roughly into fossil fuels, and biofuel. In soil science, biotic material is often referred to as organic matter. Biotic materials in soil include glomalin, Dopplerite and humic acid. Some biotic material may not be considered to be organic matte Document 1::: Phyllocladane is a tricyclic diterpane which is generally found in gymnosperm resins. It has a formula of C20H34 and a molecular weight of 274.4840. As a biomarker, it can be used to learn about the gymnosperm input into a hydrocarbon deposit, and about the age of the deposit in general. It indicates a terrogenous origin of the source rock. Diterpanes, such as Phyllocladane are found in source rocks as early as the middle and late Devonian periods, which indicates any rock containing them must be no more than approximately 360 Ma. Phyllocladane is commonly found in lignite, and like other resinites derived from gymnosperms, is naturally enriched in 13C. This enrichment is a result of the enzymatic pathways used to synthesize the compound. The compound can be identified by GC-MS. A peak of m/z 123 is indicative of tricyclic diterpenoids in general, and phyllocladane in particular is further characterized by strong peaks at m/z 231 and m/z 189. Presence of phyllocladane and its relative abundance to other tricyclic diterpanes can be used to differentiate between various oil fields. Document 2::: A biogenic substance is a product made by or of life forms. While the term originally was specific to metabolite compounds that had toxic effects on other organisms, it has developed to encompass any constituents, secretions, and metabolites of plants or animals. In context of molecular biology, biogenic substances are referred to as biomolecules. 
They are generally isolated and measured through the use of chromatography and mass spectrometry techniques. Additionally, the transformation and exchange of biogenic substances can by modelled in the environment, particularly their transport in waterways. The observation and measurement of biogenic substances is notably important in the fields of geology and biochemistry. A large proportion of isoprenoids and fatty acids in geological sediments are derived from plants and chlorophyll, and can be found in samples extending back to the Precambrian. These biogenic substances are capable of withstanding the diagenesis process in sediment, but may also be transformed into other materials. This makes them useful as biomarkers for geologists to verify the age, origin and degradation processes of different rocks. Biogenic substances have been studied as part of marine biochemistry since the 1960s, which has involved investigating their production, transport, and transformation in the water, and how they may be used in industrial applications. A large fraction of biogenic compounds in the marine environment are produced by micro and macro algae, including cyanobacteria. Due to their antimicrobial properties they are currently the subject of research in both industrial projects, such as for anti-fouling paints, or in medicine. History of discovery and classification During a meeting of the New York Academy of Sciences' Section of Geology and Mineralogy in 1903, geologist Amadeus William Grabau proposed a new rock classification system in his paper 'Discussion of and Suggestions Regarding a New Classification of Rocks'. Within Document 3::: Retene, methyl isopropyl phenanthrene or 1-methyl-7-isopropyl phenanthrene, C18H18, is a polycyclic aromatic hydrocarbon present in the coal tar fraction, boiling above 360 °C. It occurs naturally in the tars obtained by the distillation of resinous woods. It crystallizes in large plates, which melt at 98.5 °C and boil at 390 °C. It is readily soluble in warm ether and in hot glacial acetic acid. Sodium and boiling amyl alcohol reduce it to a tetrahydroretene, but if it heated with phosphorus and hydriodic acid to 260 °C, a dodecahydride is formed. Chromic acid oxidizes it to retene quinone, phthalic acid and acetic acid. It forms a picrate that melts at 123-124 °C. Retene is derived by degradation of specific diterpenoids biologically produced by conifer trees. The presence of traces of retene in the air is an indicator of forest fires; it is a major product of pyrolysis of conifer trees. It is also present in effluents from wood pulp and paper mills. Retene, together with cadalene, simonellite and ip-iHMN, is a biomarker of vascular plants, which makes it useful for paleobotanic analysis of rock sediments. The ratio of retene/cadalene in sediments can reveal the ratio of the genus Pinaceae in the biosphere. Health effects A recent study has shown retene, which is a component of the Amazonian organic PM10, is cytotoxic to human lung cells. Document 4::: Bisnorhopanes (BNH) are a group of demethylated hopanes found in oil shales across the globe and can be used for understanding depositional conditions of the source rock. The most common member, 28,30-bisnorhopane, can be found in high concentrations in petroleum source rocks, most notably the Monterey Shale, as well as in oil and tar samples. 28,30-Bisnorhopane was first identified in samples from the Monterey Shale Formation in 1985. 
It occurs in abundance throughout the formation and appears in stratigraphically analogous locations along the California coast. Since its identification and analysis, 28,30-bisnorhopane has been discovered in oil shales around the globe, including lacustrine and offshore deposits of Brazil, silicified shales of the Eocene in Gabon, the Kimmeridge Clay Formation in the North Sea, and in Western Australian oil shales. Chemistry 28,30-bisnorhopane exists in three epimers: 17α,18α21β(H), 17β,18α,21α(H), and 17β,18α,21β(H). During GC-MS, the three epimers coelute at the same time and are nearly indistinguishable. However, mass spectral fragmentation of the 28,30-bisnorhopane is predominantly characterized by m/z 191, 177, and 163. The ratios of 163/191 fragments can be used to distinguish the epimers, where the βαβ orientation has the highest, m/z 163/191 ratio. Further, the D/E ring ratios can be used to create a hierarchy of epimer maturity. From this, it is believed that the ααβ epimer is the first-formed, diagenetically, supported also by its percent dominance in younger shales. 28,30-bisnorhopane is created independently from kerogen, instead derived from bitumen, unbound as free oil-hydrocarbons. As such, as oil generation increases with source maturation, the concentration of 28,30-bisnorhopane decreases. Bisnorhopane may not be a reliable diagnostic for oil maturity due to microbial biodegradation. Nomenclature Norhopanes are a family of demethylated hopanes, identical to the methylated hopane structure, minus indicated desmet The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the common name of mixtures of hydrocarbons that formed over millions of years from the remains of dead organisms? A. non-renewable fuel B. fossil record C. fossil fuels D. renewable resources Answer:
sciq-11379
multiple_choice
What were the first forms of life on earth?
[ "protists", "eukaryotes", "aniryotes", "prokaryotes" ]
D
Relavent Documents: Document 0::: Scientists trying to reconstruct evolutionary history have been challenged by the fact that genes can sometimes transfer between distant branches on the tree of life. This movement of genes can occur through horizontal gene transfer (HGT), scrambling the information on which biologists relied to reconstruct the phylogeny of organisms. Conversely, HGT can also help scientists to reconstruct and date the tree of life. Indeed, a gene transfer can be used as a phylogenetic marker, or as the proof of contemporaneity of the donor and recipient organisms, and as a trace of extinct biodiversity. HGT happens very infrequently – at the individual organism level, it is highly improbable for any such event to take place. However, on the grander scale of evolutionary history, these events occur with some regularity. On one hand, this forces biologists to abandon the use of individual genes as good markers for the history of life. On the other hand, this provides an almost unexploited large source of information about the past. Three domains of life The three main early branches of the tree of life have been intensively studied by microbiologists because the first organisms were microorganisms. Microbiologists (led by Carl Woese) have introduced the term domain for the three main branches of this tree, where domain is a phylogenetic term similar in meaning to biological kingdom. To reconstruct this tree of life, the gene sequence encoding the small subunit of ribosomal RNA (SSU rRNA, 16s rRNA) has proven useful, and the tree (as shown in the picture) relies heavily on information from this single gene. These three domains of life represent the main evolutionary lineages of early cellular life and currently include Bacteria, Archaea (single-celled organisms superficially similar to bacteria), and Eukarya. Eukarya includes only organisms having a well-defined nucleus, such as fungi, protists, and all organisms in the plant and animals kingdoms (see figure). The gene most com Document 1::: Biology is the scientific study of life. It is a natural science with a broad scope but has several unifying themes that tie it together as a single, coherent field. For instance, all organisms are made up of cells that process hereditary information encoded in genes, which can be transmitted to future generations. Another major theme is evolution, which explains the unity and diversity of life. Energy processing is also important to life as it allows organisms to move, grow, and reproduce. Finally, all organisms are able to regulate their own internal environments. Biologists are able to study life at multiple levels of organization, from the molecular biology of a cell to the anatomy and physiology of plants and animals, and evolution of populations. Hence, there are multiple subdisciplines within biology, each defined by the nature of their research questions and the tools that they use. Like other scientists, biologists use the scientific method to make observations, pose questions, generate hypotheses, perform experiments, and form conclusions about the world around them. Life on Earth, which emerged more than 3.7 billion years ago, is immensely diverse. Biologists have sought to study and classify the various forms of life, from prokaryotic organisms such as archaea and bacteria to eukaryotic organisms such as protists, fungi, plants, and animals. 
These various organisms contribute to the biodiversity of an ecosystem, where they play specialized roles in the cycling of nutrients and energy through their biophysical environment. History The earliest of roots of science, which included medicine, can be traced to ancient Egypt and Mesopotamia in around 3000 to 1200 BCE. Their contributions shaped ancient Greek natural philosophy. Ancient Greek philosophers such as Aristotle (384–322 BCE) contributed extensively to the development of biological knowledge. He explored biological causation and the diversity of life. His successor, Theophrastus, began the scienti Document 2::: The history of life on Earth traces the processes by which living and fossil organisms evolved, from the earliest emergence of life to present day. Earth formed about 4.5 billion years ago (abbreviated as Ga, for gigaannum) and evidence suggests that life emerged prior to 3.7 Ga. Although there is some evidence of life as early as 4.1 to 4.28 Ga, it remains controversial due to the possible non-biological formation of the purported fossils. The similarities among all known present-day species indicate that they have diverged through the process of evolution from a common ancestor. Only a very small percentage of species have been identified: one estimate claims that Earth may have 1 trillion species. However, only 1.75–1.8 million have been named and 1.8 million documented in a central database. These currently living species represent less than one percent of all species that have ever lived on Earth. The earliest evidence of life comes from biogenic carbon signatures and stromatolite fossils discovered in 3.7 billion-year-old metasedimentary rocks from western Greenland. In 2015, possible "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. In March 2017, putative evidence of possibly the oldest forms of life on Earth was reported in the form of fossilized microorganisms discovered in hydrothermal vent precipitates in the Nuvvuagittuq Belt of Quebec, Canada, that may have lived as early as 4.28 billion years ago, not long after the oceans formed 4.4 billion years ago, and not long after the formation of the Earth 4.54 billion years ago. Microbial mats of coexisting bacteria and archaea were the dominant form of life in the early Archean eon and many of the major steps in early evolution are thought to have taken place in this environment. The evolution of photosynthesis by cyanobacteria, around 3.5 Ga, eventually led to a buildup of its waste product, oxygen, in the ocean and then the atmosphere after depleting all available Document 3::: The eukaryotes () constitute the domain of Eukarya, organisms whose cells have a membrane-bound nucleus. All animals, plants, fungi, and many unicellular organisms are eukaryotes. They constitute a major group of life forms alongside the two groups of prokaryotes: the Bacteria and the Archaea. Eukaryotes represent a small minority of the number of organisms, but due to their generally much larger size, their collective global biomass is much larger than that of prokaryotes. The eukaryotes seemingly emerged in the Archaea, within the Asgard archaea. This implies that there are only two domains of life, Bacteria and Archaea, with eukaryotes incorporated among the Archaea. Eukaryotes emerged approximately 2.2 billion years ago, during the Proterozoic eon, likely as flagellated cells. 
The leading evolutionary theory is they were created by symbiogenesis between an anaerobic Asgard archaean and an aerobic proteobacterium, which formed the mitochondria. A second episode of symbiogenesis with a cyanobacterium created the plants, with chloroplasts. The oldest-known eukaryote fossils, multicellular planktonic organisms belonging to the Gabonionta, were discovered in Gabon in 2023, dating back to 2.1 billion years ago. Eukaryotic cells contain membrane-bound organelles such as the nucleus, the endoplasmic reticulum, and the Golgi apparatus. Eukaryotes may be either unicellular or multicellular. In comparison, prokaryotes are typically unicellular. Unicellular eukaryotes are sometimes called protists. Eukaryotes can reproduce both asexually through mitosis and sexually through meiosis and gamete fusion (fertilization). Diversity Eukaryotes are organisms that range from microscopic single cells, such as picozoans under 3 micrometres across, to animals like the blue whale, weighing up to 190 tonnes and measuring up to long, or plants like the coast redwood, up to tall. Many eukaryotes are unicellular; the informal grouping called protists includes many of these, with some Document 4::: A biosignature (sometimes called chemical fossil or molecular fossil) is any substance – such as an element, isotope, molecule, or phenomenon that provides scientific evidence of past or present life. Measurable attributes of life include its complex physical or chemical structures and its use of free energy and the production of biomass and wastes. A biosignature can provide evidence for living organisms outside the Earth and can be directly or indirectly detected by searching for their unique byproducts. Types In general, biosignatures can be grouped into ten broad categories: Isotope patterns: Isotopic evidence or patterns that require biological processes. Chemistry: Chemical features that require biological activity. Organic matter: Organics formed by biological processes. Minerals: Minerals or biomineral-phases whose composition and/or morphology indicate biological activity (e.g., biomagnetite). Microscopic structures and textures: Biologically formed cements, microtextures, microfossils, and films. Macroscopic physical structures and textures: Structures that indicate microbial ecosystems, biofilms (e.g., stromatolites), or fossils of larger organisms. Temporal variability: Variations in time of atmospheric gases, reflectivity, or macroscopic appearance that indicates life's presence. Surface reflectance features: Large-scale reflectance features due to biological pigments could be detected remotely. Atmospheric gases: Gases formed by metabolic and/or aqueous processes, which may be present on a planet-wide scale. Technosignatures: Signatures that indicate a technologically advanced civilization. Viability Determining whether a potential biosignature is worth investigating is a fundamentally complicated process. Scientists must consider any and every possible alternate explanation before concluding that something is a true biosignature. This includes investigating the minute details that make other planets unique and understanding when there is a deviat The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What were the first forms of life on earth? A. protists B. eukaryotes C. aniryotes D. prokaryotes Answer:
ai2_arc-260
multiple_choice
A person cuts down a living oak tree. The person burns the wood from the oak tree to boil water. Which sequence correctly orders the energy transformations that occurred from the living tree to the boiling of water?
[ "light energy → chemical energy → thermal energy", "thermal energy → chemical energy → light energy", "chemical energy → mechanical energy → electrical energy", "electrical energy → mechanical energy → chemical energy" ]
A
Relavent Documents: Document 0::: Energy flow is the flow of energy through living things within an ecosystem. All living organisms can be organized into producers and consumers, and those producers and consumers can further be organized into a food chain. Each of the levels within the food chain is a trophic level. In order to more efficiently show the quantity of organisms at each trophic level, these food chains are then organized into trophic pyramids. The arrows in the food chain show that the energy flow is unidirectional, with the head of an arrow indicating the direction of energy flow; energy is lost as heat at each step along the way. The unidirectional flow of energy and the successive loss of energy as it travels up the food web are patterns in energy flow that are governed by thermodynamics, which is the theory of energy exchange between systems. Trophic dynamics relates to thermodynamics because it deals with the transfer and transformation of energy (originating externally from the sun via solar radiation) to and among organisms. Energetics and the carbon cycle The first step in energetics is photosynthesis, wherein water and carbon dioxide from the air are taken in with energy from the sun, and are converted into oxygen and glucose. Cellular respiration is the reverse reaction, wherein oxygen and sugar are taken in and release energy as they are converted back into carbon dioxide and water. The carbon dioxide and water produced by respiration can be recycled back into plants. Energy loss can be measured either by efficiency (how much energy makes it to the next level), or by biomass (how much living material exists at those levels at one point in time, measured by standing crop). Of all the net primary productivity at the producer trophic level, in general only 10% goes to the next level, the primary consumers, then only 10% of that 10% goes on to the next trophic level, and so on up the food pyramid. Ecological efficiency may be anywhere from 5% to 20% depending on how efficient Document 1::: Energy transformation, also known as energy conversion, is the process of changing energy from one form to another. In physics, energy is a quantity that provides the capacity to perform work or moving (e.g. lifting an object) or provides heat. In addition to being converted, according to the law of conservation of energy, energy is transferable to a different location or object, but it cannot be created or destroyed. The energy in many of its forms may be used in natural processes, or to provide some service to society such as heating, refrigeration, lighting or performing mechanical work to operate machines. For example, to heat a home, the furnace burns fuel, whose chemical potential energy is converted into thermal energy, which is then transferred to the home's air to raise its temperature. Limitations in the conversion of thermal energy Conversions to thermal energy from other forms of energy may occur with 100% efficiency. Conversion among non-thermal forms of energy may occur with fairly high efficiency, though there is always some energy dissipated thermally due to friction and similar processes. Sometimes the efficiency is close to 100%, such as when potential energy is converted to kinetic energy as an object falls in a vacuum. This also applies to the opposite case; for example, an object in an elliptical orbit around another body converts its kinetic energy (speed) into gravitational potential energy (distance from the other object) as it moves away from its parent body. 
When it reaches the furthest point, it will reverse the process, accelerating and converting potential energy into kinetic. Since space is a near-vacuum, this process has close to 100% efficiency. Thermal energy is unique because it in most cases (willow) cannot be converted to other forms of energy. Only a difference in the density of thermal/heat energy (temperature) can be used to perform work, and the efficiency of this conversion will be (much) less than 100%. This is because t Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: The energy systems language, also referred to as energese, or energy circuit language, or generic systems symbols, is a modelling language used for composing energy flow diagrams in the field of systems ecology. It was developed by Howard T. Odum and colleagues in the 1950s during studies of the tropical forests funded by the United States Atomic Energy Commission. Design intent The design intent of the energy systems language was to facilitate the generic depiction of energy flows through any scale system while encompassing the laws of physics, and in particular, the laws of thermodynamics (see energy transformation for an example). In particular H.T. Odum aimed to produce a language which could facilitate the intellectual analysis, engineering synthesis and management of global systems such as the geobiosphere, and its many subsystems. Within this aim, H.T. Odum had a strong concern that many abstract mathematical models of such systems were not thermodynamically valid. 
Hence he used analog computers to make system models due to their intrinsic value; that is, the electronic circuits are of value for modelling natural systems which are assumed to obey the laws of energy flow, because, in themselves the circuits, like natural systems, also obey the known laws of energy flow, where the energy form is electrical. However Odum was interested not only in the electronic circuits themselves, but also in how they might be used as formal analogies for modeling other systems which also had energy flowing through them. As a result, Odum did not restrict his inquiry to the analysis and synthesis of any one system in isolation. The discipline that is most often associated with this kind of approach, together with the use of the energy systems language is known as systems ecology. General characteristics When applying the electronic circuits (and schematics) to modeling ecological and economic systems, Odum believed that generic categories, or characteristic modules, could Document 4::: The Bionic Leaf is a biomimetic system that gathers solar energy via photovoltaic cells that can be stored or used in a number of different functions. Bionic leaves can be composed of both synthetic (metals, ceramics, polymers, etc.) and organic materials (bacteria), or solely made of synthetic materials. The Bionic Leaf has the potential to be implemented in communities, such as urbanized areas to provide clean air as well as providing needed clean energy. History In 2009 at MIT, Daniel Nocera's lab first developed the "artificial leaf", a device made from silicon and an anode electrocatalyst for the oxidation of water, capable of splitting water into hydrogen and oxygen gases. In 2012, Nocera came to Harvard and The Silver Lab of Harvard Medical School joined Nocera’s team. Together the teams expanded the existing technology to create the Bionic Leaf. It merged the concept of the artificial leaf with genetically engineered bacteria that feed on the hydrogen and convert CO2 in the air into alcohol fuels or chemicals. The first version of the teams Bionic Leaf was created in 2015 but the catalyst used was harmful to the bacteria. In 2016, a new catalyst was designed to solve this issue, named the "Bionic Leaf 2.0". Other versions of artificial leaves have been developed by the California Institute of Technology and the Joint Center for Artificial Photosynthesis, the University of Waterloo, and the University of Cambridge. Mechanics Photosynthesis In natural photosynthesis, photosynthetic organisms produce energy-rich organic molecules from water and carbon dioxide by using solar radiation. Therefore, the process of photosynthesis removes carbon dioxide, a greenhouse gas, from the air. Artificial photosynthesis, as performed by the Bionic Leaf, is approximately 10 times more efficient than natural photosynthesis. Using a catalyst, the Bionic Leaf can remove excess carbon dioxide in the air and convert that to useful alcohol fuels, like isopropanol and isobutan The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. A person cuts down a living oak tree. The person burns the wood from the oak tree to boil water. Which sequence correctly orders the energy transformations that occurred from the living tree to the boiling of water? A. light energy → chemical energy → thermal energy B. thermal energy → chemical energy → light energy C. chemical energy → mechanical energy → electrical energy D. 
electrical energy → mechanical energy → chemical energy Answer:
sciq-2379
multiple_choice
What will spores that eventually germinate develop into?
[ "gametes", "hydra", "new hyphae", "yeast cells" ]
C
Relavent Documents: Document 0::: A sporeling is a young plant or fungus produced by a germinated spore, similar to a seedling derived from a germinated seed. They occur in algae, fungi, lichens, bryophytes and seedless vascular plants. Sporeling development Most spores germinate by first producing a germ-rhizoid or holdfast followed by a germ tube emerging from the opposite end. The germ tube develops into the hypha, protonema or thallus of the gametophyte. In seedless vascular plants such as ferns and lycopodiophyta, the term "sporeling" refers to the young sporophyte growing on the gametophyte. These sporelings develop via an embryo stage from a fertilized egg inside an archegonium and depend on the gametophyte for their early stages of growth before becoming independent sporophytes. Young fern sporelings can often be found with the prothallus gametophyte still attached at the base of their fronds. See also Conidium (mitospore) Sporogenesis External links British Pteridological Society: An introduction to ferns (contains a picture of a sporeling fern attached to the prothallus) Plant morphology Plant reproduction Fungal morphology and anatomy Document 1::: In biology, a spore is a unit of sexual (in fungi) or asexual reproduction that may be adapted for dispersal and for survival, often for extended periods of time, in unfavourable conditions. Spores form part of the life cycles of many plants, algae, fungi and protozoa. Bacterial spores are not part of a sexual cycle, but are resistant structures used for survival under unfavourable conditions. Myxozoan spores release amoeboid infectious germs ("amoebulae") into their hosts for parasitic infection, but also reproduce within the hosts through the pairing of two nuclei within the plasmodium, which develops from the amoebula. In plants, spores are usually haploid and unicellular and are produced by meiosis in the sporangium of a diploid sporophyte. Under favourable conditions the spore can develop into a new organism using mitotic division, producing a multicellular gametophyte, which eventually goes on to produce gametes. Two gametes fuse to form a zygote, which develops into a new sporophyte. This cycle is known as alternation of generations. The spores of seed plants are produced internally, and the megaspores (formed within the ovules) and the microspores are involved in the formation of more complex structures that form the dispersal units, the seeds and pollen grains. Definition The term spore derives from the ancient Greek word σπορά spora, meaning "seed, sowing", related to σπόρος , "sowing", and σπείρειν , "to sow". In common parlance, the difference between a "spore" and a "gamete" is that a spore will germinate and develop into a sporeling, while a gamete needs to combine with another gamete to form a zygote before developing further. The main difference between spores and seeds as dispersal units is that spores are unicellular, the first cell of a gametophyte, while seeds contain within them a developing embryo (the multicellular sporophyte of the next generation), produced by the fusion of the male gamete of the pollen tube with the female gamete for Document 2::: Sporogenesis is the production of spores in biology. The term is also used to refer to the process of reproduction via spores. Reproductive spores were found to be formed in eukaryotic organisms, such as plants, algae and fungi, during their normal reproductive life cycle. 
Dormant spores are formed, for example by certain fungi and algae, primarily in response to unfavorable growing conditions. Most eukaryotic spores are haploid and form through cell division, though some types are diploid sor dikaryons and form through cell fusion.we can also say this type of reproduction as single pollination Reproduction via spores Reproductive spores are generally the result of cell division, most commonly meiosis (e.g. in plant sporophytes). Sporic meiosis is needed to complete the sexual life cycle of the organisms using it. In some cases, sporogenesis occurs via mitosis (e.g. in some fungi and algae). Mitotic sporogenesis is a form of asexual reproduction. Examples are the conidial fungi Aspergillus and Penicillium, for which mitospore formation appears to be the primary mode of reproduction. Other fungi, such as ascomycetes, utilize both mitotic and meiotic spores. The red alga Polysiphonia alternates between mitotic and meiotic sporogenesis and both processes are required to complete its complex reproductive life cycle. In the case of dormant spores in eukaryotes, sporogenesis often occurs as a result of fertilization or karyogamy forming a diploid spore equivalent to a zygote. Therefore, zygospores are the result of sexual reproduction. Reproduction via spores involves the spreading of the spores by water or air. Algae and some fungi (chytrids) often use motile zoospores that can swim to new locations before developing into sessile organisms. Airborne spores are obvious in fungi, for example when they are released from puffballs. Other fungi have more active spore dispersal mechanisms. For example, the fungus Pilobolus can shoot its sporangia towards light. Plant spor Document 3::: Pycniospores are a type of spore found in certain species of rust fungi. They are produced in special cup-like structures called pycnia or pynidia. Almost all fungi reproduce asexually with the production of spores. Spores may be colorless, green, yellow, orange, red, brown or black. Other types of spore Sporangiospores Sporangiospores (spore:spore, angion:sac) are spores formed inside the sporangium which is a spore sac. Conidia Conidia (singular: conidium) are spores produced at the tip of special branches called conidiophores. Oidia Oidia (singular: oidium). In several fungi, the hyphae is often divided into a large number of short pieces by transverse walls. Each piece is able to germinate into a new body. These pieces are called oidia (small egg). Chlamydospores Chlamydospores (chlymus: mantle) are produced like oidia but differ from oidia in being thick walled. They are either terminal or intercalary. Document 4::: The life stage at which a fungus lives, grows, and develops, gathering nutrients and energy. The fungus uses this stage to proliferate itself through asexually created mitotic spores. Cycles through somatic hyphae, zoosporangia, zoospores, encystation & germination, and back to somatic hyphae. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What will spores that eventually germinate develop into? A. gametes B. hydra C. new hyphae D. yeast cells Answer:
sciq-7427
multiple_choice
What is a funnel-shaped cloud of whirling high winds known as?
[ "hurricane", "tsunami", "tornado", "volcano" ]
C
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. 
Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 2::: This is a list of meteorology topics. The terms relate to meteorology, the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting. (see also: List of meteorological phenomena) A advection aeroacoustics aerobiology aerography (meteorology) aerology air parcel (in meteorology) air quality index (AQI) airshed (in meteorology) American Geophysical Union (AGU) American Meteorological Society (AMS) anabatic wind anemometer annular hurricane anticyclone (in meteorology) apparent wind Atlantic Oceanographic and Meteorological Laboratory (AOML) Atlantic hurricane season atmometer atmosphere Atmospheric Model Intercomparison Project (AMIP) Atmospheric Radiation Measurement (ARM) (atmospheric boundary layer [ABL]) planetary boundary layer (PBL) atmospheric chemistry atmospheric circulation atmospheric convection atmospheric dispersion modeling atmospheric electricity atmospheric icing atmospheric physics atmospheric pressure atmospheric sciences atmospheric stratification atmospheric thermodynamics atmospheric window (see under Threats) B ball lightning balloon (aircraft) baroclinity barotropity barometer ("to measure atmospheric pressure") berg wind biometeorology blizzard bomb (meteorology) buoyancy Bureau of Meteorology (in Australia) C Canada Weather Extremes Canadian Hurricane Centre (CHC) Cape Verde-type hurricane capping inversion (in meteorology) (see "severe thunderstorms" in paragraph 5) carbon cycle carbon fixation carbon flux carbon monoxide (see under Atmospheric presence) ceiling balloon ("to determine the height of the base of clouds above ground level") ceilometer ("to determine the height of a cloud base") celestial coordinate system celestial equator celestial horizon (rational horizon) celestial navigation (astronavigation) celestial pole Celsius Center for Analysis and Prediction of Storms (CAPS) (in Oklahoma in the US) Center for the Study o Document 3::: Branched flow refers to a phenomenon in wave dynamics, that produces a tree-like pattern involving successive mostly forward scattering events by smooth obstacles deflecting traveling rays or waves. Sudden and significant momentum or wavevector changes are absent, but accumulated small changes can lead to large momentum changes. The path of a single ray is less important than the environs around a ray, which rotate, compress, and stretch around in an area preserving way. Even more revealing are groups, or manifolds of neighboring rays extending over significant zones. Starting rays out from a point but varying their direction over a range, one to the next, or from different points along a line all with the same initial directions are examples of a manifold. Waves have analogous launching conditions, such as a point source spraying in many directions, or an extended plane wave heading on one direction. 
The ray bending or refraction leads to characteristic structure in phase space and nonuniform distributions in coordinate space that look somehow universal and resemble branches in trees or stream beds. The branches taken on non-obvious paths through the refracting landscape that are indirect and nonlocal results of terrain already traversed. For a given refracting landscape, the branches will look completely different depending on the initial manifold. Examples Two-dimensional electron gas Branched flow was first identified in experiments with a two-dimensional electron gas. Electrons flowing from a quantum point contact were scanned using a scanning probe microscope. Instead of usual diffraction patterns, the electrons flowed forming branching strands that persisted for several correlation lengths of the background potential. Ocean dynamics Focusing of random waves in the ocean can also lead to branched flow. The fluctuation in the depth of the ocean floor can be described as a random potential. A tsunami wave propagating in such medium will form branches which Document 4::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is a funnel-shaped cloud of whirling high winds known as? A. hurricane B. tsunami C. tornado D. volcano Answer:
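As an aside to the branched-flow passage quoted above (Document 3), the following is a minimal, self-contained Python sketch of that setting: a fan of rays launched from one point through a weak, smooth random potential, whose accumulated small deflections cluster the rays into branch-like over-densities. It is an illustrative toy model only; the potential form, amplitudes, and step sizes are assumptions chosen for the sketch, not values taken from the retrieved text.

# Schematic illustration (not from the source passages): tracing a fan of rays
# through a weak, smooth random potential, the setting in which branched flow
# is described above. Requires only NumPy; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Weak, smooth random potential V(x, y) built from a few random plane waves.
n_modes = 50
k = rng.normal(scale=2.0, size=(n_modes, 2))      # random wave vectors
phase = rng.uniform(0.0, 2.0 * np.pi, n_modes)    # random phases
amp = 0.02 / np.sqrt(n_modes)                     # keep V weak relative to the kinetic energy

def grad_V(x, y):
    """Gradient of the random potential at the ray positions (x, y)."""
    arg = np.outer(x, k[:, 0]) + np.outer(y, k[:, 1]) + phase   # shape (n_rays, n_modes)
    s = -amp * np.sin(arg)
    return s @ k[:, 0], s @ k[:, 1]

# A manifold of rays: same starting point, slightly different launch directions.
n_rays = 200
angles = np.linspace(-0.3, 0.3, n_rays)
x, y = np.zeros(n_rays), np.zeros(n_rays)
px, py = np.cos(angles), np.sin(angles)

dt, n_steps = 0.01, 2000
trajectory = np.empty((n_steps, n_rays, 2))
for step in range(n_steps):
    gx, gy = grad_V(x, y)
    px -= dt * gx          # small, accumulated momentum kicks (mostly forward scattering)
    py -= dt * gy
    x += dt * px
    y += dt * py
    trajectory[step, :, 0] = x
    trajectory[step, :, 1] = y

# The transverse density of rays at the final time hints at branch formation:
# caustics appear as strongly over-dense clusters of rays.
hist, _ = np.histogram(trajectory[-1, :, 1], bins=40)
print("max / mean transverse ray density:", hist.max() / hist.mean())

Plotting trajectory[..., 0] against trajectory[..., 1] (for example with matplotlib) should display the characteristic tree-like strands that the passage describes.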
sciq-376
multiple_choice
How many types of bosons are there?
[ "one", "four", "five", "three" ]
B
Relavent Documents: Document 0::: This is a list of known and hypothesized particles. Standard Model elementary particles Elementary particles are particles with no measurable internal structure; that is, it is unknown whether they are composed of other particles. They are the fundamental objects of quantum field theory. Many families and sub-families of elementary particles exist. Elementary particles are classified according to their spin. Fermions have half-integer spin while bosons have integer spin. All the particles of the Standard Model have been experimentally observed, including the Higgs boson in 2012. Many other hypothetical elementary particles, such as the graviton, have been proposed, but not observed experimentally. Fermions Fermions are one of the two fundamental classes of particles, the other being bosons. Fermion particles are described by Fermi–Dirac statistics and have quantum numbers described by the Pauli exclusion principle. They include the quarks and leptons, as well as any composite particles consisting of an odd number of these, such as all baryons and many atoms and nuclei. Fermions have half-integer spin; for all known elementary fermions this is . All known fermions except neutrinos, are also Dirac fermions; that is, each known fermion has its own distinct antiparticle. It is not known whether the neutrino is a Dirac fermion or a Majorana fermion. Fermions are the basic building blocks of all matter. They are classified according to whether they interact via the strong interaction or not. In the Standard Model, there are 12 types of elementary fermions: six quarks and six leptons. Quarks Quarks are the fundamental constituents of hadrons and interact via the strong force. Quarks are the only known carriers of fractional charge, but because they combine in groups of three quarks (baryons) or in pairs of one quark and one antiquark (mesons), only integer charge is observed in nature. Their respective antiparticles are the antiquarks, which are identical except th Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: A scalar boson is a boson whose spin equals zero. A boson is a particle whose wave function is symmetric under particle exchange and therefore follows Bose–Einstein statistics. The spin–statistics theorem implies that all bosons have an integer-valued spin. Scalar bosons are the subset of bosons with zero-valued spin. The name scalar boson arises from quantum field theory, which demands that fields of spin-zero particles transform like a scalar under Lorentz transformation (i.e. are Lorentz invariant). A pseudoscalar boson is a scalar boson that has odd parity, whereas "regular" scalar bosons have even parity. Examples Scalar The only fundamental scalar boson in the Standard Model of particle physics is the Higgs boson, the existence of which was confirmed on 14 March 2013 at the Large Hadron Collider by CMS and ATLAS. As a result of this confirmation, the 2013 Nobel Prize in physics was awarded to Peter Higgs and François Englert. Various known composite particles are scalar bosons, e.g. the alpha particle and scalar mesons. The φ4-theory or quartic interaction is a popular "toy model" quantum field theory that uses scalar bosonic fields, used in many introductory quantum textbooks to introduce basic concepts in field theory. Pseudoscalar There are no fundamental pseudoscalars in the Standard Model, but there are pseudoscalar mesons, like the pion. See also Scalar field theory Klein–Gordon equation Vector boson Higgs boson Document 3::: Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education. Structure A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior. Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false multiple choice questions. But during the 2000s, educators found that SBAs would be superior. Document 4::: In particle physics, a boson ( ) is a subatomic particle whose spin quantum number has an integer value (0, 1, 2, ...). Bosons form one of the two fundamental classes of subatomic particle, the other being fermions, which have odd half-integer spin (, , , ...). Every observed subatomic particle is either a boson or a fermion. 
Some bosons are elementary particles occupying a special role in particle physics, distinct from the role of fermions (which are sometimes described as the constituents of "ordinary matter"). Certain elementary bosons (e.g. gluons) act as force carriers, which give rise to forces between other particles, while one (the Higgs boson) contributes to the phenomenon of mass. Other bosons, such as mesons, are composite particles made up of smaller constituents. Outside the realm of particle physics, multiple identical composite bosons (in this context sometimes known as 'bose particles') behave at high densities or low temperatures in a characteristic manner described by Bose–Einstein statistics: for example a gas of helium-4 atoms becomes a superfluid at temperatures close to absolute zero. Similarly, superconductivity arises because some quasiparticles, such as Cooper pairs, behave in the same way. Name The name boson was coined by Paul Dirac to commemorate the contribution of Satyendra Nath Bose, an Indian physicist, when he was a reader (later professor) at the University of Dhaka, Bengal (now in Bangladesh), he developed, in conjunction with Albert Einstein, the theory characterising such particles, now known as Bose–Einstein statistics and Bose-Einstein condensate. Elementary bosons All observed elementary particles are either bosons (with integer spin) or fermions (with odd half-integer spin). Whereas the elementary particles that make up ordinary matter (leptons and quarks) are fermions, elementary bosons occupy a special role in particle physics. They act either as force carriers which give rise to forces between other particles, or The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How many types of bosons are there? A. one B. four C. five D. three Answer:
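For reference alongside the boson passages above, the "characteristic manner described by Bose–Einstein statistics" can be made concrete with the standard occupation-number formula (a textbook result, not quoted from the retrieved documents):

\begin{equation}
  \bar{n}(\epsilon) \;=\; \frac{1}{e^{(\epsilon-\mu)/k_B T} - 1}, \qquad \mu \le \epsilon_{\min},
\end{equation}

where \(\epsilon\) is the single-particle energy, \(T\) the temperature, and \(\mu\) the chemical potential; the unbounded growth of \(\bar{n}\) in the lowest-energy state as \(T\) decreases is what underlies the helium-4 superfluidity mentioned in the passage.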
sciq-245
multiple_choice
The rings of what planet can be easily seen from earth?
[ "saturn", "Neptune", "jupiter", "Venus" ]
A
Relavent Documents: Document 0::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 1::: This article is a list of notable unsolved problems in astronomy. Some of these problems are theoretical, meaning that existing theories may be incapable of explaining certain observed phenomena or experimental results. Others are experimental, meaning that experiments necessary to test proposed theory or investigate a phenomenon in greater detail have not yet been performed. Some pertain to unique events or occurrences that have not repeated themselves and whose causes remain unclear. Planetary astronomy Our solar system Orbiting bodies and rotation: Are there any non-dwarf planets beyond Neptune? Why do extreme trans-Neptunian objects have elongated orbits? Rotation rate of Saturn: Why does the magnetosphere of Saturn rotate at a rate close to that at which the planet's clouds rotate? What is the rotation rate of Saturn's deep interior? Satellite geomorphology: What is the origin of the chain of high mountains that closely follows the equator of Saturn's moon, Iapetus? Are the mountains the remnant of hot and fast-rotating young Iapetus? Are the mountains the result of material (either from the rings of Saturn or its own ring) that over time collected upon the surface? Extra-solar How common are Solar System-like planetary systems? Some observed planetary systems contain Super-Earths and Hot Jupiters that orbit very close to their stars. Systems with Jupiter-like planets in Jupiter-like orbits appear to be rare. 
There are several possibilities why Jupiter-like orbits are rare, including that data is lacking or the grand tack hypothesis. Stellar astronomy and astrophysics Solar cycle: How does the Sun generate its periodically reversing large-scale magnetic field? How do other Sol-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun? What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state? Coronal heat Document 2::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 3::: This is a list of potentially habitable exoplanets. The list is mostly based on estimates of habitability by the Habitable Exoplanets Catalog (HEC), and data from the NASA Exoplanet Archive. The HEC is maintained by the Planetary Habitability Laboratory at the University of Puerto Rico at Arecibo. There is also a speculative list being developed of superhabitable planets. Surface planetary habitability is thought to require orbiting at the right distance from the host star for liquid surface water to be present, in addition to various geophysical and geodynamical aspects, atmospheric density, radiation type and intensity, and the host star's plasma environment. List This is a list of exoplanets within the circumstellar habitable zone that are under 10 Earth masses and smaller than 2.5 Earth radii, and thus have a chance of being rocky. 
Note that inclusion on this list does not guarantee habitability, and in particular the larger planets are unlikely to have a rocky composition. Earth is included for comparison. Note that mass and radius values prefixed with "~" have not been measured, but are estimated from a mass-radius relationship. Previous candidates Some exoplanet candidates detected by radial velocity that were originally thought to be potentially habitable were later found to most likely be artifacts of stellar activity. These include Gliese 581 d & g, Gliese 667 Ce & f, Gliese 682 b & c, Kapteyn b, and Gliese 832 c. HD 85512 b was initially estimated to be potentially habitable, but updated models for the boundaries of the habitable zone placed the planet interior to the HZ, and it is now considered non-habitable. Kepler-69c has gone through a similar process; though initially estimated to be potentially habitable, it was quickly realized that the planet is more likely to be similar to Venus, and is thus no longer considered habitable. Several other planets, such as Gliese 180 b, also appear to be examples of planets once considered potentially habit Document 4::: Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women. The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development. Current status of girls and women in STEM education Overall trends in STEM education Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle. Learning achievement in STEM education Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The rings of what planet can be easily seen from earth? A. saturn B. Neptune C. jupiter D. Venus Answer:
sciq-7569
multiple_choice
In addition to five classes of fish, what other classes make up the species of vertebrates?
[ "reptiles, birds, mammals, and primates", "amphibians, reptiles, birds, and mammals", "amphibians , vertebrae , birds , and mammals", "insects, amphibians, reptiles, and birds" ]
B
Relavent Documents: Document 0::: Vertebrate zoology is the biological discipline that consists of the study of Vertebrate animals, i.e., animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. Many natural history museums have departments named Vertebrate Zoology. In some cases whole museums bear this name, e.g. the Museum of Vertebrate Zoology at the University of California, Berkeley. Subdivisions This subdivision of zoology has many further subdivisions, including: Ichthyology - the study of fishes. Mammalogy - the study of mammals. Chiropterology - the study of bats. Primatology - the study of primates. Ornithology - the study of birds. Herpetology - the study of reptiles. Batrachology - the study of amphibians. These divisions are sometimes further divided into more specific specialties. Document 1::: Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology. Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad Document 2::: Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology. 
Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture. By common name List of animal names (male, female, young, and group) By aspect List of common household pests List of animal sounds List of animals by number of neurons By domestication List of domesticated animals By eating behaviour List of herbivorous animals List of omnivores List of carnivores By endangered status IUCN Red List endangered species (Animalia) United States Fish and Wildlife Service list of endangered species By extinction List of extinct animals List of extinct birds List of extinct mammals List of extinct cetaceans List of extinct butterflies By region Lists of amphibians by region Lists of birds by region Lists of mammals by region Lists of reptiles by region By individual (real or fictional) Real Lists of snakes List of individual cats List of oldest cats List of giant squids List of individual elephants List of historical horses List of leading Thoroughbred racehorses List of individual apes List of individual bears List of giant pandas List of individual birds List of individual bovines List of individual cetaceans List of individual dogs List of oldest dogs List of individual monkeys List of individual pigs List of w Document 3::: In zoology, mammalogy is the study of mammals – a class of vertebrates with characteristics such as homeothermic metabolism, fur, four-chambered hearts, and complex nervous systems. Mammalogy has also been known as "mastology," "theriology," and "therology." The archive of number of mammals on earth is constantly growing, but is currently set at 6,495 different mammal species including recently extinct. There are 5,416 living mammals identified on earth and roughly 1,251 have been newly discovered since 2006. The major branches of mammalogy include natural history, taxonomy and systematics, anatomy and physiology, ethology, ecology, and management and control. The approximate salary of a mammalogist varies from $20,000 to $60,000 a year, depending on their experience. Mammalogists are typically involved in activities such as conducting research, managing personnel, and writing proposals. Mammalogy branches off into other taxonomically-oriented disciplines such as primatology (study of primates), and cetology (study of cetaceans). Like other studies, mammalogy is also a part of zoology which is also a part of biology, the study of all living things. Research purposes Mammalogists have stated that there are multiple reasons for the study and observation of mammals. Knowing how mammals contribute or thrive in their ecosystems gives knowledge on the ecology behind it. Mammals are often used in business industries, agriculture, and kept for pets. Studying mammals habitats and source of energy has led to aiding in survival. The domestication of some small mammals has also helped discover several different diseases, viruses, and cures. Mammalogist A mammalogist studies and observes mammals. In studying mammals, they can observe their habitats, contributions to the ecosystem, their interactions, and the anatomy and physiology. A mammalogist can do a broad variety of things within the realm of mammals. A mammalogist on average can make roughly $58,000 a year. This dep Document 4::: The Reptile Database is a scientific database that collects taxonomic information on all living reptile species (i.e. no fossil species such as dinosaurs). 
The database focuses on species (as opposed to higher ranks such as families) and has entries for all currently recognized ~13,000 species and their subspecies, although there is usually a lag time of up to a few months before newly described species become available online. The database collects scientific and common names, synonyms, literature references, distribution information, type information, etymology, and other taxonomically relevant information. History The database was founded in 1995 as EMBL Reptile Database when the founder, Peter Uetz, was a graduate student at the European Molecular Biology Laboratory (EMBL) in Heidelberg, Germany. Thure Etzold had developed the first web interface for the EMBL DNA sequence database which was also used as interface for the Reptile Database. In 2006 the database moved to The Institute of Genomic Research (TIGR) and briefly operated as TIGR Reptile Database until TIGR was merged into the J Craig Venter Institute (JCVI) where Uetz was an associate professor until 2010. Since 2010 the database has been maintained on servers in the Czech Republic under the supervision of Peter Uetz and Jirí Hošek, a Czech programmer. The database celebrated its 25th anniversary together with AmphibiaWeb which had its 20th anniversary in 2021. Content As of September 2020, the Reptile Database lists about 11,300 species (including another ~2,200 subspecies) in about 1200 genera (see figure), and has more than 50,000 literature references and about 15,000 photos. The database has constantly grown since its inception with an average of 100 to 200 new species described per year over the preceding decade. Recently, the database also added a more or less complete list of primary type specimens. Relationship to other databases The Reptile Database has been a member of the Species 2000 pro The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In addition to five classes of fish, what other classes make up the species of vertebrates? A. reptiles, birds, mammals, and primates B. amphibians, reptiles, birds, and mammals C. amphibians , vertebrae , birds , and mammals D. insects, amphibians, reptiles, and birds Answer:
sciq-3306
multiple_choice
Gene duplications that are able to persist over many generations without causing too much harm to an organism or species can lead to what?
[ "characteristics", "mutations", "permutations", "parasites" ]
B
Relavent Documents: Document 0::: Evolution by gene duplication is an event by which a gene or part of a gene can have two identical copies that can not be distinguished from each other. This phenomenon is understood to be an important source of novelty in evolution, providing for an expanded repertoire of molecular activities. The underlying mutational event of duplication may be a conventional gene duplication mutation within a chromosome, or a larger-scale event involving whole chromosomes (aneuploidy) or whole genomes (polyploidy). A classic view, owing to Susumu Ohno, which is known as Ohno model, he explains how duplication creates redundancy, the redundant copy accumulates beneficial mutations which provides fuel for innovation. Knowledge of evolution by gene duplication has advanced more rapidly in the past 15 years due to new genomic data, more powerful computational methods of comparative inference, and new evolutionary models. Theoretical models Several models exist that try to explain how new cellular functions of genes and their encoded protein products evolve through the mechanism of duplication and divergence. Although each model can explain certain aspects of the evolutionary process, the relative importance of each aspect is still unclear. This page only presents which theoretical models are currently discussed in the literature. Review articles on this topic can be found at the bottom. In the following, a distinction will be made between explanations for the short-term effects (preservation) of a gene duplication and its long-term outcomes. Preservation of gene duplicates Since a gene duplication occurs in only one cell, either in a single-celled organism or in the germ cell of a multi-cellular organism, its carrier (i.e. the organism) usually has to compete against other organisms that do not carry the duplication. If the duplication disrupts the normal functioning of an organism, the organism has a reduced reproductive success (or low fitness) compared to its competitors Document 1::: Genetic redundancy is a term typically used to describe situations where a given biochemical function is redundantly encoded by two or more genes. In these cases, mutations (or defects) in one of these genes will have a smaller effect on the fitness of the organism than expected from the genes’ function. Characteristic examples of genetic redundancy include (Enns, Kanaoka et al. 2005) and (Pearce, Senis et al. 2004). Many more examples are thoroughly discussed in (Kafri, Levy & Pilpel. 2006). The main source of genetic redundancy is the process of gene duplication which generates multiplicity in gene copy number. A second and less frequent source of genetic redundancy are convergent evolutionary processes leading to genes that are close in function but unrelated in sequence (Galperin, Walker & Koonin 1998). Genetic redundancy is typically associated with signaling networks, in which many proteins act together to accomplish teleological functions. In contrast to expectations, genetic redundancy is not associated with gene duplications [Wagner, 2007], neither do redundant genes mutate faster than essential genes [Hurst 1999]. Therefore, genetic redundancy has classically aroused much debate in the context of evolutionary biology (Nowak et al., 1997; Kafri, Springer & Pilpel . 2009). From an evolutionary standpoint, genes with overlapping functions imply minimal, if any, selective pressures acting on these genes. 
One therefore expects that the genes participating in such buffering of mutations will be subject to severe mutational drift diverging their functions and/or expression patterns with considerably high rates. Indeed it has been shown that the functional divergence of paralogous pairs in both yeast and human is an extremely rapid process. Taking these notions into account, the very existence of genetic buffering, and the functional redundancies required for it, presents a paradox in light of the evolutionary concepts. On one hand, for genetic buffering to take Document 2::: Gene redundancy is the existence of multiple genes in the genome of an organism that perform the same function. Gene redundancy can result from gene duplication. Such duplication events are responsible for many sets of paralogous genes. When an individual gene in such a set is disrupted by mutation or targeted knockout, there can be little effect on phenotype as a result of gene redundancy, whereas the effect is large for the knockout of a gene with only one copy. Gene knockout is a method utilized in some studies aiming to characterize the maintenance and fitness effects functional overlap. Classical models of maintenance propose that duplicated genes may be conserved to various extents in genomes due to their ability to compensate for deleterious loss of function mutations. These classical models do not take into account the potential impact of positive selection. Beyond these classical models, researchers continue to explore the mechanisms by which redundant genes are maintained and evolve. Gene redundancy has long been appreciated as a source of novel gene origination; that is, new genes may arise when selective pressure exists on the duplicate, while the original gene is maintained to perform the original function, as proposed by newer models. Origin and Evolution of Redundant Genes Gene redundancy most often results from Gene duplication. Three of the more common mechanisms of gene duplication are retroposition, unequal crossing over, and non-homologous segmental duplication. Retroposition is when the mRNA transcript of a gene is reverse transcribed back into DNA and inserted into the genome at a different location. During unequal crossing over, homologous chromosomes exchange uneven portions of their DNA. This can lead to the transfer of one chromosome's gene to the other chromosome, leaving two of the same gene on one chromosome, and no copies of the gene on the other chromosome. Non-homologous duplications result from replication errors that shift the g Document 3::: The Bateson Lecture is an annual genetics lecture held as a part of the John Innes Symposium since 1972, in honour of the first Director of the John Innes Centre, William Bateson. Past Lecturers Source: John Innes Centre 1951 Sir Ronald Fisher - "Statistical methods in Genetics" 1953 Julian Huxley - "Polymorphic variation: a problem in genetical natural history" 1955 Sidney C. Harland - "Plant breeding: present position and future perspective" 1957 J.B.S. Haldane - "The theory of evolution before and after Bateson" 1959 Kenneth Mather - "Genetics Pure and Applied" 1972 William Hayes - "Molecular genetics in retrospect" 1974 Guido Pontecorvo - "Alternatives to sex: genetics by means of somatic cells" 1976 Max F. Perutz - "Mechanism of respiratory haemoglobin" 1979 J. Heslop-Harrison - "The forgotten generation: some thoughts on the genetics and physiology of Angiosperm Gametophytes " 1982 Sydney Brenner - "Molecular genetics in prospect" 1984 W.W. 
Franke - "The cytoskeleton - the insoluble architectural framework of the cell" 1986 Arthur Kornberg - "Enzyme systems initiating replication at the origin of the E. coli chromosome" 1988 Gottfried Schatz - "Interaction between mitochondria and the nucleus" 1990 Christiane Nusslein-Volhard - "Axis determination in the Drosophila embryo" 1992 Frank Stahl - "Genetic recombination: thinking about it in phage and fungi" 1994 Ira Herskowitz - "Violins and orchestras: what a unicellular organism can do" 1996 R.J.P. Williams - "An Introduction to Protein Machines" 1999 Eugene Nester - "DNA and Protein Transfer from Bacteria to Eukaryotes - the Agrobacterium story" 2001 David Botstein - "Extracting biological information from DNA Microarray Data" 2002 Elliot Meyerowitz 2003 Thomas Steitz - "The Macromolecular machines of gene expression" 2008 Sean Carroll - "Endless flies most beautiful: the role of cis-regulatory sequences in the evolution of animal form" 2009 Sir Paul Nurse - "Genetic transmission through Document 4::: Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology. The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis. Subfields Evolution is the central unifying concept in biology. Biology can be divided into various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Gene duplications that are able to persist over many generations without causing too much harm to an organism or species can lead to what? A. characteristics B. mutations C. permutations D. parasites Answer:
sciq-27
multiple_choice
A small scale version of what type of map displays individual rock units?
[ "geographic map", "geologic map", "seismic map", "polar map" ]
B
Relavent Documents: Document 0::: Spatial scale is a specific application of the term scale for describing or categorizing (e.g. into orders of magnitude) the size of a space (hence spatial), or the extent of it at which a phenomenon or process occurs. For instance, in physics an object or phenomenon can be called microscopic if too small to be visible. In climatology, a micro-climate is a climate which might occur in a mountain, valley or near a lake shore. In statistics, a megatrend is a political, social, economical, environmental or technological trend which involves the whole planet or is supposed to last a very large amount of time. The concept is also used in geography, astronomy, and meteorology. These divisions are somewhat arbitrary; where, on this table, mega- is assigned global scope, it may only apply continentally or even regionally in other contexts. The interpretations of meso- and macro- must then be adjusted accordingly. See also Astronomical units of length Cosmic distance ladder List of examples of lengths Orders of magnitude (length) Scale (analytical tool) Scale (geography) Scale (map) Scale (ratio) Location of Earth Document 1::: Geochores (Greek gé "the earth" and chora "area") are relatively large landscape areas with similar – but owing to their size not fully uniform – characteristics. They therefore consist of a tapestry of smaller landscape units, which can be hierarchically grouped: Physiotopes or geotopes form the base unit (tope from the Greek, τόπος, "place"). These are objects whose features are assessed as homogenous and which cannot sensibly be subdivided into smaller landscapes. Their area depends on the distribution pattern of their features and on the purpose or aim of the classification, but in general they are between 0.1 and 5 hectares in area. Nanogeochores or nanochores are the simplest level of physiotopes. Example: Ameisenberg near Oybin is part of the Oybin Rock Region (microgeochore) Microgeochores are small scale landscape units with an average area of 12 km2. In terms of biotopes or woodland or agricultural land which is managed in a certain way, they form a tapestry of nanogeochores. They cover areas which are similar mainly in terms of their geological origins, rocks, topographical elevation or relief energy. They are a good example of the how geological and topographical history affects the resulting landscape structure. Example: Hochwald Ridge and Oybin Rock Region Mesogeochores are simply formations and groups of microgeochores. Their association is based on similarities of climate, topography such as mountains, valleys and hills or associated features from the Pleistocene (ice age). They are oriented towards the management and relative size of the microgeochores of which they comprise. Example: Zittau Mountains or Zittau Basin Macrogeochores or major landscapes - as natural region major units - are simply groupings of mesogeochores, whose cohesiveness is based e.g. on geological foundations, on climatic conditions or vegetation (e. g. hpnV). They are "regional" in size. Example: Lusatian Mountains or Upper Lusatian Highlands. Literature Haase, G. Document 2::: Engels Maps is a map company in the Ohio Valley with particular concentration on the Cincinnati-Dayton region. It also produces chamber of commerce maps. 
Publications It has three semi-annual publications that form its foundation: Cincinnati Engels Guide Dayton Engels Guide Indianapolis Engels Guide Their maps are also found in the Cincinnati Bell Yellow Pages and the Dayton WorkBook. Corporate history Engels Maps was founded by Judson Engels in 1994. Sources External links Engels Maps http://cincinnati.citysearch.com/profile/4343456/fort_thomas_ky/engels_maps_guide.html Target Marketing http://www.macraesbluebook.com/search/company.cfm?company=838024 http://engelsmaps.com engelsmaps.com Geodesy Companies based in Kentucky Software companies based in Kentucky American companies established in 1994 Map companies of the United States Campbell County, Kentucky 1994 establishments in Kentucky Software companies of the United States Software companies established in 1994 Document 3::: This is a list of free and open-source software for geological data handling and interpretation. The list is split into broad categories, depending on the intended use of the software and its scope of functionality. Notice that 'free and open-source' requires that the source code is available and users are given a free software license. Simple being 'free of charge' is not sufficient—see gratis versus libre. Well logging & Borehole visualisation Geosciences software platforms Geostatistics Forward modeling Geomodeling Visualization, interpretation & analysis packages Geographic information systems (GIS) This important class of tools is already listed in the article List of GIS software. Not true free and open-source projects The following projects have unknown licensing, licenses or other conditions which place some restriction on use or redistribution, or which depend on non-open-source software like MATLAB or XVT (and therefore do not meet the Open Source Definition from the Open Source Initiative). Document 4::: A cognitive map is a type of mental representation which serves an individual to acquire, code, store, recall, and decode information about the relative locations and attributes of phenomena in their everyday or metaphorical spatial environment. The concept was introduced by Edward Tolman in 1948. He tried to explain the behavior of rats that appeared to learn the spatial layout of a maze, and subsequently the concept was applied to other animals, including humans. The term was later generalized by some researchers, especially in the field of operations research, to refer to a kind of semantic network representing an individual's personal knowledge or schemas. Overview Cognitive maps have been studied in various fields, such as psychology, education, archaeology, planning, geography, cartography, architecture, landscape architecture, urban planning, management and history. Because of the broad use and study of cognitive maps, it has become a colloquialism for almost any mental representation or model. As a consequence, these mental models are often referred to, variously, as cognitive maps, mental maps, scripts, schemata, and frame of reference. Cognitive maps are a function of the working brain that humans and animals use for movement in a new environment. They help us in recognizing places, computing directions and distances, and in critical-thinking on shortcuts. They support us in wayfinding in an environment, and act as blueprints for new technology. Cognitive maps serve the construction and accumulation of spatial knowledge, allowing the "mind's eye" to visualize images in order to reduce cognitive load, enhance recall and learning of information. 
This type of spatial thinking can also be used as a metaphor for non-spatial tasks, where people performing non-spatial tasks involving memory and imaging use spatial knowledge to aid in processing the task. They include information about the spatial relations that objects have among each other in an environment The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. A small scale version of what type of map displays individual rock units? A. geographic map B. geologic map C. seismic map D. polar map Answer:
sciq-8166
multiple_choice
Snakes are which type of animal?
[ "herbivorous", "omnivores", "carnivorous", "vegivors" ]
C
Relavent Documents: Document 0::: The Haitian border threadsnake (Mitophis leptepileptus) is a possibly extinct species of snake in the family Leptotyphlopidae endemic to Haiti. Description Last seen in 1984, the species was thought to be already rare, but intensive surveys in the area have not recorded it. If it is extinct, causes are certainly due to deforestation of its habitat and agricultural activities, which have intensified since its last collection. Document 1::: A herpetarium is a zoological exhibition space for reptiles and amphibians, most commonly a dedicated area of a larger zoo. A herpetarium which specializes in snakes is an ophidiarium or serpentarium, which are more common as stand-alone entities also known as snake farms. Many snake farms milk snakes for venom for medical and scientific research. Notable herpetariums Alice Springs Reptile Centre in Alice Springs, Australia Armadale Reptile Centre in Perth, Australia Australian Reptile Park in Somersby, Australia Chennai Snake Park Trust in Chennai, India Crocodiles of the World in Brize Norton, UK Crocosaurus Cove in Darwin, Australia Clyde Peeling's Reptiland in Allenwood, Pennsylvania Kentucky Reptile Zoo in Slade, Kentucky The LAIR at the Los Angeles Zoo in Los Angeles, California Serpent Safari in Gurnee, Illinois Saint Louis Zoo Herpetarium in St. Louis, Missouri Staten Island Zoo Serpentarium in New York City, New York World of Reptiles at the Bronx Zoo in New York City, New York See also Herpetoculture Bill Haast (founder of Miami Serpentarium) Document 2::: Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals. Education Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered. Bachelor degree At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs. 
Pre-veterinary emphasis Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th Document 3::: Envenomation is the process by which venom is injected by the bite or sting of a venomous animal. Many kinds of animals, including mammals (e.g., the northern short-tailed shrew, Blarina brevicauda), reptiles (e.g., the king cobra), spiders (e.g., black widows), insects (e.g., wasps), and fish (e.g., stone fish) employ venom for hunting and for self-defense. In particular, snakebite envenoming is considered a neglected tropical disease resulting in >100,000 deaths and maiming >400,000 people per year. Mechanisms Some venoms are applied externally, especially to sensitive tissues such as the eyes, but most venoms are administered by piercing the skin of the victim. Venom in the saliva of the Gila monster and some other reptiles enters prey through bites of grooved teeth. More commonly animals have specialized organs such as hollow teeth (fangs) and tubular stingers that penetrate the prey's skin, whereupon muscles attached to the attacker's venom reservoir squirt venom deep within the victim's body tissue. For example, the fangs of venomous snakes are connected to a venom gland by means of a duct. Death may occur as a result of bites or stings. The rate of envenoming is described as the likelihood of venom successfully entering a system upon bite or sting. Mechanisms of snake envenomation Snakes administer venom to their target by piercing the target's skin with specialized organs known as fangs. Snakebites can be broken into four stages; strike launch, fang erection, fang penetration, and fang withdrawal. Snakes have a venom gland connected to a duct and subsequent fangs. The fangs have hollow tubes with grooved sides that allow venom to flow within them. During snake bites, the fangs penetrate the skin of the target and the fang sheath, a soft tissue organ surrounding the fangs, is retracted. The fang sheath retraction causes an increase in internal pressures. This pressure differential initiates venom flow in the venom delivery system. Larger snakes have been Document 4::: The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas. The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014. The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools. See also Marine Science Ministry of Fisheries and Aquatic Resources Development The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Snakes are which type of animal? A. herbivorous B. omnivores C. carnivorous D. vegivors Answer:
sciq-9611
multiple_choice
What decay produces helium nuclei?
[ "Beta Decay", "alpha decay", "radiative decay", "duo decay" ]
B
Relavent Documents: Document 0::: Reaction products This sequence of reactions can be understood by thinking of the two interacting carbon nuclei as coming together to form an excited state of the 24Mg nucleus, which then decays in one of the five ways listed above. The first two reactions are strongly exothermic, as indicated by the large positive energies released, and ar Document 1::: Neutron emission is a mode of radioactive decay in which one or more neutrons are ejected from a nucleus. It occurs in the most neutron-rich/proton-deficient nuclides, and also from excited states of other nuclides as in photoneutron emission and beta-delayed neutron emission. As only a neutron is lost by this process the number of protons remains unchanged, and an atom does not become an atom of a different element, but a different isotope of the same element. Neutrons are also produced in the spontaneous and induced fission of certain heavy nuclides. Spontaneous neutron emission As a consequence of the Pauli exclusion principle, nuclei with an excess of protons or neutrons have a higher average energy per nucleon. Nuclei with a sufficient excess of neutrons have a greater energy than the combination of a free neutron and a nucleus with one less neutron, and therefore can decay by neutron emission. Nuclei which can decay by this process are described as lying beyond the neutron drip line. Two examples of isotopes that emit neutrons are beryllium-13 (decaying to beryllium-12 with a mean life ) and helium-5 (helium-4, ). In tables of nuclear decay modes, neutron emission is commonly denoted by the abbreviation n. {| class="wikitable" align="left" |+ Neutron emitters to the left of lower dashed line (see also: Table of nuclides) |- |- |- |- |- |- |- |- |- |- |- |- |- |- |- |- |- |} Double neutron emission Some neutron-rich isotopes decay by the emission of two or more neutrons. For example hydrogen-5 and helium-10 decay by the emission of two neutrons, hydrogen-6 by the emission of 3 or 4 neutrons, and hydrogen-7 by emission of 4 neutrons. Photoneutron emission Some nuclides can be induced to eject a neutron by gamma radiation. One such nuclide is 9Be; its photodisintegration is significant in nuclear astrophysics, pertaining to the abundance of beryllium and the consequences of the instability of 8Be. This also makes this isotope useful as a Document 2::: In nuclear physics, beta decay (β-decay) is a type of radioactive decay in which an atomic nucleus emits a beta particle (fast energetic electron or positron), transforming into an isobar of that nuclide. For example, beta decay of a neutron transforms it into a proton by the emission of an electron accompanied by an antineutrino; or, conversely a proton is converted into a neutron by the emission of a positron with a neutrino in so-called positron emission. Neither the beta particle nor its associated (anti-)neutrino exist within the nucleus prior to beta decay, but are created in the decay process. By this process, unstable atoms obtain a more stable ratio of protons to neutrons. The probability of a nuclide decaying due to beta and other forms of decay is determined by its nuclear binding energy. The binding energies of all existing nuclides form what is called the nuclear band or valley of stability. For either electron or positron emission to be energetically possible, the energy release (see below) or Q value must be positive. Beta decay is a consequence of the weak force, which is characterized by relatively lengthy decay times. 
Nucleons are composed of up quarks and down quarks, and the weak force allows a quark to change its flavour by emission of a W boson leading to creation of an electron/antineutrino or positron/neutrino pair. For example, a neutron, composed of two down quarks and an up quark, decays to a proton composed of a down quark and two up quarks. Electron capture is sometimes included as a type of beta decay, because the basic nuclear process, mediated by the weak force, is the same. In electron capture, an inner atomic electron is captured by a proton in the nucleus, transforming it into a neutron, and an electron neutrino is released. Description The two types of beta decay are known as beta minus and beta plus. In beta minus (β−) decay, a neutron is converted to a proton, and the process creates an electron and an electron antineutrino; Document 3::: In nuclear physics and chemistry, the Q value for a reaction is the amount of energy absorbed or released during the nuclear reaction. The Q value relates to the enthalpy of a chemical reaction or the energy of radioactive decay products. It can be determined from the masses of reactants and products. Q values affect reaction rates. In general, the larger the positive Q value for the reaction, the faster the reaction proceeds, and the more likely the reaction is to "favor" the products. Q = (m_r − m_p) c^2, where the masses are in atomic mass units. Also, both m_r and m_p are the sums of the reactant and product masses respectively. Definition The conservation of energy, between the initial and final energy of a nuclear process enables the general definition of Q based on the mass–energy equivalence. For any radioactive particle decay, the kinetic energy difference will be given by: Q = T_final − T_initial, where T denotes the kinetic energy of the mass m. A reaction with a positive Q value is exothermic, i.e. has a net release of energy, since the kinetic energy of the final state is greater than the kinetic energy of the initial state. A reaction with a negative Q value is endothermic, i.e. requires a net energy input, since the kinetic energy of the final state is less than the kinetic energy of the initial state. Observe that a chemical reaction is exothermic when it has a negative enthalpy of reaction, in contrast a positive Q value in a nuclear reaction. The Q value can also be expressed in terms of the Mass excess of the nuclear species as: Q = Δ_r − Δ_p. Proof The mass of a nucleus can be written as m = A·m_u + Δ, where A is the mass number (sum of number of protons and neutrons) and m_u ≈ 931.494 MeV/c^2. Note that the count of nucleons is conserved in a nuclear reaction. Hence, A_r = A_p and Q = Δ_r − Δ_p. Applications Chemical Q values are measured in calorimetry. Exothermic chemical reactions tend to be more spontaneous and can emit light or heat, resulting in runaway feedback(i.e. explosions). Q values are also featured in particle physics. For example, Document 4::: In nuclear physics, double beta decay is a type of radioactive decay in which two neutrons are simultaneously transformed into two protons, or vice versa, inside an atomic nucleus. As in single beta decay, this process allows the atom to move closer to the optimal ratio of protons and neutrons. As a result of this transformation, the nucleus emits two detectable beta particles, which are electrons or positrons. The literature distinguishes between two types of double beta decay: ordinary double beta decay and neutrinoless double beta decay. In ordinary double beta decay, which has been observed in several isotopes, two electrons and two electron antineutrinos are emitted from the decaying nucleus.
In neutrinoless double beta decay, a hypothesized process that has never been observed, only electrons would be emitted. History The idea of double beta decay was first proposed by Maria Goeppert Mayer in 1935. In 1937, Ettore Majorana demonstrated that all results of beta decay theory remain unchanged if the neutrino were its own antiparticle, now known as a Majorana particle. In 1939, Wendell H. Furry proposed that if neutrinos are Majorana particles, then double beta decay can proceed without the emission of any neutrinos, via the process now called neutrinoless double beta decay. It is not yet known whether the neutrino is a Majorana particle, and, relatedly, whether neutrinoless double beta decay exists in nature. In 1930–1940s, parity violation in weak interactions was not known, and consequently calculations showed that neutrinoless double beta decay should be much more likely to occur than ordinary double beta decay, if neutrinos were Majorana particles. The predicted half-lives were on the order of ~ years. Efforts to observe the process in laboratory date back to at least 1948 when E.L. Fireman made the first attempt to directly measure the half-life of the isotope with a Geiger counter. Radiometric experiments through about 1960 produced negative results or The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What decay produces helium nuclei? A. Beta Decay B. alpha decay C. radiative decay D. duo decay Answer:
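The Q-value relation quoted in Document 3 above can be checked numerically. The short Python sketch below is an editorial illustration, not part of the dataset: it assumes rounded literature atomic masses for 238U, 234Th and 4He, and shows that the alpha decay named in the answer releases roughly 4.3 MeV along with the helium nucleus.

# Illustrative sketch (assumption: rounded literature atomic masses in u).
U_TO_MEV = 931.494  # energy equivalent of 1 atomic mass unit, in MeV/c^2

def q_value_mev(reactant_masses_u, product_masses_u):
    # Q = (sum of reactant masses - sum of product masses) * c^2, masses in u
    return (sum(reactant_masses_u) - sum(product_masses_u)) * U_TO_MEV

# Alpha decay of uranium-238 into thorium-234 plus a helium-4 nucleus.
q = q_value_mev([238.050788], [234.043601, 4.002602])
print(f"Q(238U -> 234Th + 4He) = {q:.2f} MeV")  # ~4.27 MeV, positive, so energy is released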
sciq-4496
multiple_choice
What blood pressure reading measures the pressure in the vessels between heartbeats?
[ "systolic", "diastolic", "metabolic", "plasma" ]
B
Relavent Documents: Document 0::: Blood pressure (BP) is the pressure of circulating blood against the walls of blood vessels. Most of this pressure results from the heart pumping blood through the circulatory system. When used without qualification, the term "blood pressure" refers to the pressure in a brachial artery, where it is most commonly measured. Blood pressure is usually expressed in terms of the systolic pressure (maximum pressure during one heartbeat) over diastolic pressure (minimum pressure between two heartbeats) in the cardiac cycle. It is measured in millimeters of mercury (mmHg) above the surrounding atmospheric pressure, or in kilopascals (kPa). Blood pressure is one of the vital signs—together with respiratory rate, heart rate, oxygen saturation, and body temperature—that healthcare professionals use in evaluating a patient's health. Normal resting blood pressure, in an adult is approximately systolic over diastolic, denoted as "120/80 mmHg". Globally, the average blood pressure, age standardized, has remained about the same since 1975 to the present, at approx. 127/79 mmHg in men and 122/77 mmHg in women, although these average data mask significantly diverging regional trends. Traditionally, a health-care worker measured blood pressure non-invasively by auscultation (listening) through a stethoscope for sounds in one arm's artery as the artery is squeezed, closer to the heart, by an aneroid gauge or a mercury-tube sphygmomanometer. Auscultation is still generally considered to be the gold standard of accuracy for non-invasive blood pressure readings in clinic. However, semi-automated methods have become common, largely due to concerns about potential mercury toxicity, although cost, ease of use and applicability to ambulatory blood pressure or home blood pressure measurements have also influenced this trend. Early automated alternatives to mercury-tube sphygmomanometers were often seriously inaccurate, but modern devices validated to international standards achieve an av Document 1::: In medicine, a pulse represents the tactile arterial palpation of the cardiac cycle (heartbeat) by trained fingertips. The pulse may be palpated in any place that allows an artery to be compressed near the surface of the body, such as at the neck (carotid artery), wrist (radial artery), at the groin (femoral artery), behind the knee (popliteal artery), near the ankle joint (posterior tibial artery), and on foot (dorsalis pedis artery). Pulse (or the count of arterial pulse per minute) is equivalent to measuring the heart rate. The heart rate can also be measured by listening to the heart beat by auscultation, traditionally using a stethoscope and counting it for a minute. The radial pulse is commonly measured using three fingers. This has a reason: the finger closest to the heart is used to occlude the pulse pressure, the middle finger is used get a crude estimate of the blood pressure, and the finger most distal to the heart (usually the ring finger) is used to nullify the effect of the ulnar pulse as the two arteries are connected via the palmar arches (superficial and deep). The study of the pulse is known as sphygmology. Physiology Claudius Galen was perhaps the first physiologist to describe the pulse. The pulse is an expedient tactile method of determination of systolic blood pressure to a trained observer. Diastolic blood pressure is non-palpable and unobservable by tactile methods, occurring between heartbeats. 
Pressure waves generated by the heart in systole move the arterial walls. Forward movement of blood occurs when the boundaries are pliable and compliant. These properties form enough to create a palpable pressure wave. The heart rate may greater or lesser than the pulse rate depending upon physiologic demand. In this case, the heart rate is determined by auscultation or audible sounds at the heart apex, in which case it is not the pulse. The pulse deficit (difference between heart beats and pulsations at the periphery) is determined by simul Document 2::: The article reviews the evolution of continuous noninvasive arterial pressure measurement (CNAP). The historical gap between ease of use, but intermittent upper arm instruments and bulky, but continuous “pulse writers” (sphygmographs) is discussed starting with the first efforts to measure pulse, published by Jules Harrison in 1835. Such sphygmographs led a shadowy existence in the past, while Riva Rocci's upper arm blood pressure measurement started its triumphant success over 100 years ago. In recent times, CNAP measurement introduced by Jan Penáz in 1973 enabled the first recording of noninvasive beat-to-beat blood pressure resulting in marketed products such as the Finapres™ device and its successors. Recently, a novel method for CNAP monitoring has been designed for patient monitoring in perioperative, critical and emergency care, where blood pressure needs to be measured repeatedly or even continuously to facilitate the best care for patients. Early sphygmographs Prior to quantitative measurement, which was applied in medicine in the 19th century, diagnostic possibilities of hemodynamic activities had been limited to qualitative sensing of pulse through palpation. In some cultures, sensitive palpation is still a main part of medicine like pulse diagnosis in Traditional Chinese medicine (TCM) or the identification of the ayurvedic doshas. The introduction of the stethoscope and the methods of auscultation by René-Théophile-Hyacinthe Laennec in 1816 changed the medical behavior consistently and forced the need of quantitative hemodynamic measurements. The first instrument which could measure the force of pulse with a mercury filled glass tube was developed by Jules Harrison in 1835. Jean Léonard Marie Poiseuille invented the first mercury “Hemodynameter”, a forerunner of the sphygmomanometer in 1821. The first sphygmograph (pulse writer) for the continuous graphical registration of pulse dates back to Karl von Vierordt in 1854. More popular, however, was the Document 3::: Pressure rate product (also known as Cardiovascular Product or Double Product), within medical cardiology, specifically for cardiovascular physiology and exercise physiology is used to determine the myocardial workload. Description The calculation formula is: Rate Pressure Product (RPP) = Heart Rate (HR) * Systolic Blood Pressure (SBP) The units for the Heart Rate are beats per minute and for the Blood Pressure mmHg. Rate pressure product is a measure of the stress put on the cardiac muscle based on the number of times it needs to beat per minute (HR) and the arterial blood pressure that it is pumping against (SBP). It will be a direct indication of the energy demand of the heart and thus a good measure of the energy consumption of the heart. The rate pressure product allows you to calculate the internal workload or hemodynamic response. 
Document 4::: Cardiovascular physiology is the study of the cardiovascular system, specifically addressing the physiology of the heart ("cardio") and blood vessels ("vascular"). These subjects are sometimes addressed separately, under the names cardiac physiology and circulatory physiology. Although the different aspects of cardiovascular physiology are closely interrelated, the subject is still usually divided into several subtopics. Heart Cardiac output (= heart rate * stroke volume. Can also be calculated with Fick principle, palpating method.) Stroke volume (= end-diastolic volume − end-systolic volume) Ejection fraction (= stroke volume / end-diastolic volume) Cardiac output is mathematically ` to systole Inotropic, chronotropic, and dromotropic states Cardiac input (= heart rate * suction volume Can be calculated by inverting terms in Fick principle) Suction volume (= end-systolic volume + end-diastolic volume) Injection fraction (=suction volume / end-systolic volume) Cardiac input is mathematically ` to diastole Electrical conduction system of the heart Electrocardiogram Cardiac marker Cardiac action potential Frank–Starling law of the heart Wiggers diagram Pressure volume diagram Regulation of blood pressure Baroreceptor Baroreflex Renin–angiotensin system Renin Angiotensin Juxtaglomerular apparatus Aortic body and carotid body Autoregulation Cerebral Autoregulation Hemodynamics Under most circumstances, the body attempts to maintain a steady mean arterial pressure. When there is a major and immediate decrease (such as that due to hemorrhage or standing up), the body can increase the following: Heart rate Total peripheral resistance (primarily due to vasoconstriction of arteries) Inotropic state In turn, this can have a significant impact upon several other variables: Stroke volume Cardiac output Pressure Pulse pressure (systolic pressure - diastolic pressure) Mean arterial pressure (usually approximated with diastolic pressure + The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What blood pressure reading measures the pressure in the vessels between heartbeats? A. systolic B. diastolic C. metabolic D. plasma Answer:
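The Rate Pressure Product formula in Document 3 above (RPP = HR * SBP) and the systolic/diastolic reading discussed in Document 0 lend themselves to a small worked example. The Python sketch below is an editorial illustration, not part of the dataset; the example numbers and the common one-third-of-pulse-pressure approximation for mean arterial pressure are assumptions added here.

def pulse_pressure(sbp, dbp):
    # systolic minus diastolic, in mmHg
    return sbp - dbp

def mean_arterial_pressure(sbp, dbp):
    # rough resting approximation: diastolic + one third of pulse pressure (assumption)
    return dbp + (sbp - dbp) / 3.0

def rate_pressure_product(hr, sbp):
    # RPP = heart rate (beats/min) * systolic blood pressure (mmHg), per Document 3
    return hr * sbp

sbp, dbp, hr = 120, 80, 70  # a typical resting reading; diastolic is the between-beats pressure
print(pulse_pressure(sbp, dbp))                    # 40 mmHg
print(round(mean_arterial_pressure(sbp, dbp), 1))  # 93.3 mmHg
print(rate_pressure_product(hr, sbp))              # 8400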
ai2_arc-489
multiple_choice
Jeremiah noticed a plant had many missing leaves and large holes in other leaves. Why do missing leaves hurt the plant?
[ "The plant makes less food.", "The plant takes in less water.", "The plant attracts fewer insects.", "The plant does not have support." ]
A
Relavent Documents: Document 0::: Tolerance is the ability of plants to mitigate the negative fitness effects caused by herbivory. It is one of the general plant defense strategies against herbivores, the other being resistance, which is the ability of plants to prevent damage (Strauss and Agrawal 1999). Plant defense strategies play important roles in the survival of plants as they are fed upon by many different types of herbivores, especially insects, which may impose negative fitness effects (Strauss and Zangerl 2002). Damage can occur in almost any part of the plants, including the roots, stems, leaves, flowers and seeds (Strauss and Zergerl 2002). In response to herbivory, plants have evolved a wide variety of defense mechanisms and although relatively less studied than resistance strategies, tolerance traits play a major role in plant defense (Strauss and Zergerl 2002, Rosenthal and Kotanen 1995). Traits that confer tolerance are controlled genetically and therefore are heritable traits under selection (Strauss and Agrawal 1999). Many factors intrinsic to the plants, such as growth rate, storage capacity, photosynthetic rates and nutrient allocation and uptake, can affect the extent to which plants can tolerate damage (Rosenthal and Kotanen 1994). Extrinsic factors such as soil nutrition, carbon dioxide levels, light levels, water availability and competition also have an effect on tolerance (Rosenthal and Kotanen 1994). History of the study of plant tolerance Studies of tolerance to herbivory has historically been the focus of agricultural scientists (Painter 1958; Bardner and Fletcher 1974). Tolerance was actually initially classified as a form of resistance (Painter 1958). Agricultural studies on tolerance, however, are mainly concerned with the compensatory effect on the plants' yield and not its fitness, since it is of economical interest to reduce crop losses due to herbivory by pests (Trumble 1993; Bardner and Fletcher 1974). One surprising discovery made about plant tolerance was th Document 1::: In ecology, shade tolerance is a plant's ability to tolerate low light levels. The term is also used in horticulture and landscaping, although in this context its use is sometimes imprecise, especially in labeling of plants for sale in commercial nurseries. Shade tolerance is a complex, multi-faceted property of plants. Different plant species exhibit different adaptations to shade, and a particular plant can exhibit varying degrees of shade tolerance, or even of requirement for light, depending on its history or stage of development. Basic concepts Except for some parasitic plants, all land plants need sunlight to survive. However, in general, more sunlight does not always make it easier for plants to survive. In direct sunlight, plants face desiccation and exposure to UV rays, and must expend energy producing pigments to block UV light, and waxy coatings to prevent water loss. Plants adapted to shade have the ability to use far-red light (about 730 nm) more effectively than plants adapted to full sunlight. Most red light gets absorbed by the shade-intolerant canopy plants, but more of the far-red light penetrates the canopy, reaching the understorey. The shade-tolerant plants found here are capable of photosynthesis using light at such wavelengths. The situation with respect to nutrients is often different in shade and sun. 
Most shade is due to the presence of a canopy of other plants, and this is usually associated with a completely different environment—richer in soil nutrients—than sunny areas. Shade-tolerant plants are thus adapted to be efficient energy-users. In simple terms, shade-tolerant plants grow broader, thinner leaves to catch more sunlight relative to the cost of producing the leaf. Shade-tolerant plants are also usually adapted to make more use of soil nutrients than shade-intolerant plants. A distinction may be made between "shade-tolerant" plants and "shade-loving" or sciophilous plants. Sciophilous plants are dependent on a degree of sha Document 2::: Xenohormesis is a hypothesis that posits that certain molecules such as plant polyphenols, which indicate stress in the plants, can have benefits of another organism (heterotrophs) which consumes it. Or in simpler terms, xenohormesis is interspecies hormesis. The expected benefits include improve lifespan and fitness, by activating the animal's cellular stress response. This may be useful to evolve, as it gives possible cues about the state of the environment. If the plants an animal is eating have increased polyphenol content, it means the plant is under stress and may signal famines. Using the chemical cues the heterotophs could preemptively prepare and defend itself before conditions worsen. A possible example may be resveratrol, which is famously found in red wine, which modulates over two dozen receptors and enzymes in mammals. Xenohormesis could also explain several phenomena seen in the ethno-pharmaceutical (traditional medicine) side of things. Such as in the case of cinnamon, which in several studies have shown to help treat type 2 diabetes, but hasn't been confirmed in meta analysis. This can be caused by the cinnamon used in one study differing from the other in xenohormic properties. Some explanations as to why this works, is first and foremost, it could be a coincidence. Especially for cases which partially venomous products, cause a positive stress in the organism. The second is that it is a shared evolutionary attribute, as both animals and plants share a huge amount of homology between their pathways. The third is that there is evolutionary pressure to evolve to better respond to the molecules. The latter is proposed mainly by Howitz and his team. There also might be the problem that our focus on maximizing the crop output, may be losing many of the xenohormetic advantages. Although the ideal conditions will cause the plant to increase its crop output it can also be argued it is loosing stress and therefore the hormesis. The honeybee colony colla Document 3::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. 
Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 4::: What a Plant Knows is a popular science book by Daniel Chamovitz, originally published in 2012, discussing the sensory system of plants. A revised edition was published in 2017. Release details / Editions / Publication Hardcover edition, 2012 Paperback version, 2013 Revised edition, 2017 What a Plant Knows has been translated and published in a number of languages. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Jeremiah noticed a plant had many missing leaves and large holes in other leaves. Why do missing leaves hurt the plant? A. The plant makes less food. B. The plant takes in less water. C. The plant attracts fewer insects. D. The plant does not have support. Answer:
sciq-10959
multiple_choice
What are structures that have a common function and suggest common ancestry?
[ "monogamous structures", "reversible structures", "homologous structures", "analogous structures" ]
C
Relavent Documents: Document 0::: In biology, homology is similarity due to shared ancestry between a pair of structures or genes in different taxa. A common example of homologous structures is the forelimbs of vertebrates, where the wings of bats and birds, the arms of primates, the front flippers of whales, and the forelegs of four-legged vertebrates like dogs and crocodiles are all derived from the same ancestral tetrapod structure. Evolutionary biology explains homologous structures adapted to different purposes as the result of descent with modification from a common ancestor. The term was first applied to biology in a non-evolutionary context by the anatomist Richard Owen in 1843. Homology was later explained by Charles Darwin's theory of evolution in 1859, but had been observed before this, from Aristotle onwards, and it was explicitly analysed by Pierre Belon in 1555. In developmental biology, organs that developed in the embryo in the same manner and from similar origins, such as from matching primordia in successive segments of the same animal, are serially homologous. Examples include the legs of a centipede, the maxillary palp and labial palp of an insect, and the spinous processes of successive vertebrae in a vertebral column. Male and female reproductive organs are homologous if they develop from the same embryonic tissue, as do the ovaries and testicles of mammals including humans. Sequence homology between protein or DNA sequences is similarly defined in terms of shared ancestry. Two segments of DNA can have shared ancestry because of either a speciation event (orthologs) or a duplication event (paralogs). Homology among proteins or DNA is inferred from their sequence similarity. Significant similarity is strong evidence that two sequences are related by divergent evolution from a common ancestor. Alignments of multiple sequences are used to discover the homologous regions. Homology remains controversial in animal behaviour, but there is suggestive evidence that, for example, dom Document 1::: Serial homology is a special type of homology, defined by Owen as "representative or repetitive relation in the segments of the same organism." Ernst Haeckel preferred the term "homotypy" for the same phenomenon. Classical examples of serial homologies are the development of forelimbs and hind limbs of tetrapods and the iterative structure of the vertebrae. See also Deep homology Evolutionary developmental biology Document 2::: In chemistry and crystallography, crystal structures that have the same set of interatomic distances are called homometric structures. Homometric structures need not be congruent (that is, related by a rigid motion or reflection). Homometric crystal structures produce identical diffraction patterns; therefore, they cannot be distinguished by a diffraction experiment. Recently, a Monte Carlo algorithm was proposed to calculate the number of homometric structures corresponding to any given set of interatomic distances. See also Patterson function Arthur Lindo Patterson Document 3::: Symmetry breaking in biology is the process by which uniformity is broken, or the number of points to view invariance are reduced, to generate a more structured and improbable state. Symmetry breaking is the event where symmetry along a particular axis is lost to establish a polarity. Polarity is a measure for a biological system to distinguish poles along an axis. This measure is important because it is the first step to building complexity. 
For example, during organismal development, one of the first steps for the embryo is to distinguish its dorsal-ventral axis. The symmetry-breaking event that occurs here will determine which end of this axis will be the ventral side, and which end will be the dorsal side. Once this distinction is made, then all the structures that are located along this axis can develop at the proper location. As an example, during human development, the embryo needs to establish where is ‘back’ and where is ‘front’ before complex structures, such as the spine and lungs, can develop in the right location (where the lungs are placed ‘in front’ of the spine). This relationship between symmetry breaking and complexity was articulated by P.W. Anderson. He speculated that increasing levels of broken symmetry in many-body systems correlates with increasing complexity and functional specialization. In a biological perspective, the more complex an organism is, the higher number of symmetry-breaking events can be found. The importance of symmetry breaking in biology is also reflected in the fact that it's found at all scales. Symmetry breaking can be found at the macromolecular level, at the subcellular level and even at the tissues and organ level. It's also interesting to note that most asymmetry on a higher scale is a reflection of symmetry breaking on a lower scale. Cells first need to establish a polarity through a symmetry-breaking event before tissues and organs themselves can be polar. For example, one model proposes that left-right bo Document 4::: Origination of Organismal Form: Beyond the Gene in Developmental and Evolutionary Biology is an anthology published in 2003 edited by Gerd B. Müller and Stuart A. Newman. The book is the outcome of the 4th Altenberg Workshop in Theoretical Biology on "Origins of Organismal Form: Beyond the Gene Paradigm", hosted in 1999 at the Konrad Lorenz Institute for Evolution and Cognition Research. It has been cited over 200 times and has a major influence on extended evolutionary synthesis research. Description of the book The book explores the multiple factors that may have been responsible for the origination of biological form in multicellular life. These biological forms include limbs, segmented structures, and different body symmetries. It explores why the basic body plans of nearly all multicellular life arose in the relatively short time span of the Cambrian Explosion. The authors focus on physical factors (structuralism) other than changes in an organism's genome that may have caused multicellular life to form new structures. These physical factors include differential adhesion of cells and feedback oscillations between cells. The book also presents recent experimental results that examine how the same embryonic tissues or tumor cells can be coaxed into forming dramatically different structures under different environmental conditions. One of the goals of the book is to stimulate research that may lead to a more comprehensive theory of evolution. It is frequently cited as foundational to the development of the extended evolutionary synthesis. List of contributions Origination of Organismal Form: The Forgotten Cause in Evolutionary Theory, Gerd B. Müller and Stuart A. Newman The Cambrian "Explosion" of Metazoans, Simon Conway Morris Convergence and Homoplasy in the Evolution of Organismal Form, Pat Willmer Homology:The Evolution of Morphological Organization, Gerd B. Müller Only Details Determine, Roy J. Britten The Reactive Genome, Scott F. 
Gilbert Tis The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are structures that have a common function and suggest common ancestry? A. monogamous structures B. reversible structures C. homologous structures D. analogous structures Answer:
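Document 0 above notes that homology among proteins or DNA is inferred from sequence similarity. The Python sketch below is an editorial illustration, not part of the dataset: it scores the percent identity of two already-aligned toy sequences, the kind of quantity from which common ancestry is hypothesized; real analyses use statistical alignment tools, and similarity alone does not prove homology.

def percent_identity(seq_a, seq_b):
    # fraction of matching positions between two equal-length aligned sequences
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Toy aligned sequences (assumed for illustration only).
print(round(percent_identity("GATTACAGATTACA", "GATTTCAGATCACA"), 1))  # 85.7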