Dataset schema (column name, type, and observed value range):
  id             string (length 6 to 15)
  question_type  string (1 distinct value)
  question       string (length 15 to 683)
  choices        list (exactly 4 items)
  answer         string (5 distinct values)
  explanation    string (481 distinct values)
  prompt         string (length 1.75k to 10.9k)
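A minimal sketch of reading records that follow this schema, assuming the dataset has been exported as JSON Lines (one record per line); the file name sciq_prompts.jsonl and the spot checks are illustrative assumptions, not part of the dataset itself.

```python
# Minimal sketch of reading records that follow the schema above, assuming
# the dataset has been exported as JSON Lines (one record per line). The
# file name and the validation checks are illustrative assumptions.
import json

EXPECTED_FIELDS = {"id", "question_type", "question", "choices",
                   "answer", "explanation", "prompt"}

def read_records(path):
    """Yield records from a JSONL export, with light schema validation."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            assert EXPECTED_FIELDS <= record.keys()
            assert record["question_type"] == "multiple_choice"   # 1 class
            assert len(record["choices"]) == 4                    # list of 4
            assert record["answer"] in {"A", "B", "C", "D", "E"}  # 5 classes
            yield record

for rec in read_records("sciq_prompts.jsonl"):  # hypothetical file name
    print(rec["id"], "->", rec["answer"])
    break
```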
sciq-9627
multiple_choice
What is measured with a personal dosimeter?
[ "blood insulin", "alcohol concentration", "exposure to radioactivity", "internal temperature" ]
C
Relavent Documents: Document 0::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 1::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. 
The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 2::: Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education. Structure A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior. Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false multiple choice questions. But during the 2000s, educators found that SBAs would be superior. Document 3::: The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020. Structure The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five digits long, depending on how many levels of the classification scheme are used. The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example: 53 is the classification for differential geometry 53A is the classification for classical differential geometry 53A45 is the classification for vector and tensor analysis First level At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including: Fluid mechanics Quantum mechanics Geophysics Optics and electromagnetic theory All valid MSC classification codes must have at least the first-level identifier. Second level The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline. 
For example, for differential geometry, the top-level code is 53, and the second-level codes are: A for classical differential geometry B for local differential geometry C for glo Document 4::: Equivalent dose is a dose quantity H representing the stochastic health effects of low levels of ionizing radiation on the human body which represents the probability of radiation-induced cancer and genetic damage. It is derived from the physical quantity absorbed dose, but also takes into account the biological effectiveness of the radiation, which is dependent on the radiation type and energy. In the SI system of units, the unit of measure is the sievert (Sv). Application To enable consideration of stochastic health risk, calculations are performed to convert the physical quantity absorbed dose into equivalent dose, the details of which depend on the radiation type. For applications in radiation protection and dosimetry assessment, the International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU) have published recommendations and data on how to calculate equivalent dose from absorbed dose. Equivalent dose is designated by the ICRP as a "limiting quantity"; to specify exposure limits to ensure that "the occurrence of stochastic health effects is kept below unacceptable levels and that tissue reactions are avoided". This is a calculated value, as equivalent dose cannot be practically measured, and the purpose of the calculation is to generate a value of equivalent dose for comparison with observed health effects. Calculation Equivalent dose HT is calculated using the mean absorbed dose deposited in body tissue or organ T, multiplied by the radiation weighting factor WR which is dependent on the type and energy of the radiation R. The radiation weighting factor represents the relative biological effectiveness of the radiation and modifies the absorbed dose to take account of the different biological effects of various types and energies of radiation. The ICRP has assigned radiation weighting factors to specified radiation types dependent on their relative biological effectiveness, whic The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is measured with a personal dosimeter? A. blood insulin B. alcohol concentration C. exposure to radioactivity D. internal temperature Answer:
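Document 4 of this record describes how equivalent dose is computed from absorbed dose via radiation weighting factors. A hedged sketch of that calculation follows; the specific ICRP 103 weighting factors are supplied here, not quoted in the document.

```python
# Hedged sketch of the equivalent-dose calculation described in Document 4:
# H_T = sum over radiation types R of w_R * D_(T,R). The weighting factors
# below are ICRP 103 values for a few radiation types (supplied here, not
# quoted in the document); neutron factors are energy-dependent and omitted.
W_R = {"photon": 1.0, "electron": 1.0, "proton": 2.0, "alpha": 20.0}

def equivalent_dose_sv(absorbed_doses_gy):
    """absorbed_doses_gy maps radiation type -> mean absorbed dose in gray.
    Returns the equivalent dose H_T in sievert."""
    return sum(W_R[r] * d for r, d in absorbed_doses_gy.items())

# A worker receiving 1 mGy from photons and 0.1 mGy from alpha particles:
print(equivalent_dose_sv({"photon": 1e-3, "alpha": 1e-4}))  # 0.003 Sv (3 mSv)
```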
sciq-2637
multiple_choice
The German physicist Max Planck (1858–1947) used the idea that atoms and molecules in a body act like oscillators to absorb and emit this?
[ "blood", "radiation", "convection", "energy" ]
B
Relavent Documents: Document 0::: The Planck postulate (or Planck's postulate), one of the fundamental principles of quantum mechanics, is the postulate that the energy of oscillators in a black body is quantized, and is given by , where is an integer (1, 2, 3, ...), is Planck's constant, and (the Greek letter nu, not the Latin letter v) is the frequency of the oscillator. The postulate was introduced by Max Planck in his derivation of his law of black body radiation in 1900. This assumption allowed Planck to derive a formula for the entire spectrum of the radiation emitted by a black body. Planck was unable to justify this assumption based on classical physics; he considered quantization as being purely a mathematical trick, rather than (as is now known) a fundamental change in the understanding of the world. In other words, Planck then contemplated virtual oscillators. In 1905, Albert Einstein adapted the Planck postulate to explain the photoelectric effect, but Einstein proposed that the energy of photons themselves was quantized (with photon energy given by the Planck–Einstein relation), and that quantization was not merely a feature of microscopic oscillators. Planck's postulate was further applied to understanding the Compton effect, and was applied by Niels Bohr to explain the emission spectrum of the hydrogen atom and derive the correct value of the Rydberg constant. Notes Document 1::: Matter waves are a central part of the theory of quantum mechanics, being half of wave–particle duality. All matter exhibits wave-like behavior. For example, a beam of electrons can be diffracted just like a beam of light or a water wave. The concept that matter behaves like a wave was proposed by French physicist Louis de Broglie () in 1924, and so matter waves are also known as de Broglie waves. The de Broglie wavelength is the wavelength, , associated with a particle with momentum through the Planck constant, : Wave-like behavior of matter was first experimentally demonstrated by George Paget Thomson and Alexander Reid's transmission diffraction experiment, and independently in the Davisson–Germer experiment, both using electrons; and it has also been confirmed for other elementary particles, neutral atoms and molecules. Introduction Background At the end of the 19th century, light was thought to consist of waves of electromagnetic fields which propagated according to Maxwell's equations, while matter was thought to consist of localized particles (see history of wave and particle duality). In 1900, this division was questioned when, investigating the theory of black-body radiation, Max Planck proposed that the thermal energy of oscillating atoms is divided into discrete portions, or quanta. Extending Planck's investigation in several ways, including its connection with the photoelectric effect, Albert Einstein proposed in 1905 that light is also propagated and absorbed in quanta, now called photons. These quanta would have an energy given by the Planck–Einstein relation: and a momentum vector where (lowercase Greek letter nu) and (lowercase Greek letter lambda) denote the frequency and wavelength of the light, the speed of light, and the Planck constant. In the modern convention, frequency is symbolized by as is done in the rest of this article. Einstein's postulate was verified experimentally by K. T. Compton and O. W. Richardson and by A. L. 
Hugh Document 2::: The timeline of quantum mechanics is a list of key events in the history of quantum mechanics, quantum field theories and quantum chemistry. 19th century 1801 – Thomas Young establishes that light made up of waves with his Double-slit experiment. 1859 – Gustav Kirchhoff introduces the concept of a blackbody and proves that its emission spectrum depends only on its temperature. 1860-1900 – Ludwig Eduard Boltzmann, James Clerk Maxwell and others develop the theory of statistical mechanics. Boltzmann argues that entropy is a measure of disorder. 1877 – Boltzmann suggests that the energy levels of a physical system could be discrete based on statistical mechanics and mathematical arguments; also produces the first circle diagram representation, or atomic model of a molecule (such as an iodine gas molecule) in terms of the overlapping terms α and β, later (in 1928) called molecular orbitals, of the constituting atoms. 1885 – Johann Jakob Balmer discovers a numerical relationship between visible spectral lines of hydrogen, the Balmer series. 1887 – Heinrich Hertz discovers the photoelectric effect, shown by Einstein in 1905 to involve quanta of light. 1888 – Hertz demonstrates experimentally that electromagnetic waves exist, as predicted by Maxwell. 1888 – Johannes Rydberg modifies the Balmer formula to include all spectral series of lines for the hydrogen atom, producing the Rydberg formula which is employed later by Niels Bohr and others to verify Bohr's first quantum model of the atom. 1895 – Wilhelm Conrad Röntgen discovers X-rays in experiments with electron beams in plasma. 1896 – Antoine Henri Becquerel accidentally discovers radioactivity while investigating the work of Wilhelm Conrad Röntgen; he finds that uranium salts emit radiation that resembled Röntgen's X-rays in their penetrating power. In one experiment, Becquerel wraps a sample of a phosphorescent substance, potassium uranyl sulfate, in photographic plates surrounded by very thick black paper i Document 3::: Brownian motion is the random motion of particles suspended in a medium (a liquid or a gas). This motion pattern typically consists of random fluctuations in a particle's position inside a fluid sub-domain, followed by a relocation to another sub-domain. Each relocation is followed by more fluctuations within the new closed volume. This pattern describes a fluid at thermal equilibrium, defined by a given temperature. Within such a fluid, there exists no preferential direction of flow (as in transport phenomena). More specifically, the fluid's overall linear and angular momenta remain null over time. The kinetic energies of the molecular Brownian motions, together with those of molecular rotations and vibrations, sum up to the caloric component of a fluid's internal energy (the equipartition theorem). This motion is named after the botanist Robert Brown, who first described the phenomenon in 1827, while looking through a microscope at pollen of the plant Clarkia pulchella immersed in water. In 1900, almost eighty years later, the French mathematician Louis Bachelier modeled the stochastic process now called Brownian motion in his doctoral thesis, The Theory of Speculation (Théorie de la spéculation), prepared under the supervision of Henri Poincaré. Then, in 1905, theoretical physicist Albert Einstein published a paper where he modeled the motion of the pollen particles as being moved by individual water molecules, making one of his first major scientific contributions. 
The direction of the force of atomic bombardment is constantly changing, and at different times the particle is hit more on one side than another, leading to the seemingly random nature of the motion. This explanation of Brownian motion served as convincing evidence that atoms and molecules exist and was further verified experimentally by Jean Perrin in 1908. Perrin was awarded the Nobel Prize in Physics in 1926 "for his work on the discontinuous structure of matter". The many-body interactions th Document 4::: Quantum biology is the study of applications of quantum mechanics and theoretical chemistry to aspects of biology that cannot be accurately described by the classical laws of physics. An understanding of fundamental quantum interactions is important because they determine the properties of the next level of organization in biological systems. Many biological processes involve the conversion of energy into forms that are usable for chemical transformations, and are quantum mechanical in nature. Such processes involve chemical reactions, light absorption, formation of excited electronic states, transfer of excitation energy, and the transfer of electrons and protons (hydrogen ions) in chemical processes, such as photosynthesis, olfaction and cellular respiration. Quantum biology may use computations to model biological interactions in light of quantum mechanical effects. Quantum biology is concerned with the influence of non-trivial quantum phenomena, which can be explained by reducing the biological process to fundamental physics, although these effects are difficult to study and can be speculative. History Quantum biology is an emerging field, in the sense that most current research is theoretical and subject to questions that require further experimentation. Though the field has only recently received an influx of attention, it has been conceptualized by physicists throughout the 20th century. It has been suggested that quantum biology might play a critical role in the future of the medical world. Early pioneers of quantum physics saw applications of quantum mechanics in biological problems. Erwin Schrödinger's 1944 book What Is Life? discussed applications of quantum mechanics in biology. Schrödinger introduced the idea of an "aperiodic crystal" that contained genetic information in its configuration of covalent chemical bonds. He further suggested that mutations are introduced by "quantum leaps". Other pioneers Niels Bohr, Pascual Jordan, and Max Delbrück argu The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The german physicist max planck (1858–1947) used the idea that atoms and molecules in a body act like oscillators to absorb and emit this? A. blood B. radiation C. convection D. energy Answer:
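Documents 0 and 1 of this record quote the Planck postulate, the Planck–Einstein relation, and the de Broglie wavelength. A small sketch evaluating these relations, assuming the exact SI values of the constants:

```python
# Sketch of the relations quoted in Documents 0 and 1: the Planck postulate
# E_n = n*h*nu for an oscillator, the Planck-Einstein relation E = h*nu
# (equivalently h*c/lambda), and the de Broglie wavelength lambda = h/p.
h = 6.62607015e-34  # Planck constant, J*s (exact SI value)
c = 299792458.0     # speed of light, m/s (exact SI value)

def oscillator_energy(n, nu_hz):
    """Energy of the n-th quantized level of a Planck oscillator."""
    return n * h * nu_hz

def photon_energy(wavelength_m):
    """Planck-Einstein relation, E = h*c/lambda, in joules."""
    return h * c / wavelength_m

def de_broglie_wavelength(momentum_kg_m_per_s):
    """Matter-wave wavelength lambda = h/p, in metres."""
    return h / momentum_kg_m_per_s

print(photon_energy(500e-9))  # ~3.97e-19 J for a 500 nm (green) photon
```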
sciq-8639
multiple_choice
What type of protein fibers in the cytoskeleton are the narrowest?
[ "capillaries", "microfilaments", "mitosis", "macrophages" ]
B
Relavent Documents: Document 0::: In cell biology, microtrabeculae were a hypothesised fourth element of the cytoskeleton (the other three being microfilaments, microtubules and intermediate filaments), proposed by Keith Porter based on images obtained from high-voltage electron microscopy of whole cells in the 1970s. The images showed short, filamentous structures of unknown molecular composition associated with known cytoplasmic structures. It is now generally accepted that microtrabeculae are nothing more than an artifact of certain types of fixation treatment, although the complexity of the cell's cytoskeleton is not yet fully understood. Document 1::: A microfibril is a very fine fibril, or fiber-like strand, consisting of glycoproteins and cellulose. It is usually, but not always, used as a general term in describing the structure of protein fiber, e.g. hair and sperm tail. Its most frequently observed structural pattern is the 9+2 pattern in which two central protofibrils are surrounded by nine other pairs. Cellulose inside plants is one of the examples of non-protein compounds that are using this term with the same purpose. Cellulose microfibrils are laid down in the inner surface of the primary cell wall. As the cell absorbs water, its volume increases and the existing microfibrils separate and new ones are formed to help increase cell strength. Synthesis and function Cellulose is synthesized by cellulose synthase or Rosette terminal complexes which reside on a cells membrane. As cellulose fibrils are synthesized and grow extracellularly they push up against neighboring cells. Since the neighboring cell can not move easily the Rosette complex is instead pushed around the cell through the fluid phospholipid membrane. Eventually this results in the cell becoming wrapped in a microfibril layer. This layer becomes the cell wall. The organization of microfibrils forming the primary cell wall is rather disorganized. However, another mechanism is used in secondary cell walls leading to its organization. Essentially, lanes on the secondary cell wall are built with microtubules. These lanes force microfibrils to remain in a certain area while they wrap. During this process microtubules can spontaneously depolymerize and repolymerize in a different orientation. This leads to a different direction in which the cell continues getting wrapped. Fibrillin microfibrils are found in connective tissues, which mainly makes up fibrillin-1 and provides elasticity. During the assembly, mirofibrils exhibit a repeating stringed-beads arrangement produced by the cross-linking of molecules forming a striated pattern with a given Document 2::: The cytoskeleton is a complex, dynamic network of interlinking protein filaments present in the cytoplasm of all cells, including those of bacteria and archaea. In eukaryotes, it extends from the cell nucleus to the cell membrane and is composed of similar proteins in the various organisms. It is composed of three main components:microfilaments, intermediate filaments, and microtubules, and these are all capable of rapid growth or disassembly depending on the cell's requirements. A multitude of functions can be performed by the cytoskeleton. Its primary function is to give the cell its shape and mechanical resistance to deformation, and through association with extracellular connective tissue and other cells it stabilizes entire tissues. The cytoskeleton can also contract, thereby deforming the cell and the cell's environment and allowing cells to migrate. 
Moreover, it is involved in many cell signaling pathways and in the uptake of extracellular material (endocytosis), the segregation of chromosomes during cellular division, the cytokinesis stage of cell division, as scaffolding to organize the contents of the cell in space and in intracellular transport (for example, the movement of vesicles and organelles within the cell) and can be a template for the construction of a cell wall. Furthermore, it can form specialized structures, such as flagella, cilia, lamellipodia and podosomes. The structure, function and dynamic behavior of the cytoskeleton can be very different, depending on organism and cell type. Even within one cell, the cytoskeleton can change through association with other proteins and the previous history of the network. A large-scale example of an action performed by the cytoskeleton is muscle contraction. This is carried out by groups of highly specialized cells working together. A main component in the cytoskeleton that helps show the true function of this muscle contraction is the microfilament. Microfilaments are composed of the most abundant cel Document 3::: H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue H2.00.05.2.00001: Striated muscle tissue H2.00.06.0.00001: Nerve tissue H2.00.06.1.00001: Neuron H2.00.06.2.00001: Synapse H2.00.06.2.00001: Neuroglia h3.01: Bones h3.02: Joints h3.03: Muscles h3.04: Alimentary system h3.05: Respiratory system h3.06: Urinary system h3.07: Genital system h3.08: Document 4::: Vertebrates Tendon cells, or tenocytes, are elongated fibroblast type cells. The cytoplasm is stretched between the collagen fibres of the tendon. They have a central cell nucleus with a prominent nucleolus. Tendon cells have a well-developed rough endoplasmic reticulum and they are responsible for synthesis and turnover of tendon fibres and ground substance. Invertebrates Tendon cells form a connecting epithelial layer between the muscle and shell in molluscs. In gastropods, for example, the retractor muscles connect to the shell via tendon cells. Muscle cells are attached to the collagenous myo-tendon space via hemidesmosomes. The myo-tendon space is then attached to the base of the tendon cells via basal hemidesmosomes, while apical hemidesmosomes, which sit atop microvilli, attach the tendon cells to a thin layer of collagen. This is in turn attached to the shell via organic fibres which insert into the shell. Molluscan tendon cells appear columnar and contain a large basal cell nucleus. The cytoplasm is filled with granular endoplasmic reticulum and sparse golgi. Dense bundles of microfilaments run the length of the cell connecting the basal to the apical hemidesmosomes. See also List of human cell types derived from the germ layers List of distinct cell types in the adult human body The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of protein fibers in the cytoskeleton are the narrowest? A. capillaries B. microfilaments C. mitosis D. macrophages Answer:
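The documents in this record describe the three cytoskeletal filament types but never state their sizes; the approximate diameters below are standard textbook values supplied for illustration only, and they show why microfilaments are the narrowest.

```python
# The documents describe microfilaments, intermediate filaments and
# microtubules but give no dimensions; the approximate diameters below are
# standard textbook values supplied for illustration, not from the documents.
FILAMENT_DIAMETER_NM = {
    "microfilament (actin)": 7,
    "intermediate filament": 10,
    "microtubule": 25,
}

narrowest = min(FILAMENT_DIAMETER_NM, key=FILAMENT_DIAMETER_NM.get)
print(narrowest)  # microfilament (actin), hence answer B
```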
sciq-1505
multiple_choice
What are the two components of a mixture called?
[ "nutrients and a solvent", "acid and base", "solute and a solvent", "concentration and a solvent" ]
C
Relavent Documents: Document 0::: In chemistry, a mixture is a material made up of two or more different chemical substances which are not chemically bonded. A mixture is the physical combination of two or more substances in which the identities are retained and are mixed in the form of solutions, suspensions and colloids. Mixtures are one product of mechanically blending or mixing chemical substances such as elements and compounds, without chemical bonding or other chemical change, so that each ingredient substance retains its own chemical properties and makeup. Despite the fact that there are no chemical changes to its constituents, the physical properties of a mixture, such as its melting point, may differ from those of the components. Some mixtures can be separated into their components by using physical (mechanical or thermal) means. Azeotropes are one kind of mixture that usually poses considerable difficulties regarding the separation processes required to obtain their constituents (physical or chemical processes or, even a blend of them). Characteristics of mixtures All mixtures can be characterized as being separable by mechanical means (e.g. purification, distillation, electrolysis, chromatography, heat, filtration, gravitational sorting, centrifugation). Mixtures differ from chemical compounds in the following ways: the substances in a mixture can be separated using physical methods such as filtration, freezing, and distillation. there is little or no energy change when a mixture forms (see Enthalpy of mixing). The substances in a mixture keep its separate properties. In the example of sand and water, neither one of the two substances changed in any way when they are mixed. Although the sand is in the water it still keeps the same properties that it had when it was outside the water. mixtures have variable compositions, while compounds have a fixed, definite formula. when mixed, individual substances keep their properties in a mixture, while if they form a compound their properties Document 1::: In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution. The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible"). The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first. The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy. Under certain conditions, the concentration of the solute can exceed its usual solubility limit. 
The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears. The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property de Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing. The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum, and it will be in equal proportion as it is elsewhere in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal description of mixing can be found in the article on mixing (mathematics). 
Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing. Physical mixing The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense. Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler. The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year). See also Miscibility Document 4::: In chemistry, the mass fraction of a substance within a mixture is the ratio (alternatively denoted ) of the mass of that substance to the total mass of the mixture. Expressed as a formula, the mass fraction is: Because the individual masses of the ingredients of a mixture sum to , their mass fractions sum to unity: Mass fraction can also be expressed, with a denominator of 100, as percentage by mass (in commercial contexts often called percentage by weight, abbreviated wt.% or % w/w; see mass versus weight). It is one way of expressing the composition of a mixture in a dimensionless size; mole fraction (percentage by moles, mol%) and volume fraction (percentage by volume, vol%) are others. When the prevalences of interest are those of individual chemical elements, rather than of compounds or other substances, the term mass fraction can also refer to the ratio of the mass of an element to the total mass of a sample. In these contexts an alternative term is mass percent composition. The mass fraction of an element in a compound can be calculated from the compound's empirical formula or its chemical formula. Terminology Percent concentration does not refer to this quantity. This improper name persists, especially in elementary textbooks. In biology, the unit "%" is sometimes (incorrectly) used to denote mass concentration, also called mass/volume percentage. A solution with 1g of solute dissolved in a final volume of 100mL of solution would be labeled as "1%" or "1% m/v" (mass/volume). This is incorrect because the unit "%" can only be used for dimensionless quantities. Instead, the concentration should simply be given in units of g/mL. Percent solution or percentage solution are thus terms best reserved for mass percent solutions (m/m, m%, or mass solute/mass total solution after mixing), or volume percent solutions (v/v, v%, or volume solute per volume of total solution after mixing). The very ambiguous terms percent solution and percentage solutions The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are the two components of a mixture called? A. nutrients and a solvent B. acid and base C. solute and a solvent D. concentration and a solvent Answer:
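Document 4 of this record defines the mass fraction of a component as its mass divided by the total mass, with the fractions summing to unity. A minimal worked example with illustrative numbers:

```python
# Worked example of the mass-fraction definition in Document 4:
# w_i = m_i / m_total, and the fractions of all components sum to 1.
def mass_fractions(masses):
    """masses maps component name -> mass in any one consistent unit."""
    total = sum(masses.values())
    return {name: m / total for name, m in masses.items()}

# 25 g of solute dissolved in 100 g of solvent (values are illustrative):
w = mass_fractions({"solute": 25.0, "solvent": 100.0})
print(w)                # {'solute': 0.2, 'solvent': 0.8}
print(sum(w.values()))  # 1.0, the fractions sum to unity
```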
sciq-9488
multiple_choice
What do you call the transfer of thermal energy?
[ "precipitation", "humidity", "heat", "formation" ]
C
Relavent Documents: Document 0::: Thermofluids is a branch of science and engineering encompassing four intersecting fields: Heat transfer Thermodynamics Fluid mechanics Combustion The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids". Heat transfer Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer. Sections include : Energy transfer by heat, work and mass Laws of thermodynamics Entropy Refrigeration Techniques Properties and nature of pure substances Applications Engineering : Predicting and analysing the performance of machines Thermodynamics Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems. Fluid mechanics Fluid Mechanics the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance. Sections include: Flu Document 1::: Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species (mass transfer in the form of advection), either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system. Heat conduction, also called diffusion, is the direct microscopic exchanges of kinetic energy of particles (such as molecules) or quasiparticles (such as lattice waves) through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium. Such spontaneous heat transfer always occurs from a region of high temperature to another region of lower temperature, as described in the second law of thermodynamics. Heat convection occurs when the bulk flow of a fluid (gas or liquid) carries its heat through the fluid. All convective processes also move heat partly by diffusion, as well. 
The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". The former process is often called "forced convection." In this case, the fluid is forced to flow by use of a pump, fan, or other mechanical means. Thermal radiation occurs through a vacuum or any transparent medium (solid or fluid or gas). It is the transfer of energy by means of photons or electromagnetic waves governed by the same laws. Overview Heat Document 2::: Thermal engineering is a specialized sub-discipline of mechanical engineering that deals with the movement of heat energy and transfer. The energy can be transferred between two mediums or transformed into other forms of energy. A thermal engineer will have knowledge of thermodynamics and the process to convert generated energy from thermal sources into chemical, mechanical, or electrical energy. Many process plants use a wide variety of machines that utilize components that use heat transfer in some way. Many plants use heat exchangers in their operations. A thermal engineer must allow the proper amount of energy to be transferred for correct use. Too much and the components could fail, too little and the system will not function at all. Thermal engineers must have an understanding of economics and the components that they will be servicing or interacting with. Some components that a thermal engineer could work with include heat exchangers, heat sinks, bi-metals strips, radiators and many more. Some systems that require a thermal engineer include; Boilers, heat pumps, water pumps, engines, and more. Part of being a thermal engineer is to improve a current system and make it more efficient than the current system. Many industries employ thermal engineers, some main ones are the automotive manufacturing industry, commercial construction, and Heating Ventilation and Cooling industry. Job opportunities for a thermal engineer are very broad and promising. Thermal engineering may be practiced by mechanical engineers and chemical engineers. One or more of the following disciplines may be involved in solving a particular thermal engineering problem: Thermodynamics, Fluid mechanics, Heat transfer, or Mass transfer. One branch of knowledge used frequently in thermal engineering is that of thermofluids. Applications Boiler design Combustion engines Cooling systems Cooling of computer chips Heat exchangers HVAC Process Fired Heaters Refrigeration Systems Compressed Air Sy Document 3::: Perfect thermal contact of the surface of a solid with the environment (convective heat transfer) or another solid occurs when the temperatures of the mating surfaces are equal. Perfect thermal contact conditions Perfect thermal contact supposes that on the boundary surface there holds an equality of the temperatures and an equality of heat fluxes where are temperatures of the solid and environment (or mating solid), respectively; are thermal conductivity coefficients of the solid and mating laminar layer (or solid), respectively; is normal to the surface . If there is a heat source on the boundary surface , e.g. caused by sliding friction, the latter equality transforms in the following manner where is heat-generation rate per unit area. 
Document 4::: A thermal reservoir, also thermal energy reservoir or thermal bath, is a thermodynamic system with a heat capacity so large that the temperature of the reservoir changes relatively little when a much more significant amount of heat is added or extracted. As a conceptual simplification, it effectively functions as an infinite pool of thermal energy at a given, constant temperature. Since it can act as an inertial source and sink of heat, it is often also referred to as a heat reservoir or heat bath. Lakes, oceans and rivers often serve as thermal reservoirs in geophysical processes, such as the weather. In atmospheric science, large air masses in the atmosphere often function as thermal reservoirs. Since the temperature of a thermal reservoir does not change during the heat transfer, the change of entropy in the reservoir is The microcanonical partition sum of a heat bath of temperature has the property where is the Boltzmann constant. It thus changes by the same factor when a given amount of energy is added. The exponential factor in this expression can be identified with the reciprocal of the Boltzmann factor. For an engineering application, see geothermal heat pump. See also Thermal battery Thermal energy storage The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do you call the transfer of thermal energy? A. precipitation B. humidity C. heat D. formation Answer:
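Document 4 of this record states that a thermal reservoir's entropy changes by the transferred heat divided by its (constant) temperature. A small sketch of that relation; the lake example and its numbers are illustrative.

```python
# Sketch of the thermal-reservoir relation quoted in Document 4: because the
# reservoir temperature stays constant while heat Q is transferred, its
# entropy change is simply Q / T.
def reservoir_entropy_change(heat_j, temperature_k):
    """Entropy change in J/K of an ideal reservoir at fixed temperature."""
    if temperature_k <= 0:
        raise ValueError("absolute temperature must be positive")
    return heat_j / temperature_k

# A lake at 290 K absorbing 1 kJ of heat (illustrative numbers):
print(reservoir_entropy_change(1000.0, 290.0))  # ~3.45 J/K
```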
sciq-5126
multiple_choice
Crystallization separates mixtures based on differences in what, which usually increases with temperature?
[ "density", "viscosity", "humidity", "solubility" ]
D
Relavent Documents: Document 0::: In chemistry, fractional crystallization is a method of refining substances based on differences in their solubility. It fractionates via differences in crystallization (forming of crystals). If a mixture of two or more substances in solution are allowed to crystallize, for example by allowing the temperature of the solution to decrease or increase, the precipitate will contain more of the least soluble substance. The proportion of components in the precipitate will depend on their solubility products. If the solubility products are very similar, a cascade process will be needed to effectuate a complete separation. This technique is often used in chemical engineering to obtain pure substances, or to recover saleable products from waste solutions. Fractional crystallization can be used to separate solid-solid mixtures. An example of this is separating KNO3 and KClO3. See also Cold Water Extraction Fractional crystallization (geology) Fractional freezing Laser-heated pedestal growth Pumpable ice technology Recrystallization (chemistry) Seed crystal Single crystal Document 1::: Crystallization is the process by which solid forms, where the atoms or molecules are highly organized into a structure known as a crystal. Some ways by which crystals form are precipitating from a solution, freezing, or more rarely deposition directly from a gas. Attributes of the resulting crystal depend largely on factors such as temperature, air pressure, and in the case of liquid crystals, time of fluid evaporation. Crystallization occurs in two major steps. The first is nucleation, the appearance of a crystalline phase from either a supercooled liquid or a supersaturated solvent. The second step is known as crystal growth, which is the increase in the size of particles and leads to a crystal state. An important feature of this step is that loose particles form layers at the crystal's surface and lodge themselves into open inconsistencies such as pores, cracks, etc. The majority of minerals and organic molecules crystallize easily, and the resulting crystals are generally of good quality, i.e. without visible defects. However, larger biochemical particles, like proteins, are often difficult to crystallize. The ease with which molecules will crystallize strongly depends on the intensity of either atomic forces (in the case of mineral substances), intermolecular forces (organic and biochemical substances) or intramolecular forces (biochemical substances). Crystallization is also a chemical solid–liquid separation technique, in which mass transfer of a solute from the liquid solution to a pure solid crystalline phase occurs. In chemical engineering, crystallization occurs in a crystallizer. Crystallization is therefore related to precipitation, although the result is not amorphous or disordered, but a crystal. Process The crystallization process consists of two major events, nucleation and crystal growth which are driven by thermodynamic properties as well as chemical properties. Nucleation is the step where the solute molecules or atoms dispersed in the so Document 2::: In physical chemistry, supersaturation occurs with a solution when the concentration of a solute exceeds the concentration specified by the value of solubility at equilibrium. Most commonly the term is applied to a solution of a solid in a liquid, but it can also be applied to liquids and gases dissolved in a liquid. 
A supersaturated solution is in a metastable state; it may return to equilibrium by separation of the excess of solute from the solution, by dilution of the solution by adding solvent, or by increasing the solubility of the solute in the solvent. History Early studies of the phenomenon were conducted with sodium sulfate, also known as Glauber's Salt because, unusually, the solubility of this salt in water may decrease with increasing temperature. Early studies have been summarised by Tomlinson. It was shown that the crystallization of a supersaturated solution does not simply come from its agitation, (the previous belief) but from solid matter entering and acting as a "starting" site for crystals to form, now called "seeds". Expanding upon this, Gay-Lussac brought attention to the kinematics of salt ions and the characteristics of the container having an impact on the supersaturation state. He was also able to expand upon the number of salts with which a supersaturated solution can be obtained. Later Henri Löwel came to the conclusion that both nuclei of the solution and the walls of the container have a catalyzing effect on the solution that cause crystallization. Explaining and providing a model for this phenomenon has been a task taken on by more recent research. Désiré Gernez contributed to this research by discovering that nuclei must be of the same salt that is being crystallized in order to promote crystallization. Occurrence and examples Solid precipitate, liquid solvent A solution of a chemical compound in a liquid will become supersaturated when the temperature of the saturated solution is changed. In most cases solubility decreases wit Document 3::: In materials science, segregation is the enrichment of atoms, ions, or molecules at a microscopic region in a materials system. While the terms segregation and adsorption are essentially synonymous, in practice, segregation is often used to describe the partitioning of molecular constituents to defects from solid solutions, whereas adsorption is generally used to describe such partitioning from liquids and gases to surfaces. The molecular-level segregation discussed in this article is distinct from other types of materials phenomena that are often called segregation, such as particle segregation in granular materials, and phase separation or precipitation, wherein molecules are segregated in to macroscopic regions of different compositions. Segregation has many practical consequences, ranging from the formation of soap bubbles, to microstructural engineering in materials science, to the stabilization of colloidal suspensions. Segregation can occur in various materials classes. In polycrystalline solids, segregation occurs at defects, such as dislocations, grain boundaries, stacking faults, or the interface between two phases. In liquid solutions, chemical gradients exist near second phases and surfaces due to combinations of chemical and electrical effects. Segregation which occurs in well-equilibrated systems due to the instrinsic chemical properties of the system is termed equilibrium segregation. Segregation that occurs due to the processing history of the sample (but that would disappear at long times) is termed non-equilibrium segregation. History Equilibrium segregation is associated with the lattice disorder at interfaces, where there are sites of energy different from those within the lattice at which the solute atoms can deposit themselves. 
The equilibrium segregation is so termed because the solute atoms segregate themselves to the interface or surface in accordance with the statistics of thermodynamics in order to minimize the overall free energy of t Document 4::: In chemistry, recrystallization is a technique used to purify chemicals. By dissolving a mixture of a compound and impurities in an appropriate solvent, either the desired compound or impurities can be removed from the solution, leaving the other behind. It is named for the crystals often formed when the compound precipitates out. Alternatively, recrystallization can refer to the natural growth of larger ice crystals at the expense of smaller ones. Chemistry In chemistry, recrystallization is a procedure for purifying compounds. The most typical situation is that a desired "compound A" is contaminated by a small amount of "impurity B". There are various methods of purification that may be attempted (see Separation process), recrystallization being one of them. There are also different recrystallization techniques that can be used such as: Single-solvent recrystallization Typically, the mixture of "compound A" and "impurity B" is dissolved in the smallest amount of hot solvent to fully dissolve the mixture, thus making a saturated solution. The solution is then allowed to cool. As the solution cools the solubility of compounds in the solution drops. This results in the desired compound dropping (recrystallizing) from the solution. The slower the rate of cooling, the bigger the crystals form. In an ideal situation the solubility product of the impurity, B, is not exceeded at any temperature. In that case, the solid crystals will consist of pure A and all the impurities will remain in the solution. The solid crystals are collected by filtration and the filtrate is discarded. If the solubility product of the impurity is exceeded, some of the impurities will co-precipitate. However, because of the relatively low concentration of the impurity, its concentration in the precipitated crystals will be less than its concentration in the original solid. Repeated recrystallization will result in an even purer crystalline precipitate. The purity is checked after each recrysta The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Crystallization separates mixtures based on differences in what, which usually increases with temperature? A. density B. viscosity C. humidity D. solubility Answer:
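Documents 0 and 1 of this record explain that cooling a saturated solution crystallizes the excess solute because solubility usually rises with temperature. A hedged sketch of the corresponding mass balance, with illustrative (not measured) solubility numbers:

```python
# Hedged sketch of the cooling-crystallization mass balance implied by
# Documents 0 and 1: solubility usually rises with temperature, so cooling
# a saturated solution forces the excess solute out as crystals. The
# solubility figure below is illustrative, not a measured value.
def crystal_yield_g(solute_g, water_g, cold_solubility_g_per_100g):
    """Mass of crystals recovered when the solution is cooled until the
    remaining dissolved solute just saturates the solvent."""
    still_dissolved = cold_solubility_g_per_100g * water_g / 100.0
    return max(0.0, solute_g - still_dissolved)

# 60 g of solute in 100 g of water, cooled to a 20 g/100 g solubility:
print(crystal_yield_g(60.0, 100.0, 20.0))  # 40.0 g crystallize out
```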
sciq-1577
multiple_choice
Glycolysis harvests chemical energy by oxidizing glucose to what?
[ "chlorophyll", "pyruvate", "cellulose", "oxygen" ]
B
Relavent Documents: Document 0::: Bioenergetics is a field in biochemistry and cell biology that concerns energy flow through living systems. This is an active area of biological research that includes the study of the transformation of energy in living organisms and the study of thousands of different cellular processes such as cellular respiration and the many other metabolic and enzymatic processes that lead to production and utilization of energy in forms such as adenosine triphosphate (ATP) molecules. That is, the goal of bioenergetics is to describe how living organisms acquire and transform energy in order to perform biological work. The study of metabolic pathways is thus essential to bioenergetics. Overview Bioenergetics is the part of biochemistry concerned with the energy involved in making and breaking of chemical bonds in the molecules found in biological organisms. It can also be defined as the study of energy relationships and energy transformations and transductions in living organisms. The ability to harness energy from a variety of metabolic pathways is a property of all living organisms. Growth, development, anabolism and catabolism are some of the central processes in the study of biological organisms, because the role of energy is fundamental to such biological processes. Life is dependent on energy transformations; living organisms survive because of exchange of energy between living tissues/ cells and the outside environment. Some organisms, such as autotrophs, can acquire energy from sunlight (through photosynthesis) without needing to consume nutrients and break them down. Other organisms, like heterotrophs, must intake nutrients from food to be able to sustain energy by breaking down chemical bonds in nutrients during metabolic processes such as glycolysis and the citric acid cycle. Importantly, as a direct consequence of the First Law of Thermodynamics, autotrophs and heterotrophs participate in a universal metabolic network—by eating autotrophs (plants), heterotrophs ha Document 1::: Primary nutritional groups are groups of organisms, divided in relation to the nutrition mode according to the sources of energy and carbon, needed for living, growth and reproduction. The sources of energy can be light or chemical compounds; the sources of carbon can be of organic or inorganic origin. The terms aerobic respiration, anaerobic respiration and fermentation (substrate-level phosphorylation) do not refer to primary nutritional groups, but simply reflect the different use of possible electron acceptors in particular organisms, such as O2 in aerobic respiration, or nitrate (), sulfate () or fumarate in anaerobic respiration, or various metabolic intermediates in fermentation. Primary sources of energy Phototrophs absorb light in photoreceptors and transform it into chemical energy. Chemotrophs release chemical energy. The freed energy is stored as potential energy in ATP, carbohydrates, or proteins. Eventually, the energy is used for life processes such as moving, growth and reproduction. Plants and some bacteria can alternate between phototrophy and chemotrophy, depending on the availability of light. Primary sources of reducing equivalents Organotrophs use organic compounds as electron/hydrogen donors. Lithotrophs use inorganic compounds as electron/hydrogen donors. 
The electrons or hydrogen atoms from reducing equivalents (electron donors) are needed by both phototrophs and chemotrophs in reduction-oxidation reactions that transfer energy in the anabolic processes of ATP synthesis (in heterotrophs) or biosynthesis (in autotrophs). The electron or hydrogen donors are taken up from the environment. Organotrophic organisms are often also heterotrophic, using organic compounds as sources of both electrons and carbon. Similarly, lithotrophic organisms are often also autotrophic, using inorganic sources of electrons and CO2 as their inorganic carbon source. Some lithotrophic bacteria can utilize diverse sources of electrons, depending on their availability.

Document 2::: Cellular respiration is the process by which biological fuels are oxidized in the presence of an inorganic electron acceptor, such as oxygen, to drive the bulk production of adenosine triphosphate (ATP), which contains energy. Cellular respiration may be described as a set of metabolic reactions and processes that take place in the cells of organisms to convert chemical energy from nutrients into ATP, and then release waste products. Cellular respiration is a vital process that happens in the cells of living organisms, including humans, plants, and animals. It's how cells produce energy to power all the activities necessary for life. The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, producing large amounts of energy (ATP). Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it is an unusual one because of the slow, controlled release of energy from the series of reactions. Nutrients that are commonly used by animal and plant cells in respiration include sugar, amino acids and fatty acids, and the most common oxidizing agent is molecular oxygen (O2). The chemical energy stored in ATP (the bond of its third phosphate group to the rest of the molecule can be broken, allowing more stable products to form, thereby releasing energy for use by the cell) can then be used to drive processes requiring energy, including biosynthesis, locomotion or transportation of molecules across cell membranes. Aerobic respiration Aerobic respiration requires oxygen (O2) in order to create ATP. Although carbohydrates, fats and proteins are consumed as reactants, aerobic respiration is the preferred method of pyruvate production in glycolysis, and requires pyruvate to be transported to the mitochondria in order to be fully oxidized by the citric acid cycle.

Document 3::: Ethanol fermentation, also called alcoholic fermentation, is a biological process which converts sugars such as glucose, fructose, and sucrose into cellular energy, producing ethanol and carbon dioxide as by-products. Because yeasts perform this conversion in the absence of oxygen, alcoholic fermentation is considered an anaerobic process. It also takes place in some species of fish (including goldfish and carp), where (along with lactic acid fermentation) it provides energy when oxygen is scarce. Ethanol fermentation is the basis for alcoholic beverages, ethanol fuel and bread dough rising. Biochemical process of fermentation of sucrose The chemical equations below summarize the fermentation of sucrose (C12H22O11) into ethanol (C2H5OH).
Alcoholic fermentation converts one mole of glucose into two moles of ethanol and two moles of carbon dioxide, producing two moles of ATP in the process. C6H12O6 → 2 C2H5OH + 2 CO2 Sucrose is a sugar composed of a glucose linked to a fructose. In the first step of alcoholic fermentation, the enzyme invertase cleaves the glycosidic linkage between the glucose and fructose molecules. C12H22O11 + H2O + invertase → 2 C6H12O6 Next, each glucose molecule is broken down into two pyruvate molecules in a process known as glycolysis. Glycolysis is summarized by the equation: C6H12O6 + 2 ADP + 2 Pi + 2 NAD+ → 2 CH3COCOO− + 2 ATP + 2 NADH + 2 H2O + 2 H+ CH3COCOO− is pyruvate, and Pi is inorganic phosphate. Finally, pyruvate is converted to ethanol and CO2 in two steps, regenerating oxidized NAD+ needed for glycolysis: 1. CH3COCOO− + H+ → CH3CHO + CO2 catalyzed by pyruvate decarboxylase 2. CH3CHO + NADH + H+ → C2H5OH + NAD+ This reaction is catalyzed by alcohol dehydrogenase (ADH1 in baker's yeast). Document 4::: Amylolytic process or amylolysis is the conversion of starch into sugar by the action of acids or enzymes such as amylase. Starch begins to pile up inside the leaves of plants during times of light when starch is able to be produced by photosynthetic processes. This ability to make starch disappears in the dark due to the lack of illumination; there is insufficient amount of light produced during the dark needed to carry this reaction forward. Turning starch into sugar is done by the enzyme amylase. Different pathways of amylase & location of amylase activity The process in which amylase breaks down starch for sugar consumption is not consistent with all organisms that use amylase to breakdown stored starch. There are different amylase pathways that are involved in starch degradation. The occurrence of starch degradation into sugar by the enzyme amylase was most commonly known to take place in the Chloroplast, but that has been proven wrong. One example is the spinach plant, in which the chloroplast contains both alpha and beta amylase (They are different versions of amylase involved in the breakdown of starch and they differ in their substrate specificity). In spinach leaves, the extrachloroplastic region contains the highest level of amylase degradation of starch. The difference between chloroplast and extrachloroplastic starch degradation is in the amylase pathway they prefer; either beta or alpha amylase. For spinach leaves, Alpha-amylase is preferred but for plants/organisms like wheat, barley, peas, etc. the Beta-amylase is preferred. Usage The amylolytic process is used in the brewing of alcohol from grains. Since grains contain starches but little to no simple sugars, the sugar needed to produce alcohol is derived from starch via the amylolytic process. In beer brewing, this is done through malting. In sake brewing, the mold Aspergillus oryzae provides amylolysis, and in Tapai, Saccharomyces cerevisiae. The amylolytic process can also be used to allow The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Glycolysis harvests chemical energy by oxidizing glucose to what? A. chlorophyll B. pyruvate C. cellulose D. oxygen Answer:
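The excerpt above fixes the stoichiometry completely: one mole of glucose yields two moles each of ethanol, CO2, and ATP. The short sketch below checks that the overall equation C6H12O6 → 2 C2H5OH + 2 CO2 conserves mass and converts a glucose mass into a theoretical ethanol yield; the atomic masses are standard rounded values, and the helper function is introduced here for illustration.

```python
# Mass-balance check for alcoholic fermentation: C6H12O6 -> 2 C2H5OH + 2 CO2.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol, rounded

def molar_mass(formula: dict) -> float:
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

glucose = molar_mass({"C": 6, "H": 12, "O": 6})  # ~180.16 g/mol
ethanol = molar_mass({"C": 2, "H": 6, "O": 1})   # ~46.07 g/mol
co2     = molar_mass({"C": 1, "O": 2})           # ~44.01 g/mol

# Reactant and product masses per mole of glucose must match (atoms balance).
assert abs(glucose - (2 * ethanol + 2 * co2)) < 1e-9

# Theoretical ethanol yield from 100 g of glucose (2 ATP are also produced).
print(f"{100.0 / glucose * 2 * ethanol:.1f} g ethanol")  # ~51.1 g
```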
sciq-2718
multiple_choice
What does a pollinator pick up on its body and carry directly to another plant of the same species?
[ "egg", "pollen", "pathogen", "spore" ]
B
Relavent Documents: Document 0::: Zoophily, or zoogamy, is a form of pollination whereby pollen is transferred by animals, usually by invertebrates but in some cases vertebrates, particularly birds and bats, but also by other animals. Zoophilous species frequently have evolved mechanisms to make themselves more appealing to the particular type of pollinator, e.g. brightly colored or scented flowers, nectar, and appealing shapes and patterns. These plant-animal relationships are often mutually beneficial because of the food source provided in exchange for pollination. Pollination is defined as the transfer of pollen from the anther to the stigma. There are many vectors for pollination, including abiotic (wind and water) and biotic (animal). There are benefits and costs associated with any vector. For instance, using animal pollination is beneficial because the process is more directed and often results in pollination. At the same time it is costly for the plant to produce rewards, such as nectar, to attract animal pollinators. Not producing such rewards is one benefit of using abiotic pollinators, but a cost associated with this approach is that the pollen may be distributed more randomly. In general, pollination by animals occurs after they reach inside the flowers for nectar. While feeding on the nectar, the animal rubs or touches the stamens and is covered in pollen. Some of this pollen will be deposited on the stigma of the next flower it visits, pollinating the flower. Insect pollination This is known as entomophily. There are many different subtypes. Bee pollination (melittophily) There are diverse types of bees (such as honeybees, bumblebees, and orchid bees), forming large groups that are quite distinctive in size, tongue length and behaviour (some solitary, some colonial); thus generalization about bee pollination is difficult. Some plants can only be pollinated by bees because their anthers release pollen internally, and it must be shaken out by buzz pollination (also known as "sonicati Document 1::: The term oligolecty is used in pollination ecology to refer to bees that exhibit a narrow, specialized preference for pollen sources, typically to a single family or genus of flowering plants. The preference may occasionally extend broadly to multiple genera within a single plant family, or be as narrow as a single plant species. When the choice is very narrow, the term monolecty is sometimes used, originally meaning a single plant species but recently broadened to include examples where the host plants are related members of a single genus. The opposite term is polylectic and refers to species that collect pollen from a wide range of species. The most familiar example of a polylectic species is the domestic honey bee. Oligolectic pollinators are often called oligoleges or simply specialist pollinators, and this behavior is especially common in the bee families Andrenidae and Halictidae, though there are thousands of species in hundreds of genera, in essentially all known bee families; in certain areas of the world, such as deserts, oligoleges may represent half or more of all the resident bee species. Attempts have been made to determine whether a narrow host preference is due to an inability of the bee larvae to digest and develop on a variety of pollen types, or a limitation of the adult bee's learning and perception (i.e., they simply do not recognize other flowers as potential food sources), and most of the available evidence suggests the latter. 
However, a few plants whose pollen contains toxic substances (e.g., Toxicoscordion and related genera in the Melanthieae) are visited by oligolectic bees, and these may fall into the former category. The evidence from large-scale phylogenetic analyses of bee evolution suggests that, for most groups of bees, oligolecty is the ancestral condition and polylectic lineages arose from among those ancestral specialists. There are some cases where oligoleges collect their host plant's pollen as larval food but, for various r Document 2::: An elaiophore (from Gr. elaion -oil and phorein -carry) is a plant organ that secretes oil. A distinction is made in: epithelial elaiophores: oil glands trichome elaiophores: glandular hairs. The oils consist of fatty acids and/or glycerides, but may also contain other components such as aldehydes, amino acids, carbohydrates, carotenoids, hydrocarbons, ketones, phenolic compounds, saponins and terpenes. Elaiophores occur in the flowers of some families, such as Malpighiaceae, Scrophulariaceae, Iridaceae, Cucurbitaceae, Primulaceae and Solanaceae. Elaiophores can be present on the axial part of the sepals or corollas, on the surface of the lip, at the base of stamens (as in Lysimachia vulgaris) and also on the callus. The oils secreted by the elaiophores act as attractants for pollinating insects. Representatives of several bee families collect these oils to add to the food of the larvae or to line the nest, including the families and subfamilies Melittidae, Ctenoplectrini, Apidae and Anthophorini. Bees of the subfamily Ctenoplectrini have specialized oil-collecting structures such as pads or combs on the ventral thorax or on the front and middle legs. Bees visiting flowers with trichome elaiophores generally have pads, and bees visiting flowers with epithelial elaiophores have brush-like combs. Document 3::: Insect cognition describes the mental capacities and study of those capacities in insects. The field developed from comparative psychology where early studies focused more on animal behavior. Researchers have examined insect cognition in bees, fruit flies, and wasps.   Research questions consist of experiments aimed to evaluate insects abilities such as perception, emotions attention, memory (wasp multiple nest), spatial cognition, tools use, problem solving, and concepts. Unlike in animal behavior the concept of group cognition plays a big part in insect studies. It is hypothesized some insect classes like ants and bees think with a group cognition to function within their societies; more recent studies show that individual cognition exists and plays a role in overall group cognitive task. Insect cognition experiments have been more prevalent in the past decade than prior. It is logical for the understanding of cognitive capacities as adaptations to differing ecological niches under the Cognitive faculty by species when analyzing behaviors, this means viewing behaviors as adaptations to an individual's environment and not weighing them more advanced when compared to other different individuals. Insect foraging cognition Insects inhabit many diverse and complex environments within which they must find food. Cognition shapes how an insect comes to find its food. The particular cognitive abilities used by insects in finding food has been the focus of much scientific inquiry. The social insects are often study subjects and much has been discovered about the intelligence of insects by investigating the abilities of bee species. 
Fruit flies are also common study subjects. Learning and memory Learning biases Through learning, insects can increase their foraging efficiency, decreasing the time spent searching for food which allows for more time and energy to invest in other fitness related activities, such as searching for mates. Depending on the ecology of the Document 4::: Entomophily or insect pollination is a form of pollination whereby pollen of plants, especially but not only of flowering plants, is distributed by insects. Flowers pollinated by insects typically advertise themselves with bright colours, sometimes with conspicuous patterns (honey guides) leading to rewards of pollen and nectar; they may also have an attractive scent which in some cases mimics insect pheromones. Insect pollinators such as bees have adaptations for their role, such as lapping or sucking mouthparts to take in nectar, and in some species also pollen baskets on their hind legs. This required the coevolution of insects and flowering plants in the development of pollination behaviour by the insects and pollination mechanisms by the flowers, benefiting both groups. Both the size and the density of a population are known to affect pollination and subsequent reproductive performance. Coevolution History The early spermatophytes (seed plants) were largely dependent on the wind to carry their pollen from one plant to another. Prior to the appearance of flowering plants some gymnosperms, such as Bennettitales, developed flower-like structures that were likely insect pollinated. Insects pollination for gymnosperms likely originated in the Permian period. Candidates for pollinators include extinct long proboscis insect groups, including Aneuretopsychid, Mesopsychid and Pseudopolycentropodid scorpionflies, Kalligrammatid and Paradoxosisyrine lacewings and Zhangsolvid flies, as well as some extant families that specialised on gymnosperms before switching to angiosperms, including Nemestrinid, Tabanid and Acrocerid flies. Living cycads have mutualistic relationships with specific insect species (typically beetles) which pollinate them. Such relationships extend back to at least the late Mesozoic, with both oedemerid beetles (which today are exclusively found on flowering plants) and boganiid beetles (which still pollinate cycads today) from the Cretaceous being The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What does a pollinator pick from its body and carry directly to another plant of the same species? A. egg B. pollen C. pathogen D. spore Answer:
sciq-3570
multiple_choice
Cells selective for different odorants are interspersed in what anatomical cavity?
[ "facial", "mucus", "abdominal", "nasal" ]
D
Relavent Documents: Document 0::: Olfactory glands, also known as Bowman's glands, are a type of nasal gland situated in the part of the olfactory mucosa beneath the olfactory epithelium, that is the lamina propria, a connective tissue also containing fibroblasts, blood vessels and bundles of fine axons from the olfactory neurons. An olfactory gland consists of an acinus in the lamina propria and a secretory duct going out through the olfactory epithelium. Electron microscopy studies show that olfactory glands contain cells with large secretory vesicles. Olfactory glands secrete the gel-forming mucin protein MUC5B. They might secrete proteins such as lactoferrin, lysozyme, amylase and IgA, similarly to serous glands. The exact composition of the secretions from olfactory glands is unclear, but there is evidence that they produce odorant-binding protein. Function The olfactory glands are tubuloalveolar glands surrounded by olfactory receptors and sustentacular cells in the olfactory epithelium. These glands produce mucous to lubricate the olfactory epithelium and dissolve odorant-containing gases. Several olfactory binding proteins are produced from the olfactory glands that help facilitate the transportation of odorants to the olfactory receptors. These cells exhibit the mRNA to transform growth factor α, stimulating the production of new olfactory receptor cells. See also William Bowman List of distinct cell types in the adult human body Document 1::: In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from same type cells to act together in a function. Tissues of different types combine to form an organ which has a specific function. The intestinal wall for example is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system. An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs. The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body. Animals Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. 
The same is true for the musculoskeletal system, because of the relationship between the muscular and skeletal systems.

Document 2:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:

Document 3::: This table lists the epithelia of different organs of the human body.

Document 4::: Sniffing is a perceptually-relevant behavior, defined as the active sampling of odors through the nasal cavity for the purpose of information acquisition. This behavior, displayed by all terrestrial vertebrates, is typically identified based upon changes in respiratory frequency and/or amplitude, and is often studied in the context of odor guided behaviors and olfactory perceptual tasks. Sniffing is quantified by measuring intra-nasal pressure or flow of air or, while less accurate, through a strain gauge on the chest to measure total respiratory volume. Strategies for sniffing behavior vary depending upon the animal, with small animals (rats, mice, hamsters) displaying sniffing frequencies ranging from 4 to 12 Hz but larger animals (humans) sniffing at much lower frequencies, usually less than 2 Hz. Subserving sniffing behaviors, evidence for an "olfactomotor" circuit in the brain exists, wherein perception or expectation of an odor can trigger the brain's respiratory center to allow for the modulation of sniffing frequency and amplitude and thus acquisition of odor information. Sniffing is analogous to other stimulus sampling behaviors, including visual saccades, active touch, and whisker movements in small animals (viz., whisking). Atypical sniffing has been reported in cases of neurological disorders, especially those disorders characterized by impaired motor function and olfactory perception. Background and history of sniffing Background The behavior of sniffing incorporates changes in air flow within the nose. This can involve changes in the depth of inhalation and the frequency of inhalations. Both of these entail modulations in the manner whereby air flows within the nasal cavity and through the nostrils. As a consequence, when the air being breathed is odorized, odors can enter and leave the nasal cavity with each sniff. The same applies regardless of what gas is being inhaled, including toxins and solvents, and other industrial chemicals which may be inhaled.

The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Cells selective for different odorants are interspersed in what anatomical cavity?
A. facial
B. mucus
C. abdominal
D. nasal
Answer:
sciq-5055
multiple_choice
What contracts to move food throughout the gastrointestinal tract?
[ "vessels", "nerves", "fluids", "muscles" ]
D
Relavent Documents: Document 0::: The gastrointestinal wall of the gastrointestinal tract is made up of four layers of specialised tissue. From the inner cavity of the gut (the lumen) outwards, these are: Mucosa Submucosa Muscular layer Serosa or adventitia The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the lumen of the tract and comes into direct contact with digested food (chyme). The mucosa itself is made up of three layers: the epithelium, where most digestive, absorptive and secretory processes occur; the lamina propria, a layer of connective tissue, and the muscularis mucosae, a thin layer of smooth muscle. The submucosa contains nerves including the submucous plexus (also called Meissner's plexus), blood vessels and elastic fibres with collagen, that stretches with increased capacity but maintains the shape of the intestine. The muscular layer surrounds the submucosa. It comprises layers of smooth muscle in longitudinal and circular orientation that also helps with continued bowel movements (peristalsis) and the movement of digested material out of and along the gut. In between the two layers of muscle lies the myenteric plexus (also called Auerbach's plexus). The serosa/adventitia are the final layers. These are made up of loose connective tissue and coated in mucus so as to prevent any friction damage from the intestine rubbing against other tissue. The serosa is present if the tissue is within the peritoneum, and the adventitia if the tissue is retroperitoneal. Structure When viewed under the microscope, the gastrointestinal wall has a consistent general form, but with certain parts differing along its course. Mucosa The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the cavity (lumen) of the tract and comes into direct contact with digested food (chyme). The mucosa is made up of three layers: The epithelium is the innermost layer. It is where most digestive, absorptive and secretory processes occur. The lamina propr Document 1::: Gastrointestinal physiology is the branch of human physiology that addresses the physical function of the gastrointestinal (GI) tract. The function of the GI tract is to process ingested food by mechanical and chemical means, extract nutrients and excrete waste products. The GI tract is composed of the alimentary canal, that runs from the mouth to the anus, as well as the associated glands, chemicals, hormones, and enzymes that assist in digestion. The major processes that occur in the GI tract are: motility, secretion, regulation, digestion and circulation. The proper function and coordination of these processes are vital for maintaining good health by providing for the effective digestion and uptake of nutrients. Motility The gastrointestinal tract generates motility using smooth muscle subunits linked by gap junctions. These subunits fire spontaneously in either a tonic or a phasic fashion. Tonic contractions are those contractions that are maintained from several minutes up to hours at a time. These occur in the sphincters of the tract, as well as in the anterior stomach. The other type of contractions, called phasic contractions, consist of brief periods of both relaxation and contraction, occurring in the posterior stomach and the small intestine, and are carried out by the muscularis externa. Motility may be overactive (hypermotility), leading to diarrhea or vomiting, or underactive (hypomotility), leading to constipation or vomiting; either may cause abdominal pain. 
Stimulation The stimulation for these contractions likely originates in modified smooth muscle cells called interstitial cells of Cajal. These cells cause spontaneous cycles of slow wave potentials that can cause action potentials in smooth muscle cells. They are associated with the contractile smooth muscle via gap junctions. These slow wave potentials must reach a threshold level for the action potential to occur, whereupon Ca2+ channels on the smooth muscle open and an action potential Document 2::: The gastrocolic reflex or gastrocolic response is a physiological reflex that controls the motility, or peristalsis, of the gastrointestinal tract following a meal. It involves an increase in motility of the colon consisting primarily of giant migrating contractions, or migrating motor complexes, in response to stretch in the stomach following ingestion and byproducts of digestion entering the small intestine. Thus, this reflex is responsible for the urge to defecate following a meal. The small intestine also shows a similar motility response. The gastrocolic reflex's function in driving existing intestinal contents through the digestive system helps make way for ingested food. The reflex was demonstrated by myoelectric recordings in the colons of animals and humans, which showed an increase in electrical activity within as little as 15 minutes after eating. The recordings also demonstrated that the gastrocolic reflex is uneven in its distribution throughout the colon. The sigmoid colon is more greatly affected than the rest of the colon in terms of a phasic response, recurring periods of contraction followed by relaxation, in order to propel food distally into the rectum; however, the tonic response across the colon is uncertain. These contractions are generated by the muscularis externa stimulated by the myenteric plexus. When pressure within the rectum becomes increased, the gastrocolic reflex acts as a stimulus for defecation. A number of neuropeptides have been proposed as mediators of the gastrocolic reflex. These include serotonin, neurotensin, cholecystokinin, prostaglandin E1, and gastrin. Coffee can induce a significant response, with 29% of subjects in a study reporting an urge to defecate after ingestion, and manometry showing a reaction typically between 4 and 30 minutes after consumption and potentially lasting for more than 30 minutes. Decaffeinated coffee is also capable of generating a similar effect, albeit slightly weaker. Essentially, this m Document 3::: The esophagus (American English) or oesophagus (British English, see spelling differences; both ; : (o)esophagi or (o)esophaguses), colloquially known also as the food pipe or gullet, is an organ in vertebrates through which food passes, aided by peristaltic contractions, from the pharynx to the stomach. The esophagus is a fibromuscular tube, about long in adults, that travels behind the trachea and heart, passes through the diaphragm, and empties into the uppermost region of the stomach. During swallowing, the epiglottis tilts backwards to prevent food from going down the larynx and lungs. The word oesophagus is from Ancient Greek οἰσοφάγος (oisophágos), from οἴσω (oísō), future form of φέρω (phérō, “I carry”) + ἔφαγον (éphagon, “I ate”). The wall of the esophagus from the lumen outwards consists of mucosa, submucosa (connective tissue), layers of muscle fibers between layers of fibrous tissue, and an outer layer of connective tissue. 
The mucosa is a stratified squamous epithelium of around three layers of squamous cells, which contrasts with the single layer of columnar cells of the stomach. The transition between these two types of epithelium is visible as a zig-zag line. Most of the muscle is smooth muscle, although striated muscle predominates in its upper third. It has two muscular rings or sphincters in its wall, one at the top and one at the bottom. The lower sphincter helps to prevent reflux of acidic stomach content. The esophagus has a rich blood supply and venous drainage. Its smooth muscle is innervated by involuntary nerves (sympathetic nerves via the sympathetic trunk and parasympathetic nerves via the vagus nerve) and in addition voluntary nerves (lower motor neurons) which are carried in the vagus nerve to innervate its striated muscle. The esophagus passes through the thoracic cavity and then through the diaphragm into the stomach.

Document 4::: The Joan Mott Prize Lecture is a prize lecture awarded annually by The Physiological Society in honour of Joan Mott. Laureates of the award have included: - Intestinal absorption of sugars and peptides: from textbook to surprises See also Physiological Society Annual Review Prize Lecture

The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What contracts to move food throughout the gastrointestinal tract?
A. vessels
B. nerves
C. fluids
D. muscles
Answer:
sciq-5217
multiple_choice
As a diver descends, the increase in pressure causes the body’s air pockets in the ears and lungs to do what?
[ "heat up", "blow up", "expand", "compress" ]
D
Relavent Documents: Document 0::: Suction is the result of air pressure differential between areas. Removing air from a space results in a pressure differential. Suction pressure is therefore limited by external air pressure. Even a perfect vacuum cannot suck with more pressure than is available in the surrounding environment. Suctions can form on the sea, for example, when a ship founders. When the pressure in one part of a physical system is reduced relative to another, the fluid in the higher pressure region will exert a force relative to the region of lowered pressure, referred to as pressure-gradient force. Pressure reduction may be static, as in a piston and cylinder arrangement, or dynamic, as in the case of a vacuum cleaner when air flow results in a reduced pressure region. When animals breathe, the diaphragm and muscles around the rib cage cause a change of volume in the lungs. The increased volume of the chest cavity decreases the pressure inside, creating an imbalance with the ambient air pressure, resulting in suction. See also Pump Vacuum pump Suction devices used in medicine Implosion Suction cup Suction cupping Document 1::: Speech science refers to the study of production, transmission and perception of speech. Speech science involves anatomy, in particular the anatomy of the oro-facial region and neuroanatomy, physiology, and acoustics. Speech production The production of speech is a highly complex motor task that involves approximately 100 orofacial, laryngeal, pharyngeal, and respiratory muscles. Precise and expeditious timing of these muscles is essential for the production of temporally complex speech sounds, which are characterized by transitions as short as 10 ms between frequency bands and an average speaking rate of approximately 15 sounds per second. Speech production requires airflow from the lungs (respiration) to be phonated through the vocal folds of the larynx (phonation) and resonated in the vocal cavities shaped by the jaw, soft palate, lips, tongue and other articulators (articulation). Respiration Respiration is the physical process of gas exchange between an organism and its environment involving four steps (ventilation, distribution, perfusion and diffusion) and two processes (inspiration and expiration). Respiration can be described as the mechanical process of air flowing into and out of the lungs on the principle of Boyle's law, stating that, as the volume of a container increases, the air pressure will decrease. This relatively negative pressure will cause air to enter the container until the pressure is equalized. During inspiration of air, the diaphragm contracts and the lungs expand drawn by pleurae through surface tension and negative pressure. When the lungs expand, air pressure becomes negative compared to atmospheric pressure and air will flow from the area of higher pressure to fill the lungs. Forced inspiration for speech uses accessory muscles to elevate the rib cage and enlarge the thoracic cavity in the vertical and lateral dimensions. During forced expiration for speech, muscles of the trunk and abdomen reduce the size of the thoracic cavity by Document 2::: The control of ventilation is the physiological mechanisms involved in the control of breathing, which is the movement of air into and out of the lungs. Ventilation facilitates respiration. Respiration refers to the utilization of oxygen and balancing of carbon dioxide by the body as a whole, or by individual cells in cellular respiration. 
The most important function of breathing is the supplying of oxygen to the body and balancing of the carbon dioxide levels. Under most conditions, the partial pressure of carbon dioxide (PCO2), or concentration of carbon dioxide, controls the respiratory rate. The peripheral chemoreceptors that detect changes in the levels of oxygen and carbon dioxide are located in the arterial aortic bodies and the carotid bodies. Central chemoreceptors are primarily sensitive to changes in the pH of the blood, (resulting from changes in the levels of carbon dioxide) and they are located on the medulla oblongata near to the medullar respiratory groups of the respiratory center. Information from the peripheral chemoreceptors is conveyed along nerves to the respiratory groups of the respiratory center. There are four respiratory groups, two in the medulla and two in the pons. The two groups in the pons are known as the pontine respiratory group. Dorsal respiratory group – in the medulla Ventral respiratory group – in the medulla Pneumotaxic center – various nuclei of the pons Apneustic center – nucleus of the pons From the respiratory center, the muscles of respiration, in particular the diaphragm, are activated to cause air to move in and out of the lungs. Control of respiratory rhythm Ventilatory pattern Breathing is normally an unconscious, involuntary, automatic process. The pattern of motor stimuli during breathing can be divided into an inhalation stage and an exhalation stage. Inhalation shows a sudden, ramped increase in motor discharge to the respiratory muscles (and the pharyngeal constrictor muscles). Before the end of inh Document 3::: The diving reflex, also known as the diving response and mammalian diving reflex, is a set of physiological responses to immersion that overrides the basic homeostatic reflexes, and is found in all air-breathing vertebrates studied to date. It optimizes respiration by preferentially distributing oxygen stores to the heart and brain, enabling submersion for an extended time. The diving reflex is exhibited strongly in aquatic mammals, such as seals, otters, dolphins, and muskrats, and exists as a lesser response in other animals, including human babies up to 6 months old (see infant swimming), and diving birds, such as ducks and penguins. Adult humans generally exhibit a mild response, the dive-hunting Sama-Bajau people being a notable outlier. The diving reflex is triggered specifically by chilling and wetting the nostrils and face while breath-holding, and is sustained via neural processing originating in the carotid chemoreceptors. The most noticeable effects are on the cardiovascular system, which displays peripheral vasoconstriction, slowed heart rate, redirection of blood to the vital organs to conserve oxygen, release of red blood cells stored in the spleen, and, in humans, heart rhythm irregularities. Although aquatic animals have evolved profound physiological adaptations to conserve oxygen during submersion, the apnea and its duration, bradycardia, vasoconstriction, and redistribution of cardiac output occur also in terrestrial animals as a neural response, but the effects are more profound in natural divers. Physiological response When the face is submerged and water fills the nostrils, sensory receptors sensitive to wetness within the nasal cavity and other areas of the face supplied by the fifth (V) cranial nerve (the trigeminal nerve) relay the information to the brain. 
The tenth (X) cranial nerve, (the vagus nerve) – part of the autonomic nervous system – then produces bradycardia and other neural pathways elicit peripheral vasoconstriction, restri Document 4::: Elastic recoil means the rebound of the lungs after having been stretched by inhalation, or rather, the ease with which the lung rebounds. With inhalation, the intrapleural pressure (the pressure within the pleural cavity) of the lungs decreases. Relaxing the diaphragm during expiration allows the lungs to recoil and regain the intrapleural pressure experienced previously at rest. Elastic recoil is inversely related to lung compliance. This phenomenon occurs because of the elastin in the elastic fibers in the connective tissue of the lungs, and because of the surface tension of the film of fluid that lines the alveoli. As water molecules pull together, they also pull on the alveolar walls causing the alveoli to recoil and become smaller. But two factors prevent the lungs from collapsing: surfactant and the intrapleural pressure. Surfactant is a surface-active lipoprotein complex formed by type II alveolar cells. The proteins and lipids that comprise surfactant have both a hydrophilic region and a hydrophobic region. By absorbing to the air-water interface of alveoli with the hydrophilic head groups in the water and the hydrophobic tails facing towards the air, the main lipid component of surfactant, dipalmitoylphosphatidylcholine, reduces surface tension. It also means the rate of shrinking is more regular because of the stability of surface area caused by surfactant. Pleural pressure is the pressure in the pleural space. When this pressure is lower than the pressure of alveoli they tend to expand. This prevents the elastic fibers and outside pressure from crushing the lungs. It is a homeostatic mechanism. Notes and references Respiratory physiology The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. As a diver descends, the increase in pressure causes the body’s air pockets in the ears and lungs to do what? A. heat up B. blow up C. expand D. compress Answer:
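The diver question in this record is an application of Boyle's law (P1·V1 = P2·V2 at constant temperature): ambient pressure rises by roughly 1 atm for every 10 m of seawater, so flexible air spaces such as those in the ears and lungs compress on descent. The sketch below assumes that standard rule of thumb and ignores temperature changes.

```python
# Boyle's law applied to a diver's air pocket (isothermal approximation).
# Assumes ~1 atm of extra pressure per 10 m of seawater depth.

def compressed_volume(v_surface_liters: float, depth_m: float) -> float:
    p_surface = 1.0                       # atm at the surface
    p_depth = p_surface + depth_m / 10.0  # hydrostatic contribution
    return v_surface_liters * p_surface / p_depth  # P1*V1 = P2*V2

for depth in (0, 10, 20, 30):
    print(f"{depth:>2} m: {compressed_volume(1.0, depth):.2f} L")
# 0 m: 1.00 L, 10 m: 0.50 L, 20 m: 0.33 L, 30 m: 0.25 L
```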
sciq-8523
multiple_choice
What kind of mutations are produced by nucleotide-pair insertions or deletions?
[ "frameshift", "cloned", "shifty", "framing" ]
A
Relavent Documents: Document 0::: Silent mutations are mutations in DNA that do not have an observable effect on the organism's phenotype. They are a specific type of neutral mutation. The phrase silent mutation is often used interchangeably with the phrase synonymous mutation; however, synonymous mutations are not always silent, nor vice versa. Synonymous mutations can affect transcription, splicing, mRNA transport, and translation, any of which could alter phenotype, rendering the synonymous mutation non-silent. The substrate specificity of the tRNA to the rare codon can affect the timing of translation, and in turn the co-translational folding of the protein. This is reflected in the codon usage bias that is observed in many species. Mutations that cause the altered codon to produce an amino acid with similar functionality (e.g. a mutation producing leucine instead of isoleucine) are often classified as silent; if the properties of the amino acid are conserved, this mutation does not usually significantly affect protein function. Genetic code The genetic code translates mRNA nucleotide sequences to amino acid sequences. Genetic information is coded using this process with groups of three nucleotides along the mRNA which are commonly known as codons. The set of three nucleotides almost always produce the same amino acid with a few exceptions like UGA which typically serves as the stop codon but can also encode tryptophan in mammalian mitochondria. Most amino acids are specified by multiple codons demonstrating that the genetic code is degenerate–different codons result in the same amino acid. Codons that code for the same amino acid are termed synonyms. Silent mutations are base substitutions that result in no change of the amino acid or amino acid functionality when the altered messenger RNA (mRNA) is translated. For example, if the codon AAA is altered to become AAG, the same amino acid – lysine – will be incorporated into the peptide chain. Mutations are often linked to diseases or negativ Document 1::: A frameshift mutation (also called a framing error or a reading frame shift) is a genetic mutation caused by indels (insertions or deletions) of a number of nucleotides in a DNA sequence that is not divisible by three. Due to the triplet nature of gene expression by codons, the insertion or deletion can change the reading frame (the grouping of the codons), resulting in a completely different translation from the original. The earlier in the sequence the deletion or insertion occurs, the more altered the protein. A frameshift mutation is not the same as a single-nucleotide polymorphism in which a nucleotide is replaced, rather than inserted or deleted. A frameshift mutation will in general cause the reading of the codons after the mutation to code for different amino acids. The frameshift mutation will also alter the first stop codon ("UAA", "UGA" or "UAG") encountered in the sequence. The polypeptide being created could be abnormally short or abnormally long, and will most likely not be functional. Frameshift mutations are apparent in severe genetic diseases such as Tay–Sachs disease; they increase susceptibility to certain cancers and classes of familial hypercholesterolaemia; in 1997, a frameshift mutation was linked to resistance to infection by the HIV retrovirus. Frameshift mutations have been proposed as a source of biological novelty, as with the alleged creation of nylonase, however, this interpretation is controversial. 
A study by Negoro et al (2006) found that a frameshift mutation was unlikely to have been the cause and that rather a two amino acid substitution in the active site of an ancestral esterase resulted in nylonase. Background The information contained in DNA determines protein function in the cells of all organisms. Transcription and translation allow this information to be communicated into making proteins. However, an error in reading this communication can cause protein function to be incorrect and eventually cause disease even as the c Document 2::: In biology, and especially in genetics, a mutant is an organism or a new genetic character arising or resulting from an instance of mutation, which is generally an alteration of the DNA sequence of the genome or chromosome of an organism. It is a characteristic that would not be observed naturally in a specimen. The term mutant is also applied to a virus with an alteration in its nucleotide sequence whose genome is in the nuclear genome. The natural occurrence of genetic mutations is integral to the process of evolution. The study of mutants is an integral part of biology; by understanding the effect that a mutation in a gene has, it is possible to establish the normal function of that gene. Mutants arise by mutation Mutants arise by mutations occurring in pre-existing genomes as a result of errors of DNA replication or errors of DNA repair. Errors of replication often involve translesion synthesis by a DNA polymerase when it encounters and bypasses a damaged base in the template strand. A DNA damage is an abnormal chemical structure in DNA, such as a strand break or an oxidized base, whereas a mutation, by contrast, is a change in the sequence of standard base pairs. Errors of repair occur when repair processes inaccurately replace a damaged DNA sequence. The DNA repair process microhomology-mediated end joining is particularly error-prone. Etymology Although not all mutations have a noticeable phenotypic effect, the common usage of the word "mutant" is generally a pejorative term, only used for genetically or phenotypically noticeable mutations. Previously, people used the word "sport" (related to spurt) to refer to abnormal specimens. The scientific usage is broader, referring to any organism differing from the wild type. The word finds its origin in the Latin term mūtant- (stem of mūtāns), which means "to change". Mutants should not be confused with organisms born with developmental abnormalities, which are caused by errors during morphogenesis. In a devel Document 3::: Muton is a term in genetics that means the smallest unit in a chromosome that can be changed by mutations. The term Muton was created by Seymour Benzer in 1955 after his work about the mapping of bacteriophages T4. Document 4::: A postzygotic mutation (or post-zygotic mutation) is a change in an organism's genome that is acquired during its lifespan, instead of being inherited from its parent(s) through fusion of two haploid gametes. Mutations that occur after the zygote has formed can be caused by a variety of sources that fall under two classes: spontaneous mutations and induced mutations. How detrimental a mutation is to an organism is dependent on what the mutation is, where it occurred in the genome and when it occurred. Causes Postzygotic changes to a genome can be caused by small mutations that affect a single base pair, or large mutations that affect entire chromosomes and are divided into two classes, spontaneous mutations and induced mutations. 
Spontaneous Mutations
Most spontaneous mutations are the result of naturally occurring lesions to DNA and errors during DNA replication without direct exposure to an agent. A few common spontaneous mutations are:
- Depurination: the loss of a purine (A or G) base to form an apurinic site. An apurinic site, also known as an AP site, is the location in a genetic sequence that does not contain a purine base. During replication, the affected double-stranded DNA will produce one double-stranded daughter containing the missing purine, resulting in an unchanged sequence. The other strand will produce a shorter strand, missing the purine and its complementary base.
- Deamination: the amine group on a base is changed to a keto group. This results in cytosine being changed to uracil and adenine being changed to hypoxanthine, which can result in incorrect DNA replication and repair.
- Tautomerization: the hydrogen atom on a nucleotide base is repositioned, causing an altered hydrogen bonding pattern and incorrect base pairing during replication. For example, the keto tautomer of thymine normally pairs with adenine; however, the enol tautomer of thymine can bind with guanine. This results in an incorrect base pair match. Similarly, there are amino and imino tautomers of adenine and cytosine.

The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What kind of mutations are produced by nucleotide-pair insertions or deletions?
A. frameshift
B. cloned
C. shifty
D. framing
Answer:
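The distinction these excerpts draw between synonymous substitutions and frameshifts is easy to make concrete. The toy example below uses a made-up 12-base sequence and a deliberately minimal codon table (a small, correct subset of the standard genetic code, covering only the codons that occur here): swapping AAA for AAG still encodes lysine, while a single-base insertion regroups every downstream codon and destroys the stop codon.

```python
# Silent substitution vs. frameshift on a toy coding sequence.
CODON = {"ATG": "Met", "AAA": "Lys", "AAG": "Lys", "GGC": "Gly",
         "TAA": "Stop", "CAA": "Gln", "AGG": "Arg", "CTA": "Leu"}

def translate(dna: str) -> list:
    """Read non-overlapping triplets until a stop codon or the end."""
    peptide = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON[dna[i:i + 3]]
        peptide.append(aa)
        if aa == "Stop":
            break
    return peptide

wild_type  = "ATGAAAGGCTAA"
silent     = "ATGAAGGGCTAA"            # AAA -> AAG, still lysine
frameshift = "ATG" + "C" + "AAAGGCTAA" # single-base insertion after ATG

print(translate(wild_type))   # ['Met', 'Lys', 'Gly', 'Stop']
print(translate(silent))      # ['Met', 'Lys', 'Gly', 'Stop'] (same protein)
print(translate(frameshift))  # ['Met', 'Gln', 'Arg', 'Leu']  (frame shifted, stop lost)
```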
sciq-35
multiple_choice
The cells of all eukarya have a what?
[ "epidermis", "necrosis", "nucleus", "chloroplast" ]
C
Relavent Documents: Document 0::: Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure. General characteristics There are two types of cells: prokaryotes and eukaryotes. Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles. Prokaryotes Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement). Eukaryotes Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str Document 1::: Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence. Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism. Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry. 
See also Cell (biology) Cell biology Biomolecule Organelle Tissue (biology) External links https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm Document 2::: The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'. Cells can acquire specified function and carry out various tasks within the cell such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and mobility within the cell. Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms. The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology. Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago. Discovery With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i Document 3::: This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year. Lecturers Source: ASCB See also List of biology awards Document 4::: A cell type is a classification used to identify cells that share morphological or phenotypical features. A multicellular organism may contain cells of a number of widely differing and specialized cell types, such as muscle cells and skin cells, that differ both in appearance and function yet have identical genomic sequences. Cells may have the same genotype, but belong to different cell types due to the differential regulation of the genes they contain. Classification of a specific cell type is often done through the use of microscopy (such as those from the cluster of differentiation family that are commonly used for this purpose in immunology). Recent developments in single cell RNA sequencing facilitated classification of cell types based on shared gene expression patterns. This has led to the discovery of many new cell types in e.g. mouse cortex, hippocampus, dorsal root ganglion and spinal cord. 
Animals have evolved a greater diversity of cell types in a multicellular body (100–150 different cell types), compared with 10–20 in plants, fungi, and protists. The exact number of cell types is, however, undefined, and the Cell Ontology, as of 2021, lists over 2,300 different cell types. Multicellular organisms All higher multicellular organisms contain cells specialised for different functions. Most distinct cell types arise from a single totipotent cell that differentiates into hundreds of different cell types during the course of development. Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division). Multicellular organisms are composed of cells that fall into two fundamental types: germ cells and somatic cells. During development, somatic cells will become more specialized and form the three primary germ layers: ectoderm, mesoderm, and endoderm. After formation of the three germ layers, cells will continue to specialize.

The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The cells of all eukarya have a what?
A. epidermis
B. necrosis
C. nucleus
D. chloroplast
Answer:
sciq-6633
multiple_choice
What is split to produce nuclear energy?
[ "molecules", "chemicals", "atoms", "protons" ]
C
Relavent Documents: Document 0::: Atomic energy or energy of atoms is energy carried by atoms. The term originated in 1903 when Ernest Rutherford began to speak of the possibility of atomic energy. H. G. Wells popularized the phrase "splitting the atom", before discovery of the atomic nucleus. Atomic energy includes: Nuclear binding energy, the energy required to split a nucleus of an atom. Nuclear potential energy, the potential energy of the particles inside an atomic nucleus. Nuclear reaction, a process in which nuclei or nuclear particles interact, resulting in products different from the initial ones; see also nuclear fission and nuclear fusion. Radioactive decay, the set of various processes by which unstable atomic nuclei (nuclides) emit subatomic particles. The energy of inter-atomic or chemical bonds, which holds atoms together in compounds. Atomic energy is the source of nuclear power, which uses sustained nuclear fission to generate heat and electricity. It is also the source of the explosive force of an atomic bomb. Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: Nuclear binding energy in experimental physics is the minimum energy that is required to disassemble the nucleus of an atom into its constituent protons and neutrons, known collectively as nucleons. The binding energy for stable nuclei is always a positive number, as the nucleus must gain energy for the nucleons to move apart from each other. Nucleons are attracted to each other by the strong nuclear force. 
In theoretical nuclear physics, the nuclear binding energy is considered a negative number. In this context it represents the energy of the nucleus relative to the energy of the constituent nucleons when they are infinitely far apart. Both the experimental and theoretical views are equivalent, with slightly different emphasis on what the binding energy means. The mass of an atomic nucleus is less than the sum of the individual masses of the free constituent protons and neutrons. The difference in mass can be calculated by the Einstein equation, E = mc², where E is the nuclear binding energy, c is the speed of light, and m is the difference in mass. This 'missing mass' is known as the mass defect, and represents the energy that was released when the nucleus was formed. The term "nuclear binding energy" may also refer to the energy balance in processes in which the nucleus splits into fragments composed of more than one nucleon. If new binding energy is available when light nuclei fuse (nuclear fusion), or when heavy nuclei split (nuclear fission), either process can result in release of this binding energy. This energy may be made available as nuclear energy and can be used to produce electricity, as in nuclear power, or in a nuclear weapon. When a large nucleus splits into pieces, excess energy is emitted as gamma rays and the kinetic energy of various ejected particles (nuclear fission products). These nuclear binding energies and forces are on the order of one million times greater than the electron binding energies of light atoms like hydrogen.

Introduction
Nucl

Document 3::: In nuclear physics, separation energy is the energy needed to remove one nucleon (or other specified particle or particles) from an atomic nucleus. The separation energy is different for each nuclide and particle to be removed. Values are stated as "neutron separation energy", "two-neutron separation energy", "proton separation energy", "deuteron separation energy", "alpha separation energy", and so on. The lowest separation energy among stable nuclides is 1.67 MeV, to remove a neutron from beryllium-9. The energy can be added to the nucleus by an incident high-energy gamma ray. If the energy of the incident photon exceeds the separation energy, a photodisintegration might occur. Energy in excess of the threshold value becomes kinetic energy of the ejected particle. By contrast, nuclear binding energy is the energy needed to completely disassemble a nucleus, or the energy released when a nucleus is assembled from nucleons. It is the sum of multiple separation energies, which should add to the same total regardless of the order of assembly or disassembly.

Physics and chemistry
Electron separation energy or electron binding energy, the energy required to remove one electron from a neutral atom or molecule (or cation) is called ionization energy. The reaction leads to photoionization, photodissociation, the photoelectric effect, photovoltaics, etc. Bond-dissociation energy is the energy required to break one bond of a molecule or ion, usually separating an atom or atoms.

See also
Binding energy

External links
Nucleon separation energies charts of nuclides showing separation energies

Binding energy
Nuclear physics

Document 4::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. 
ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is split to produce nuclear energy? A. molecules B. chemicals C. atoms D. protons Answer:
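Below is a worked sketch of the mass-defect relation E = mc² quoted in Document 2 of the record above, applied to helium-4. The particle masses and the conversion factor are approximate textbook values assumed for illustration, not taken from the excerpt.

```python
# Sketch: binding energy of helium-4 from its mass defect, per E = mc^2.
# Mass values are approximate textbook figures (an assumption of this
# example) in atomic mass units (u).
M_PROTON = 1.007276       # u
M_NEUTRON = 1.008665      # u
M_HE4_NUCLEUS = 4.001506  # u, bare helium-4 nucleus

U_TO_MEV = 931.494        # energy equivalent of 1 u in MeV (from E = mc^2)

mass_defect = 2 * M_PROTON + 2 * M_NEUTRON - M_HE4_NUCLEUS
binding_energy_mev = mass_defect * U_TO_MEV

print(f"mass defect:    {mass_defect:.5f} u")        # ~0.03038 u
print(f"binding energy: {binding_energy_mev:.1f} MeV")  # ~28.3 MeV
```

The roughly 28.3 MeV result is the energy released when the nucleus forms, and the reservoir tapped when heavy nuclei are split, which is what the record's answer ("atoms") refers to.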
sciq-7902
multiple_choice
What is the net force when two forces act in the same direction?
[ "number of the forces", "sum of the forces", "arguing forces", "group of forces" ]
B
Relavent Documents: Document 0::: As described by the third of Newton's laws of motion of classical mechanics, all forces occur in pairs such that if one object exerts a force on another object, then the second object exerts an equal and opposite reaction force on the first. The third law is also more generally stated as: "To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts." The attribution of which of the two forces is the action and which is the reaction is arbitrary. Either of the two can be considered the action, while the other is its associated reaction. Examples Interaction with ground When something is exerting force on the ground, the ground will push back with equal force in the opposite direction. In certain fields of applied physics, such as biomechanics, this force by the ground is called 'ground reaction force'; the force by the object on the ground is viewed as the 'action'. When someone wants to jump, he or she exerts additional downward force on the ground ('action'). Simultaneously, the ground exerts upward force on the person ('reaction'). If this upward force is greater than the person's weight, this will result in upward acceleration. When these forces are perpendicular to the ground, they are also called a normal force. Likewise, the spinning wheels of a vehicle attempt to slide backward across the ground. If the ground is not too slippery, this results in a pair of friction forces: the 'action' by the wheel on the ground in backward direction, and the 'reaction' by the ground on the wheel in forward direction. This forward force propels the vehicle. Gravitational forces The Earth, among other planets, orbits the Sun because the Sun exerts a gravitational pull that acts as a centripetal force, holding the Earth to it, which would otherwise go shooting off into space. If the Sun's pull is considered an action, then Earth simultaneously exerts a reaction as a gravi Document 1::: The parallelogram of forces is a method for solving (or visualizing) the results of applying two forces to an object. When more than two forces are involved, the geometry is no longer parallelogrammatic, but the same principles apply. Forces, being vectors are observed to obey the laws of vector addition, and so the overall (resultant) force due to the application of a number of forces can be found geometrically by drawing vector arrows for each force. For example, see Figure 1. This construction has the same result as moving F2 so its tail coincides with the head of F1, and taking the net force as the vector joining the tail of F1 to the head of F2. This procedure can be repeated to add F3 to the resultant F1 + F2, and so forth. Newton's proof Preliminary: the parallelogram of velocity Suppose a particle moves at a uniform rate along a line from A to B (Figure 2) in a given time (say, one second), while in the same time, the line AB moves uniformly from its position at AB to a position at DC, remaining parallel to its original orientation throughout. Accounting for both motions, the particle traces the line AC. Because a displacement in a given time is a measure of velocity, the length of AB is a measure of the particle's velocity along AB, the length of AD is a measure of the line's velocity along AD, and the length of AC is a measure of the particle's velocity along AC. The particle's motion is the same as if it had moved with a single velocity along AC. 
Newton's proof of the parallelogram of force Suppose two forces act on a particle at the origin (the "tails" of the vectors) of Figure 1. Let the lengths of the vectors F1 and F2 represent the velocities the two forces could produce in the particle by acting for a given time, and let the direction of each represent the direction in which they act. Each force acts independently and will produce its particular velocity whether the other force acts or not. At the end of the given time, the particle has both v

Document 2::: In mechanics, the net force is the sum of all the forces acting on an object. For example, if two forces are acting upon an object in opposite directions, and one force is greater than the other, the forces can be replaced with a single force that is the difference of the greater and smaller force. That force is the net force. When forces act upon an object, they change its acceleration. The net force is the combined effect of all the forces on the object's acceleration, as described by Newton's second law of motion. When the net force is applied at a specific point on an object, the associated torque can be calculated. The sum of the net force and torque is called the resultant force, which causes the object to rotate in the same way as all the forces acting upon it would if they were applied individually. It is possible for all the forces acting upon an object to produce no torque at all. This happens when the net force is applied along the line of action. In some texts, the terms resultant force and net force are used as if they mean the same thing. This is not always true, especially in complex topics like the motion of spinning objects or situations where everything is perfectly balanced, known as static equilibrium. In these cases, it's important to understand that "net force" and "resultant force" can have distinct meanings.

The Concept of Total Force In physics, a force is considered a vector quantity. This means that it not only has a size (or magnitude) but also a direction in which it acts. We typically represent force with the symbol F in boldface, or sometimes, we place an arrow over the symbol to indicate its vector nature, like this: F⃗. When we need to visually represent a force, we draw a line segment. This segment starts at a point A, where the force is applied, and ends at another point B. This line not only gives us the direction of the force (from A to B) but also its magnitude: the longer the line, the stronger the force. One of the

Document 3::: In physics, tension is described as the pulling force transmitted axially by the means of a string, a rope, chain, or similar object, or by each end of a rod, truss member, or similar three-dimensional object; tension might also be described as the action-reaction pair of forces acting at each end of said elements. Tension could be the opposite of compression. At the atomic level, when atoms or molecules are pulled apart from each other and gain potential energy with a restoring force still existing, the restoring force might create what is also called tension. Each end of a string or rod under such tension could pull on the object it is attached to, in order to restore the string/rod to its relaxed length. Tension (as a transmitted force, as an action-reaction pair of forces, or as a restoring force) is measured in newtons in the International System of Units (or pounds-force in Imperial units). 
The ends of a string or other object transmitting tension will exert forces on the objects to which the string or rod is connected, in the direction of the string at the point of attachment. These forces due to tension are also called "passive forces". There are two basic possibilities for systems of objects held by strings: either acceleration is zero and the system is therefore in equilibrium, or there is acceleration, and therefore a net force is present in the system. Tension in one dimension Tension in a string is a non-negative vector quantity. Zero tension is slack. A string or rope is often idealized as one dimension, having length but being massless with zero cross section. If there are no bends in the string, as occur with vibrations or pulleys, then tension is a constant along the string, equal to the magnitude of the forces applied by the ends of the string. By Newton's third law, these are the same forces exerted on the ends of the string by the objects to which the ends are attached. If the string curves around one or more pulleys, it will still have const Document 4::: In mechanical engineering, a parallel force system is a situation in which two forces of equal magnitude act in the same direction within the same plane, with the counter force in the middle. An example of this is a see saw. The children are applying the two forces at the ends, and the fulcrum in the middle gives the counter force to maintain the see saw in neutral position. Another example are the major vertical forces on an airplane in flight (see image at right). The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the net force when two forces act in the same direction? A. number of the forces B. sum of the forces C. arguing forces D. group of forces Answer:
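A minimal sketch of the vector addition described in Documents 1 and 2 of the record above: forces acting in the same direction add directly in magnitude, while the parallelogram construction covers the general case. The force values are illustrative assumptions.

```python
import math

def net_force(forces):
    """Componentwise vector sum of 2-D forces given as (fx, fy) tuples, in newtons."""
    return (sum(f[0] for f in forces), sum(f[1] for f in forces))

# Two forces in the same direction: the net magnitude is simply the sum.
fx, fy = net_force([(3.0, 0.0), (5.0, 0.0)])
print(math.hypot(fx, fy))  # 8.0 N, i.e. 3 N + 5 N

# Perpendicular forces: the parallelogram construction gives the diagonal.
fx, fy = net_force([(3.0, 0.0), (0.0, 4.0)])
print(math.hypot(fx, fy))  # 5.0 N
```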
sciq-6977
multiple_choice
What types of orbits do comets usually have?
[ "spectral", "vertical", "elliptical", "convex" ]
C
Relavent Documents: Document 0::: This is a list of comets (bodies that travel in elliptical, parabolic, and sometimes hyperbolic orbits and display a tail behind them) listed by type. Comets are sorted into four categories: periodic comets (e.g. Halley's Comet), non-periodic comets (e.g. Comet Hale–Bopp), comets with no meaningful orbit (the Great Comet of 1106), and lost comets (5D/Brorsen), displayed as either P (periodic), C (non-periodic), X (no orbit), and D (lost). Many of the earlier comets to be observed in history are designated with an X or D due to not having the tools to measure a comet's orbit accurately and eventually losing it. X/1106 C1 (the Great Comet of 1106) is a good example. The orbital elements for the older non-periodic comets in the list assume that the comet has an eccentricity of roughly 1, and therefore the calculations are only approximate. Guide to comet lists Hyperbolic comet list—Comets that are hyperbolic Near-parabolic comet list—Comets that have a period of over 1000 years Long-period comet list—Comets with a period between 200 and 1000 years List of periodic comets—Unnumbered comets with a period of less than 200 years List of numbered comets—Comets numbered by the Minor Planet Center Sungrazing comets Kreutz sungrazers Meyer group (below) Kracht group (below) Marsden group (below) Ungrouped sungrazers (below) After Edmond Halley recognized that several apparitions of a comet every 75.3 years were the same comet, it gave way to a new designation of periodic comets, with the first being named 1P/Halley. To date, there are 440 of these periodic comets, with many more on the way to getting an official designation. Non-periodic comets Non-periodic comets are generally comets that have only been seen on one occasion, or comets that have periods of thousands of years, to comets that are truly non-periodic, and will only come around the Solar System. The following comets are organized by their described types: Ejection-trajectory comets These are comets with an e Document 1::: This is a list of parabolic and hyperbolic comets in the Solar System. Many of these comets may come from the Oort cloud, or perhaps even have interstellar origin. The Oort Cloud is not gravitationally attracted enough to the Sun to form into a fairly thin disk, like the inner Solar System. Thus, comets originating from the Oort Cloud can come from roughly any orientation (inclination to the ecliptic), and many even have a retrograde orbit. By definition, a hyperbolic orbit means that the comet will only travel through the Solar System once, with the Sun acting as a gravitational slingshot, sending the comet hurtling out of the Solar System entirely unless its eccentricity is otherwise changed. Comets orbiting in this way still originate from the Solar System, however. Typically comets in the Oort Cloud are thought to have roughly circular orbits around the Sun, but their orbital velocity is so slow that they may easily be perturbed by passing stars and the galactic tide. Astronomers have been discovering weakly hyperbolic comets that were perturbed out of the Oort Cloud since the mid-1800s. Prior to finding a well-determined orbit for comets, the JPL Small-Body Database and the Minor Planet Center list comet orbits as having an assumed eccentricity of 1.0. (This is the eccentricity of a parabolic trajectory; hyperbolics will be those with eccentricity greater than 1.0.) 
In the list below, a number of comets discovered by the SOHO space telescope have assumed eccentricities of exactly 1.0, because most orbits are based only on an insufficient observation arc of several hours or minutes. The SOHO satellite observes the corona of the Sun and the area around it, and as a result often observes sungrazing comets, including the Kreutz sungrazers. The Kreutz sungrazers originate from the progenitor of the Great Comet of 1106. Although officially given an assumed eccentricity of 1.0, they have an orbital period of roughly 750 years (which would give an actual eccentricit

Document 2::: This is a list of periodic comets that were numbered by the Minor Planet Center after having been observed on at least two occasions. Their orbital periods vary from 3.2 to 366 years. There are 471 numbered comets (1P–471P). There are 405 Jupiter-family comets (JFCs), 38 Encke-type comets (ETCs), 14 Halley-type comets (HTCs), five Chiron-type comets (CTCs), and one long-period comet (153P). 75 bodies are also near-Earth comets (NECs). In addition, eight numbered comets are principally classified as minor planets – five main-belt comets, two centaurs (CEN), and one Apollo asteroid – and display characteristics of both an asteroid and a comet.

Occasionally, comets will break up into multiple chunks, as volatiles coming off the comet and rotational forces may cause it to break into two or more pieces. An extreme example of this is 73P/Schwassmann–Wachmann, which broke into over 50 pieces during its 1995 perihelion.

For a larger list of periodic Jupiter-family and Halley-type comets including unnumbered bodies, see list of periodic comets.

List

Multiples

51P/Harrington
This is a list of (3 entries) with all its cometary fragments listed at JPL's SBDB.

57P/du Toit–Neujmin–Delporte
This is a list of (2 entries) with all its cometary fragments listed at JPL's SBDB.

73P/Schwassmann–Wachmann
In 1995, comet 73P/Schwassmann–Wachmann broke up into several pieces and as of its last perihelion date, the pieces numbered at least 67 with 73P/Schwassmann–Wachmann C as the presumed original nucleus. Because of the enormous number, the pieces of it have been compiled into a separate list. This is a list of (68 entries) with all its cometary fragments listed at JPL's SBDB.

101P/Chernykh
This is a list of (2 entries) with all its cometary fragments listed at JPL's SBDB.

128P/Shoemaker–Holt
Minor planets in comet-like orbits similar to HTCs that never come close enough to the Sun to outgas are called centaurs. HTCs are named after the first discovered member, and the first discovered periodic comet, Halley's Comet, which orbits the Sun in about 75 years, and passing as far as the orbit of Neptune. Most of the comets that have a period between 20 and 200 years (making them HTCs based on the classical definition) are actually officially classified as either Jupiter-family comets (JFCs) or Chiron-type comets (CTCs), based on their Jupiter Tisserand's parameter (TJupiter). Although JFCs are classically defined by (P < 20 y), they're officially defined by (2 < TJupiter < 3). CTCs, on the other hand, are officially defined by (TJupiter > 3; a > aJupiter). Since they do not include any period-related constraints, some of the 20–200 year-period comets unfortunately match one of the classifications, making comet classifications even more vague. Numbered HTCs For the 14 numbered HTCs, see the list of numbered comets, where they are labelled "HTC" in column "class". Unnumbered HTCs This list contains only Halley-type comets which are not numbered yet because they have been observed only once. Comets that belong to a different comet classification based on its Jupiter Tisserand parameter are given its alternative classification next to the comets' name. See also List of comets by type List of near-parabolic comets List of hyperbolic comets The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What types of orbits do comets usually have? A. spectral B. vertical C. elliptical D. convex Answer:
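The comet lists in the record above sort trajectories by eccentricity: elliptical below 1, parabolic at exactly 1 (often an assumed value for poorly observed comets), and hyperbolic above 1. A minimal sketch of that classification rule follows; the tolerance and the sample values are illustrative assumptions.

```python
def classify_orbit(e, tol=1e-9):
    """Classify a trajectory by eccentricity e (tolerance is an assumption)."""
    if e < 1.0 - tol:
        return "elliptical (bound, periodic)"
    if e > 1.0 + tol:
        return "hyperbolic (unbound, single pass)"
    return "parabolic (borderline; often an assumed e = 1.0)"

# Illustrative values: Halley's Comet (~0.967) and an assumed SOHO orbit.
for name, e in [("Halley's Comet", 0.967), ("assumed SOHO comet", 1.0)]:
    print(f"{name}: {classify_orbit(e)}")
```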
sciq-8861
multiple_choice
What kind of mammalian reproduction is risky for the offspring but not the mother?
[ "sexual", "asexual", "cactaceae", "monotreme" ]
D
Relavent Documents: Document 0::: The "Vicar of Bray" hypothesis (or Fisher-Muller Model) attempts to explain why sexual reproduction might have advantages over asexual reproduction. Reproduction is the process by which organisms give rise to offspring. Asexual reproduction involves a single parent and results in offspring that are genetically identical to each other and to the parent. In contrast to asexual reproduction, sexual reproduction involves two parents. Both the parents produce gametes through meiosis, a special type of cell division that reduces the chromosome number by half. During an early stage of meiosis, before the chromosomes are separated in the two daughter cells, the chromosomes undergo genetic recombination. This allows them to exchange some of their genetic information. Therefore, the gametes from a single organism are all genetically different from each other. The process in which the two gametes from the two parents unite is called fertilization. Half of the genetic information from both parents is combined. This results in offspring that are genetically different from each other and from the parents. In short, sexual reproduction allows a continuous rearrangement of genes. Therefore, the offspring of a population of sexually reproducing individuals will show a more varied selection of phenotypes. Due to faster attainment of favorable genetic combinations, sexually reproducing populations evolve more rapidly in response to environmental changes. Under the Vicar of Bray hypothesis, sex benefits a population as a whole, but not individuals within it, making it a case of group selection.

Disadvantage of sexual reproduction
Sexual reproduction often takes a lot of effort. Finding a mate can sometimes be an expensive, risky and time-consuming process. Courtship, copulation and taking care of the newborn offspring may also take up a lot of time and energy. From this point of view, asexual reproduction may seem a lot easier and more efficient. But another important thing to co

Document 1::: In genetics, paternal mtDNA transmission and paternal mtDNA inheritance refer to the incidence of mitochondrial DNA (mtDNA) being passed from a father to his offspring. Paternal mtDNA inheritance is observed in a small proportion of species; in general, mtDNA is passed unchanged from a mother to her offspring, making it an example of non-Mendelian inheritance. In contrast, mtDNA transmission from both parents occurs regularly in certain bivalves.

In animals
Paternal mtDNA inheritance in animals varies. For example, in Mytilidae mussels, paternal mtDNA "is transmitted through the sperm and establishes itself only in the male gonad." In testing 172 sheep, "The Mitochondrial DNA from three lambs in two half-sib families were found to show paternal inheritance." An instance of paternal leakage was found in a study on chickens. There is evidence that paternal leakage is an integral part of mitochondrial inheritance in Drosophila simulans.

In humans
In human mitochondrial genetics, there is debate over whether or not paternal mtDNA transmission is possible. Many studies hold that paternal mtDNA is never transmitted to offspring. This thought is central to mtDNA genealogical DNA testing and to the theory of mitochondrial Eve. The fact that mitochondrial DNA is maternally inherited enables researchers to trace maternal lineage far back in time. Y chromosomal DNA, paternally inherited, is used in an analogous way to trace the agnate lineage. 
In sexual reproduction, paternal mitochondria found in the sperm are actively decomposed, thus preventing "paternal leakage". Mitochondria in mammalian sperm are usually destroyed by the egg cell after fertilization. In 1999 it was reported that paternal sperm mitochondria (containing mtDNA) are marked with ubiquitin to select them for later destruction inside the embryo. Some in vitro fertilization (IVF) techniques, particularly intracytoplasmic sperm injection (ICSI) of a sperm into an oocyte, may interfere with thi

Document 2::: Sperm heteromorphism is the simultaneous production of two or more distinguishable types of sperm by a single male. The sperm types might differ in size, shape and/or chromosome complement. Sperm heteromorphism is also called sperm polymorphism or sperm dimorphism (for species with two sperm types). Typically, only one sperm type is capable of fertilizing eggs. Fertile types have been called "eusperm" or "eupyrene sperm" and infertile types "parasperm" or "apyrene sperm". One interpretation of sperm polymorphism is the "kamikaze sperm" hypothesis (Baker and Bellis, 1988), which has been widely discredited in humans. The kamikaze sperm hypothesis states that the polymorphism of sperm is due to a subdivision of sperm into different functional groups. There are those that defend the egg from fertilization by other male sperm, and those that fertilize the egg. However, there is no evidence that the polymorphism of human sperm is for the purpose of antagonizing rival sperm.

Distribution
Sperm heteromorphism is known from several different groups of animals.

Insects
Lepidoptera (i.e. butterflies and moths): Almost all known species produce two sperm types. The fertilizing type has a longer tail and contains a nucleus. The other type is shorter and lacks a nucleus, meaning it contains no genetic information at all.
Drosophila (fruit-flies): the D. obscura group of species in the genus Drosophila is sperm heteromorphic. As with the Lepidoptera, there is a long, fertile type and a short, infertile type. However, the infertile type has a nucleus with a normal, haploid chromosome complement. It is not known why the shorter sperm are infertile, though it has been suggested that the slightly wider head of the infertile type might prevent it from entering the micropyle of the egg.
Diopsidae (stalk-eyed flies): several species have a long, fertile type and a shorter infertile type.
Carabidae (ground beetles): some species produce large, infertile sperm that may contain up to 10

Document 3::: Reproductive biology includes both sexual and asexual reproduction.
Reproductive biology includes a wide number of fields:
Reproductive systems
Endocrinology
Sexual development (Puberty)
Sexual maturity
Reproduction
Fertility

Human reproductive biology

Endocrinology
Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands.

Reproductive systems
Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring. 
Female reproductive system The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth. These structures include: Ovaries Oviducts Uterus Vagina Mammary Glands Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female. Male reproductive system The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia. Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract. Animal Reproductive Biology Animal reproduction oc Document 4::: Extranuclear inheritance or cytoplasmic inheritance is the transmission of genes that occur outside the nucleus. It is found in most eukaryotes and is commonly known to occur in cytoplasmic organelles such as mitochondria and chloroplasts or from cellular parasites like viruses or bacteria. Organelles Mitochondria are organelles which function to transform energy as a result of cellular respiration. Chloroplasts are organelles which function to produce sugars via photosynthesis in plants and algae. The genes located in mitochondria and chloroplasts are very important for proper cellular function. The mitochondrial DNA and other extranuclear types of DNA replicate independently of the DNA located in the nucleus, which is typically arranged in chromosomes that only replicate one time preceding cellular division. The extranuclear genomes of mitochondria and chloroplasts however replicate independently of cell division. They replicate in response to a cell's increasing energy needs which adjust during that cell's lifespan. Since they replicate independently, genomic recombination of these genomes is rarely found in offspring, contrary to nuclear genomes in which recombination is common. Mitochondrial diseases are inherited from the mother, not from the father. Mitochondria with their mitochondrial DNA are already present in the egg cell before it gets fertilized by a sperm. In many cases of fertilization, the head of the sperm enters the egg cell; leaving its middle part, with its mitochondria, behind. The mitochondrial DNA of the sperm often remains outside the zygote and gets excluded from inheritance. Parasites Extranuclear transmission of viral genomes and symbiotic bacteria is also possible. An example of viral genome transmission is perinatal transmission. This occurs from mother to fetus during the perinatal period, which begins before birth and ends about 1 month after birth. During this time viral material may be passed from mother to child in the bloodst The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What kind of mammalian reproduction is risky for the offspring but not the mother? A. sexual B. asexual C. cactaceae D. monotreme Answer:
sciq-3917
multiple_choice
What does an electric conductor cross to generate current?
[ "magnetic zone waves", "magnetic field lines", "waves field lines", "magnetic polar waves" ]
B
Relavent Documents: Document 0::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with Document 1::: The following is a chronology of discoveries concerning the magnetosphere. 1600 - William Gilbert in London suggests the Earth is a giant magnet. 1741 - Hiorter and Anders Celsius note that the polar aurora is accompanied by a disturbance of the magnetic needle. 1820 - Hans Christian Ørsted discovers electric currents create magnetic effects. André-Marie Ampère deduces that magnetism is basically the force between electric currents. 1833 - Carl Friedrich Gauss and Wilhelm Weber worked out the mathematical theory for separating the inner and outer Magnetosphere sources of Earth's magnetic field. 1843 - Samuel Schwabe, a German amateur astronomer, shows the existence of an 11-year sunspot cycle. 1859 - Richard Carrington in England observes a solar flare; 17 hours later a large magnetic storm begins. 1892 - George Ellery Hale introduces the spectroheliograph, observing the Sun in hydrogen light from the chromosphere, a sensitive way of detecting flares. He confirms the connection between flares and magnetic storms. 1900-3 - Kristian Birkeland experiments with beams of electrons aimed at a magnetized sphere ("terrella") in a vacuum chamber. The electrons hit near the magnetic poles, leading him to propose that the polar aurora is created by electron beams from the Sun. Birkeland also observes magnetic disturbances associated with the aurora, suggesting to him that localized "polar magnetic storms" exist in the auroral zone. 
1902 - Marconi successfully sends radio signals across the Atlantic Ocean. Oliver Heaviside suggests that the radio waves found their way around the curving Earth because they were reflected from an electrically conducting layer at the top of the atmosphere.
1926 - Gregory Breit and Merle Tuve measure the distance to the conducting layer—which R. Watson-Watt proposes naming "ionosphere"—by measuring the time needed for a radio signal to bounce back.
1930-1 - After Birkeland's "electron beam" theory is disproved, Sydney Chapman and Vincent Ferrar

Document 2::: In physics, specifically electromagnetism, the magnetic flux through a surface is the surface integral of the normal component of the magnetic field B over that surface. It is usually denoted Φ or ΦB. The SI unit of magnetic flux is the weber (Wb; in derived units, volt–seconds), and the CGS unit is the maxwell. Magnetic flux is usually measured with a fluxmeter, which contains measuring coils, and it calculates the magnetic flux from the change of voltage on the coils.

Description
The magnetic interaction is described in terms of a vector field, where each point in space is associated with a vector that determines what force a moving charge would experience at that point (see Lorentz force). Since a vector field is quite difficult to visualize, introductory physics instruction often uses field lines to visualize this field. The magnetic flux through some surface, in this simplified picture, is proportional to the number of field lines passing through that surface (in some contexts, the flux may be defined to be precisely the number of field lines passing through that surface; although technically misleading, this distinction is not important). The magnetic flux is the net number of field lines passing through that surface; that is, the number passing through in one direction minus the number passing through in the other direction (see below for deciding in which direction the field lines carry a positive sign and in which they carry a negative sign). More sophisticated physical models drop the field line analogy and define magnetic flux as the surface integral of the normal component of the magnetic field passing through a surface. If the magnetic field is constant, the magnetic flux passing through a surface of vector area S is ΦB = BS cos θ, where B is the magnitude of the magnetic field (the magnetic flux density) having the unit of Wb/m2 (tesla), S is the area of the surface, and θ is the angle between the magnetic field lines and the normal (perpendicular) to S. For a vary

Document 3::: In classical electromagnetism, Ampère's circuital law (not to be confused with Ampère's force law) relates the circulation of a magnetic field around a closed loop to the electric current passing through the loop. James Clerk Maxwell (not Ampère) derived it using hydrodynamics in his 1861 published paper "On Physical Lines of Force". In 1865 he generalized the equation to apply to time-varying currents by adding the displacement current term, resulting in the modern form of the law, sometimes called the Ampère–Maxwell law, which is one of Maxwell's equations which form the basis of classical electromagnetism.

Ampère's original circuital law
In 1820 Danish physicist Hans Christian Ørsted discovered that an electric current creates a magnetic field around it, when he noticed that the needle of a compass next to a wire carrying current turned so that the needle was perpendicular to the wire.
He investigated and discovered the rules which govern the field around a straight current-carrying wire: The magnetic field lines encircle the current-carrying wire. The magnetic field lines lie in a plane perpendicular to the wire. If the direction of the current is reversed, the direction of the magnetic field reverses. The strength of the field is directly proportional to the magnitude of the current. The strength of the field at any point is inversely proportional to the distance of the point from the wire. This sparked a great deal of research into the relation between electricity and magnetism. André-Marie Ampère investigated the magnetic force between two current-carrying wires, discovering Ampère's force law. In the 1850s Scottish mathematical physicist James Clerk Maxwell generalized these results and others into a single mathematical law. The original form of Maxwell's circuital law, which he derived as early as 1855 in his paper "On Faraday's Lines of Force" based on an analogy to hydrodynamics, relates magnetic fields to electric currents that produce them. It Document 4::: The coherer was a primitive form of radio signal detector used in the first radio receivers during the wireless telegraphy era at the beginning of the 20th century. Its use in radio was based on the 1890 findings of French physicist Édouard Branly and adapted by other physicists and inventors over the next ten years. The device consists of a tube or capsule containing two electrodes spaced a small distance apart with loose metal filings in the space between. When a radio frequency signal is applied to the device, the metal particles would cling together or "cohere", reducing the initial high resistance of the device, thereby allowing a much greater direct current to flow through it. In a receiver, the current would activate a bell, or a Morse paper tape recorder to make a record of the received signal. The metal filings in the coherer remained conductive after the signal (pulse) ended so that the coherer had to be "decohered" by tapping it with a clapper actuated by an electromagnet, each time a signal was received, thereby restoring the coherer to its original state. Coherers remained in widespread use until about 1907, when they were replaced by more sensitive electrolytic and crystal detectors. History The behavior of particles or metal filings in the presence of electricity or electric sparks was noticed in many experiments well before Édouard Branly's 1890 paper and even before there was proof of the theory of electromagnetism. In 1835 Swedish scientist Peter Samuel Munk noticed a change of resistance in a mixture of metal filings in the presence of spark discharge from a Leyden jar. In 1850 Pierre Guitard found that when dusty air was electrified, the particles would tend to collect in the form of strings. The idea that particles could react to electricity was used in English engineer Samuel Alfred Varley's 1866 lightning bridge, a lightning arrester attached to telegraph lines consisting of a piece of wood with two metal spikes extending into a chamber. The The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What does an electric conductor cross to generate current? A. magnetic zone waves B. magnetic field lines C. waves field lines D. magnetic polar waves Answer:
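Document 2 of the record above states the constant-field flux relation ΦB = BS cos θ, which underlies the idea that a conductor crossing field lines sees a changing flux and therefore carries an induced current. A minimal sketch of that formula follows; the field strength, area, and angle values are illustrative assumptions.

```python
import math

def magnetic_flux(b_tesla, area_m2, theta_rad):
    """Flux in webers through a flat surface in a uniform field: B * S * cos(theta)."""
    return b_tesla * area_m2 * math.cos(theta_rad)

print(magnetic_flux(0.5, 0.2, 0.0))          # 0.1 Wb: field along the surface normal
print(magnetic_flux(0.5, 0.2, math.pi / 2))  # ~0 Wb: field parallel to the surface
```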
sciq-1922
multiple_choice
The structure of the gas carbon dioxide consists of one atom of carbon and two atoms of what?
[ "sulfur", "oxygen", "helium", "methane" ]
B
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: A carbon–carbon bond is a covalent bond between two carbon atoms. The most common form is the single bond: a bond composed of two electrons, one from each of the two atoms. The carbon–carbon single bond is a sigma bond and is formed between one hybridized orbital from each of the carbon atoms. In ethane, the orbitals are sp3-hybridized orbitals, but single bonds formed between carbon atoms with other hybridizations do occur (e.g. sp2 to sp2). In fact, the carbon atoms in the single bond need not be of the same hybridization. Carbon atoms can also form double bonds in compounds called alkenes or triple bonds in compounds called alkynes. A double bond is formed with an sp2-hybridized orbital and a p-orbital that is not involved in the hybridization. A triple bond is formed with an sp-hybridized orbital and two p-orbitals from each atom. The use of the p-orbitals forms a pi bond. Chains and branching Carbon is one of the few elements that can form long chains of its own atoms, a property called catenation. This coupled with the strength of the carbon–carbon bond gives rise to an enormous number of molecular forms, many of which are important structural elements of life, so carbon compounds have their own field of study: organic chemistry. Branching is also common in C−C skeletons. Carbon atoms in a molecule are categorized by the number of carbon neighbors they have: A primary carbon has one carbon neighbor. A secondary carbon has two carbon neighbors. 
A tertiary carbon has three carbon neighbors. A quaternary carbon has four carbon neighbors. In "structurally complex organic molecules", it is the three-dimensional orientation of the carbon–carbon bonds at quaternary loci which dictates the shape of the molecule. Further, quaternary loci are found in many biologically active small molecules, such as cortisone and morphine. Synthesis Carbon–carbon bond-forming reactions are organic reactions in which a new carbon–carbon bond is formed. They are important in th Document 2::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 3::: Carbon is a primary component of all known life on Earth, representing approximately 45–50% of all dry biomass. Carbon compounds occur naturally in great abundance on Earth. Complex biological molecules consist of carbon atoms bonded with other elements, especially oxygen and hydrogen and frequently also nitrogen, phosphorus, and sulfur (collectively known as CHNOPS). Because it is lightweight and relatively small in size, carbon molecules are easy for enzymes to manipulate. It is frequently assumed in astrobiology that if life exists elsewhere in the Universe, it will also be carbon-based. Critics refer to this assumption as carbon chauvinism. Characteristics Carbon is capable of forming a vast number of compounds, more than any other element, with almost ten million compounds described to date, and yet that number is but a fraction of the number of theoretically possible compounds under standard conditions. 
The enormous diversity of carbon-containing compounds, known as organic compounds, has led to a distinction between them and compounds that do not contain carbon, known as inorganic compounds. The branch of chemistry that studies organic compounds is known as organic chemistry. Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass, after hydrogen, helium, and oxygen. Carbon's widespread abundance, its ability to form stable bonds with numerous other elements, and its unusual ability to form polymers at the temperatures commonly encountered on Earth enables it to serve as a common element of all known living organisms. In a 2018 study, carbon was found to compose approximately 550 billion tons of all life on Earth. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen. The most important characteristics of carbon as a basis for the chemistry of life are that each carbon atom is capable of forming up to four valence bonds with other atoms simultaneously

Document 4::: Atomicity is the total number of atoms present in a molecule. For example, each molecule of oxygen (O2) is composed of two oxygen atoms. Therefore, the atomicity of oxygen is 2. In older contexts, atomicity is sometimes equivalent to valency. Some authors also use the term to refer to the maximum number of valencies observed for an element.

Classifications
Based on atomicity, molecules can be classified as:
Monoatomic (composed of one atom). Examples include He (helium), Ne (neon), Ar (argon), and Kr (krypton). All noble gases are monoatomic.
Diatomic (composed of two atoms). Examples include H2 (hydrogen), N2 (nitrogen), O2 (oxygen), F2 (fluorine), and Cl2 (chlorine). Halogens are usually diatomic.
Triatomic (composed of three atoms). Examples include O3 (ozone).
Polyatomic (composed of three or more atoms). Examples include S8.

Atomicity may vary in different allotropes of the same element. The exact atomicity of metals, as well as some other elements such as carbon, cannot be determined because they consist of a large and indefinite number of atoms bonded together. They are typically designated as having an atomicity of 1. The atomicity of a homonuclear molecule can be derived by dividing the molecular weight by the atomic weight. For example, the molecular weight of oxygen is 31.999, while its atomic weight is 15.999; therefore, its atomicity is approximately 2 (31.999/15.999 ≈ 2).

Examples
The most common values of atomicity for the first 30 elements in the periodic table are as follows: The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The structure of the gas carbon dioxide consists of one atom of carbon and two atoms of what?
A. sulfur
B. oxygen
C. helium
D. methane
Answer:
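Document 4 of the record above derives the atomicity of a homonuclear molecule by dividing molecular weight by atomic weight. A minimal sketch of that arithmetic, using the corrected oxygen atomic weight of 15.999 u; the ozone value is an illustrative assumption.

```python
def atomicity(molecular_weight, atomic_weight):
    """Atoms per homonuclear molecule: molecular weight / atomic weight."""
    return round(molecular_weight / atomic_weight)

print(atomicity(31.999, 15.999))  # 2 -> O2 (oxygen) is diatomic
print(atomicity(47.997, 15.999))  # 3 -> O3 (ozone) is triatomic
```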
sciq-2061
multiple_choice
The vertebrate endoskeleton can also be called what?
[ "internal skeleton", "deep skeleton", "structural skeleton", "exoskeleton" ]
A
Relavent Documents: Document 0::: Work He is an associate professor of anatomy, Department of Anatomy, Howard University College of Medicine (US). He was among the most cited/influential anatomists in 2019. Books Single author or co-author books DIOGO, R. (2021). Meaning of Life, Human Nature and Delusions - How Tales about Love, Sex, Races, Gods and Progress Affect Us and Earth's Splendor. Springer (New York, US). MONTERO, R., ADESOMO, A. & R. DIOGO (2021). On viruses, pandemics, and us: a developing story [De virus, pandemias y nosotros: una historia en desarollo]. Independently published, Tucuman, Argentina. 495 pages. DIOGO, R., J. ZIERMANN, J. MOLNAR, N. SIOMAVA & V. ABDALA (2018). Muscles of Chordates: development, homologies and evolution. Taylor & Francis (Oxford, UK). 650 pages. DIOGO, R., B. SHEARER, J. M. POTAU, J. F. PASTOR, F. J. DE PAZ, J. ARIAS-MARTORELL, C. TURCOTTE, A. HAMMOND, E. VEREECKE, M. VANHOOF, S. NAUWELAERTS & B. WOOD (2017). Photographic and descriptive musculoskeletal atlas of bonobos - with notes on the weight, attachments, variations, and innervation of the muscles and comparisons with common chimpanzees and humans. Springer (New York, US). 259 pages. DIOGO, R. (2017). Evolution driven by organismal behavior: a unifying view of life, function, form, mismatches and trends. Springer Document 1::: Arthropods are covered with a tough, resilient integument or exoskeleton of chitin. Generally the exoskeleton will have thickened areas in which the chitin is reinforced or stiffened by materials such as minerals or hardened proteins. This happens in parts of the body where there is a need for rigidity or elasticity. Typically the mineral crystals, mainly calcium carbonate, are deposited among the chitin and protein molecules in a process called biomineralization. The crystals and fibres interpenetrate and reinforce each other, the minerals supplying the hardness and resistance to compression, while the chitin supplies the tensile strength. Biomineralization occurs mainly in crustaceans. In insects and arachnids, the main reinforcing materials are various proteins hardened by linking the fibres in processes called sclerotisation and the hardened proteins are called sclerotin. The dorsal tergum, ventral sternum, and the lateral pleura form the hardened plates or sclerites of a typical body segment. In either case, in contrast to the carapace of a tortoise or the cranium of a vertebrate, the exoskeleton has little ability to grow or change its form once it has matured. Except in special cases, whenever the animal needs to grow, it moults, shedding the old skin after growing a new skin from beneath. Microscopic structure A typical arthropod exoskeleton is a multi-layered structure with four functional regions: epicuticle, procuticle, epidermis and basement membrane. Of these, the epicuticle is a multi-layered external barrier that, especially in terrestrial arthropods, acts as a barrier against desiccation. The strength of the exoskeleton is provided by the underlying procuticle, which is in turn secreted by the epidermis. Arthropod cuticle is a biological composite material, consisting of two main portions: fibrous chains of alpha-chitin within a matrix of silk-like and globular proteins, of which the best-known is the rubbery protein called resilin. The rel Document 2::: Comparative foot morphology involves comparing the form of distal limb structures of a variety of terrestrial vertebrates. 
Understanding the role that the foot plays for each type of organism must take account of the differences in body type, foot shape, arrangement of structures, loading conditions and other variables. However, similarities also exist among the feet of many different terrestrial vertebrates. The paw of the dog, the hoof of the horse, the manus (forefoot) and pes (hindfoot) of the elephant, and the foot of the human all share some common features of structure, organization and function. Their foot structures function as the load-transmission platform which is essential to balance, standing and types of locomotion (such as walking, trotting, galloping and running). The discipline of biomimetics applies the information gained by comparing the foot morphology of a variety of terrestrial vertebrates to human-engineering problems. For instance, it may provide insights that make it possible to alter the foot's load transmission in people who wear an external orthosis because of paralysis from spinal-cord injury, or who use a prosthesis following the diabetes-related amputation of a leg. Such knowledge can be incorporated in technology that improves a person's balance when standing; enables them to walk more efficiently, and to exercise; or otherwise enhances their quality of life by improving their mobility. Structure Limb and foot structure of representative terrestrial vertebrates: Variability in scaling and limb coordination There is considerable variation in the scale and proportions of body and limb, as well as the nature of loading, during standing and locomotion both among and between quadrupeds and bipeds. The anterior-posterior body mass distribution varies considerably among mammalian quadrupeds, which affects limb loading. When standing, many terrestrial quadrupeds support more of their weight on their forelimbs rather than their hi Document 3::: Proprioception ( ), also called kinaesthesia (or kinesthesia), is the sense of self-movement, force, and body position. Proprioception is mediated by proprioceptors, mechanosensory neurons located within muscles, tendons, and joints. Most animals possess multiple subtypes of proprioceptors, which detect distinct kinematic parameters, such as joint position, movement, and load. Although all mobile animals possess proprioceptors, the structure of the sensory organs can vary across species. Proprioceptive signals are transmitted to the central nervous system, where they are integrated with information from other sensory systems, such as the visual system and the vestibular system, to create an overall representation of body position, movement, and acceleration. In many animals, sensory feedback from proprioceptors is essential for stabilizing body posture and coordinating body movement. System overview In vertebrates, limb movement and velocity (muscle length and the rate of change) are encoded by one group of sensory neurons (type Ia sensory fiber) and another type encode static muscle length (group II neurons). These two types of sensory neurons compose muscle spindles. There is a similar division of encoding in invertebrates; different subgroups of neurons of the Chordotonal organ encode limb position and velocity. To determine the load on a limb, vertebrates use sensory neurons in the Golgi tendon organs: type Ib afferents. These proprioceptors are activated at given muscle forces, which indicate the resistance that muscle is experiencing. Similarly, invertebrates have a mechanism to determine limb load: the Campaniform sensilla. 
These proprioceptors are active when a limb experiences resistance. A third role for proprioceptors is to determine when a joint is at a specific position. In vertebrates, this is accomplished by Ruffini endings and Pacinian corpuscles. These proprioceptors are activated when the joint is at a threshold position, usually at the extre Document 4::: An exoskeleton (from Greek éxō "outer" and skeletós "skeleton") is an external skeleton that both supports the body shape and protects the internal organs of an animal, in contrast to an internal endoskeleton (e.g. that of a human) which is enclosed under other soft tissues. Some large, hard protective exoskeletons are known as "shells". Examples of exoskeletons in animals include the arthropod exoskeleton shared by arthropods (insects, chelicerates, myriapods and crustaceans) and tardigrades, as well as the outer shell of certain sponges and the mollusc shell shared by snails, clams, tusk shells, chitons and nautilus. Some vertebrate animals, such as the turtle, have both an endoskeleton and a protective exoskeleton. Role Exoskeletons contain rigid and resistant components that fulfil a set of functional roles in many animals including protection, excretion, sensing, support, feeding, and acting as a barrier against desiccation in terrestrial organisms. Exoskeletons have roles in defence from pests and predators and in providing an attachment framework for musculature. Arthropod exoskeletons contain chitin; the addition of calcium carbonate makes them harder and stronger, at the price of increased weight. Ingrowths of the arthropod exoskeleton known as apodemes serve as attachment sites for muscles. These structures are composed of chitin and are approximately six times stronger and twice the stiffness of vertebrate tendons. Similar to tendons, apodemes can stretch to store elastic energy for jumping, notably in locusts. Calcium carbonates constitute the shells of molluscs, brachiopods, and some tube-building polychaete worms. Silica forms the exoskeleton in the microscopic diatoms and radiolaria. One mollusc species, the scaly-foot gastropod, even uses the iron sulfides greigite and pyrite. Some organisms, such as some foraminifera, agglutinate exoskeletons by sticking grains of sand and shell to their exterior. Contrary to a common misconception, echinoder The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The vertebrate endoskeleton can also be called what? A. internal skeleton B. deep skeleton C. structural skeleton D. exoskeleton Answer:
sciq-2945
multiple_choice
Seismic waves show that the inner core of the Earth is solid while the outer core is what?
[ "lava", "silicon", "liquid", "gas" ]
C
Relavent Documents: Document 0::: The internal structure of Earth is the layers of the Earth, excluding its atmosphere and hydrosphere. The structure consists of an outer silicate solid crust, a highly viscous asthenosphere and solid mantle, a liquid outer core whose flow generates the Earth's magnetic field, and a solid inner core. Scientific understanding of the internal structure of Earth is based on observations of topography and bathymetry, observations of rock in outcrop, samples brought to the surface from greater depths by volcanoes or volcanic activity, analysis of the seismic waves that pass through Earth, measurements of the gravitational and magnetic fields of Earth, and experiments with crystalline solids at pressures and temperatures characteristic of Earth's deep interior. Global properties "Note: In chondrite model (1), the light element in the core is assumed to be Si. Chondrite model (2) is a model of chemical composition of the mantle corresponding to the model of core shown in chondrite model (1)."Measurements of the force exerted by Earth's gravity can be used to calculate its mass. Astronomers can also calculate Earth's mass by observing the motion of orbiting satellites. Earth's average density can be determined through gravimetric experiments, which have historically involved pendulums. The mass of Earth is about . The average density of Earth is . Layers The structure of Earth can be defined in two ways: by mechanical properties such as rheology, or chemically. Mechanically, it can be divided into lithosphere, asthenosphere, mesospheric mantle, outer core, and the inner core. Chemically, Earth can be divided into the crust, upper mantle, lower mantle, outer core, and inner core. The geologic component layers of Earth are at increasing depths below the surface: Crust and lithosphere Earth's crust ranges from in depth and is the outermost layer. The thin parts are the oceanic crust, which underlie the ocean basins (5–10 km) and is mafic-rich (dense iron-magnesium silic Document 1::: The core–mantle boundary (CMB) of Earth lies between the planet's silicate mantle and its liquid iron–nickel outer core, at a depth of below Earth's surface. The boundary is observed via the discontinuity in seismic wave velocities at that depth due to the differences between the acoustic impedances of the solid mantle and the molten outer core. P-wave velocities are much slower in the outer core than in the deep mantle while S-waves do not exist at all in the liquid portion of the core. Recent evidence suggests a distinct boundary layer directly above the CMB possibly made of a novel phase of the basic perovskite mineralogy of the deep mantle named post-perovskite. Seismic tomography studies have shown significant irregularities within the boundary zone and appear to be dominated by the African and Pacific Large Low-Shear-Velocity Provinces (LLSVP). The uppermost section of the outer core is thought to be about 500–1,800 K hotter than the overlying mantle, creating a thermal boundary layer. The boundary is thought to harbor topography, much like Earth's surface, that is supported by solid-state convection within the overlying mantle. Variations in the thermal properties of the core-mantle boundary may affect how the outer core's iron-rich fluids flow, which are ultimately responsible for Earth's magnetic field. The D″ region The approx. 
200 km thick layer of the lower mantle directly above the boundary is referred to as the D″ region ("D double-prime" or "D prime prime") and is sometimes included in discussions regarding the core–mantle boundary zone. The D″ name originates from geophysicist Keith Bullen's designations for the Earth's layers. His system was to label each layer alphabetically, A through G, with the crust as 'A' and the inner core as 'G'. In his 1942 publication of his model, the entire lower mantle was the D layer. In 1949, Bullen found his 'D' layer to actually be two different layers. The upper part of the D layer, about 1800 km thick, was r Document 2::: The thermal history of Earth involves the study of the cooling history of Earth's interior. It is a sub-field of geophysics. (Thermal histories are also computed for the internal cooling of other planetary and stellar bodies.) The study of the thermal evolution of Earth's interior is uncertain and controversial in all aspects, from the interpretation of petrologic observations used to infer the temperature of the interior, to the fluid dynamics responsible for heat loss, to material properties that determine the efficiency of heat transport. Overview Observations that can be used to infer the temperature of Earth's interior range from the oldest rocks on Earth to modern seismic images of the inner core size. Ancient volcanic rocks can be associated with a depth and temperature of melting through their geochemical composition. Using this technique and some geological inferences about the conditions under which the rock is preserved, the temperature of the mantle can be inferred. The mantle itself is fully convective, so that the temperature in the mantle is basically constant with depth outside the top and bottom thermal boundary layers. This is not quite true because the temperature in any convective body under pressure must increase along an adiabat, but the adiabatic temperature gradient is usually much smaller than the temperature jumps at the boundaries. Therefore, the mantle is usually associated with a single or potential temperature that refers to the mid-mantle temperature extrapolated along the adiabat to the surface. The potential temperature of the mantle is estimated to be about 1350 C today. There is an analogous potential temperature of the core but since there are no samples from the core its present-day temperature relies on extrapolating the temperature along an adiabat from the inner core boundary, where the iron solidus is somewhat constrained. Thermodynamics The simplest mathematical formulation of the thermal history of Earth's interior i Document 3::: A lithosphere () is the rigid, outermost rocky shell of a terrestrial planet or natural satellite. On Earth, it is composed of the crust and the lithospheric mantle, the topmost portion of the upper mantle that behaves elastically on time scales of up to thousands of years or more. The crust and upper mantle are distinguished on the basis of chemistry and mineralogy. Earth's lithosphere Earth's lithosphere, which constitutes the hard and rigid outer vertical layer of the Earth, includes the crust and the lithospheric mantle (or mantle lithosphere), the uppermost part of the mantle that is not convecting. The lithosphere is underlain by the asthenosphere which is the weaker, hotter, and deeper part of the upper mantle that is able to convect. The lithosphere–asthenosphere boundary is defined by a difference in response to stress. 
The lithosphere remains rigid for very long periods of geologic time in which it deforms elastically and through brittle failure, while the asthenosphere deforms viscously and accommodates strain through plastic deformation. The thickness of the lithosphere is thus considered to be the depth to the isotherm associated with the transition between brittle and viscous behavior. The temperature at which olivine becomes ductile (~) is often used to set this isotherm because olivine is generally the weakest mineral in the upper mantle. The lithosphere is subdivided horizontally into tectonic plates, which often include terranes accreted from other plates. History of the concept The concept of the lithosphere as Earth's strong outer layer was described by the English mathematician A. E. H. Love in his 1911 monograph "Some problems of Geodynamics" and further developed by the American geologist Joseph Barrell, who wrote a series of papers about the concept and introduced the term "lithosphere". The concept was based on the presence of significant gravity anomalies over continental crust, from which he inferred that there must exist a strong, s Document 4::: In seismology and other areas involving elastic waves, S waves, secondary waves, or shear waves (sometimes called elastic S waves) are a type of elastic wave and are one of the two main types of elastic body waves, so named because they move through the body of an object, unlike surface waves. S waves are transverse waves, meaning that the direction of particle movement of an S wave is perpendicular to the direction of wave propagation, and the main restoring force comes from shear stress. Therefore, S waves cannot propagate in liquids with zero (or very low) viscosity; however, they may propagate in liquids with high viscosity. The name secondary wave comes from the fact that they are the second type of wave to be detected by an earthquake seismograph, after the compressional primary wave, or P wave, because S waves travel more slowly in solids. Unlike P waves, S waves cannot travel through the molten outer core of the Earth, and this causes a shadow zone for S waves opposite to their origin. They can still propagate through the solid inner core: when a P wave strikes the boundary of molten and solid cores at an oblique angle, S waves will form and propagate in the solid medium. When these S waves hit the boundary again at an oblique angle, they will in turn create P waves that propagate through the liquid medium. This property allows seismologists to determine some physical properties of the Earth's inner core. History In 1830, the mathematician Siméon Denis Poisson presented to the French Academy of Sciences an essay ("memoir") with a theory of the propagation of elastic waves in solids. In his memoir, he states that an earthquake would produce two different waves: one having a certain speed and the other having a speed . At a sufficient distance from the source, when they can be considered plane waves in the region of interest, the first kind consists of expansions and compressions in the direction perpendicular to the wavefront (that is, parallel to the The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Seismic waves show that the inner core of the earth is solid while the outer core is what? A. lava B. silicon C. liquid D. gas Answer:
sciq-1740
multiple_choice
A recent deadly explosion in the Gulf of Mexico exemplified what source of ocean pollution?
[ "greenhouse gases", "algal bloom", "oil spill", "fracking disaster" ]
C
Relavent Documents: Document 0::: The Mega Borg oil spill occurred in the Gulf of Mexico on June 8, 1990, roughly 50 miles off the coast of Texas, when the oil tanker Mega Borg caught on fire and exploded. The cleanup was one of the first practical uses of bioremediation. Initial explosion and cause At 11:30 PM on the evening of Friday June 8, 1990, an explosion in the cargo room of the Norwegian oil tanker the Mega Borg “ruptured the bulkhead between the pump room and the engine room”, causing the ship to catch fire and begin to leak oil. The 853-foot-long, 15-year-old vessel was about 50 miles off the coast of Galveston, Texas when the explosion occurred. The weather at the time was calm and the tanker had easily passed Coast Guard safety inspections in April earlier that year. While the direct cause of the engine room explosion remains unknown, the initial blast occurred during a lightering process in which the Mega Borg was transferring oil onto a smaller Italian tanker, the Fraqmura, in order to then transport the oil to Houston. This transfer was necessary, as the Mega Borg was too large to dock at the Texas port. Three million gallons of the total 38 million gallons of light Angolan Palanca crude oil on board the tanker were able to be transferred to the Fraqmura before the blast. Two days after the initial blast, there were five successive explosions in a ten-minute window. These explosions greatly increased the rate of the spill from the tanker into the water. By the end of that day (June 11) the tanker stern had dropped 58 feet and had stabilized five feet above the water line. This was either due to shifting cargo or the tanker taking on water, which would be an indication of the vessel’s imminent sinking. The light crude oil spilled in the Mega Borg incident was brown and evaporated much quicker than the heavy crude oil in spills such as the Exxon Valdez. This means that the oil is less likely to heavily coat nearby beaches, flora and fauna, however the tanker was carrying more oil Document 1::: Marine technology is defined by WEGEMT (a European association of 40 universities in 17 countries) as "technologies for the safe use, exploitation, protection of, and intervention in, the marine environment." In this regard, according to WEGEMT, the technologies involved in marine technology are the following: naval architecture, marine engineering, ship design, ship building and ship operations; oil and gas exploration, exploitation, and production; hydrodynamics, navigation, sea surface and sub-surface support, underwater technology and engineering; marine resources (including both renewable and non-renewable marine resources); transport logistics and economics; inland, coastal, short sea and deep sea shipping; protection of the marine environment; leisure and safety. Education and training According to the Cape Fear Community College of Wilmington, North Carolina, the curriculum for a marine technology program provides practical skills and academic background that are essential in succeeding in the area of marine scientific support. Through a marine technology program, students aspiring to become marine technologists will become proficient in the knowledge and skills required of scientific support technicians. 
The educational preparation includes classroom instructions and practical training aboard ships, such as how to use and maintain electronic navigation devices, physical and chemical measuring instruments, sampling devices, and data acquisition and reduction systems aboard ocean-going and smaller vessels, among other advanced equipment. As far as marine technician programs are concerned, students learn hands-on to trouble shoot, service and repair four- and two-stroke outboards, stern drive, rigging, fuel & lube systems, electrical including diesel engines. Relationship to commerce Marine technology is related to the marine science and technology industry, also known as maritime commerce. The Executive Office of Housing and Economic Development (EOHED Document 2::: Trashing the Planet: How Science Can Help Us Deal With Acid Rain, Depletion of the Ozone, and Nuclear Waste (Among Other Things) is a 1990 book by zoologist and Governor of Washington Dixy Lee Ray. The book talks about the seriousness about acid rain, the problems with the ozone layer and other environmental issues. Ray co-wrote the book with journalist Lou Guzzo. Document 3::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 4::: The Ramón Margalef Award for Excellence in Education was launched in 2008 by the Association for the Sciences of Limnology and Oceanography to recognize innovations and excellence in teaching and mentoring students in the fields of limnology and oceanography. 
Criteria for the award require "adherence to the highest standards of excellence" in pedagogy as well as verification that the teaching techniques have furthered the field of aquatic science. The award is not affiliated with the Ramon Margalef Prize in Ecology, often referred to as the Ramon Margalef Award, given by the Generalitat de Catalunya in Barcelona. The award has been presented annually since 2009. Winners The winners have included: The information in this table is from the Association for the Sciences of Limnology and Oceanography. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. A recent deadly explosion in the Gulf of Mexico exemplified what source of ocean pollution? A. greenhouse gases B. algal bloom C. oil spill D. fracking disaster Answer:
sciq-4207
multiple_choice
What type of waves start when a source of energy causes a disturbance in the medium?
[ "mechanical waves", "fluid waves", "magnetic waves", "mechanical currents" ]
A
Relavent Documents: Document 0::: In physics, a mechanical wave is a wave that is an oscillation of matter, and therefore transfers energy through a medium. While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial equilibrium position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves. Some of the most common examples of mechanical waves are water waves, sound waves, and seismic waves. Like all waves, mechanical waves transport energy. This energy propagates in the same direction as the wave. A wave requires an initial energy input; once this initial energy is added, the wave travels through the medium until all its energy is transferred. In contrast, electromagnetic waves require no medium, but can still travel through one. Transverse wave A transverse wave is the form of a wave in which particles of medium vibrate about their mean position perpendicular to the direction of the motion of the wave. To see an example, move an end of a Slinky (whose other end is fixed) to the left-and-right of the Slinky, as opposed to to-and-fro. Light also has properties of a transverse wave, although it is an electromagnetic wave. Longitudinal wave Longitudinal waves cause the medium to vibrate parallel to the direction of the wave. It consists of multiple compressions and rarefactions. The rarefaction is the farthest distance apart in the longitudinal wave and the compression is the closest distance together. The speed of the longitudinal wave is increased in higher index of refraction, due to the closer proximity of the atoms in the medium that is being compressed. Sound is a longitudinal wave. Surface waves This type of wave travels along the surface or interface between two media. An example of a surface wave would be waves in a pool, or in an ocean Document 1::: In physics, a transverse wave is a wave that oscillates perpendicularly to the direction of the wave's advance. In contrast, a longitudinal wave travels in the direction of its oscillations. All waves move energy from place to place without transporting the matter in the transmission medium if there is one. Electromagnetic waves are transverse without requiring a medium. The designation “transverse” indicates the direction of the wave is perpendicular to the displacement of the particles of the medium through which it passes, or in the case of EM waves, the oscillation is perpendicular to the direction of the wave. A simple example is given by the waves that can be created on a horizontal length of string by anchoring one end and moving the other end up and down. Another example is the waves that are created on the membrane of a drum. The waves propagate in directions that are parallel to the membrane plane, but each point in the membrane itself gets displaced up and down, perpendicular to that plane. Light is another example of a transverse wave, where the oscillations are the electric and magnetic fields, which point at right angles to the ideal light rays that describe the direction of propagation. Transverse waves commonly occur in elastic solids due to the shear stress generated; the oscillations in this case are the displacement of the solid particles away from their relaxed position, in directions perpendicular to the propagation of the wave. 
These displacements correspond to a local shear deformation of the material. Hence a transverse wave of this nature is called a shear wave. Since fluids cannot resist shear forces while at rest, propagation of transverse waves inside the bulk of fluids is not possible. In seismology, shear waves are also called secondary waves or S-waves. Transverse waves are contrasted with longitudinal waves, where the oscillations occur in the direction of the wave. The standard example of a longitudinal wave is a sound wave or " Document 2::: Wave loading is most commonly the application of a pulsed or wavelike load to a material or object. This is most commonly used in the analysis of piping, ships, or building structures which experience wind, water, or seismic disturbances. Examples of wave loading Offshore storms and pipes: As large waves pass over shallowly buried pipes, water pressure increases above it. As the trough approaches, pressure over the pipe drops and this sudden and repeated variation in pressure can break pipes. The difference in pressure for a wave with wave height of about 10 m would be equivalent to one atmosphere (101.3 kPa or 14.7 psi) pressure variation between crest and trough and repeated fluctuations over pipes in relatively shallow environments could set up resonance vibrations within pipes or structures and cause problems. Engineering oil platforms: The effects of wave-loading are a serious issue for engineers designing oil platforms, which must contend with the effects of wave loading, and have devised a number of algorithms to do so. Document 3::: This is a list of wave topics. 0–9 21 cm line A Abbe prism Absorption spectroscopy Absorption spectrum Absorption wavemeter Acoustic wave Acoustic wave equation Acoustics Acousto-optic effect Acousto-optic modulator Acousto-optics Airy disc Airy wave theory Alfvén wave Alpha waves Amphidromic point Amplitude Amplitude modulation Animal echolocation Antarctic Circumpolar Wave Antiphase Aquamarine Power Arrayed waveguide grating Artificial wave Atmospheric diffraction Atmospheric wave Atmospheric waveguide Atom laser Atomic clock Atomic mirror Audience wave Autowave Averaged Lagrangian B Babinet's principle Backward wave oscillator Bandwidth-limited pulse beat Berry phase Bessel beam Beta wave Black hole Blazar Bloch's theorem Blueshift Boussinesq approximation (water waves) Bow wave Bragg diffraction Bragg's law Breaking wave Bremsstrahlung, Electromagnetic radiation Brillouin scattering Bullet bow shockwave Burgers' equation Business cycle C Capillary wave Carrier wave Cherenkov radiation Chirp Ernst Chladni Circular polarization Clapotis Closed waveguide Cnoidal wave Coherence (physics) Coherence length Coherence time Cold wave Collimated light Collimator Compton effect Comparison of analog and digital recording Computation of radiowave attenuation in the atmosphere Continuous phase modulation Continuous wave Convective heat transfer Coriolis frequency Coronal mass ejection Cosmic microwave background radiation Coulomb wave function Cutoff frequency Cutoff wavelength Cymatics D Damped wave Decollimation Delta wave Dielectric waveguide Diffraction Direction finding Dispersion (optics) Dispersion (water waves) Dispersion relation Dominant wavelength Doppler effect Doppler radar Douglas Sea Scale Draupner wave Droplet-shaped wave Duhamel's principle E E-skip Earthquake Echo (phenomenon) Echo sounding Echolocation (animal) Echolocation (human) Eddy (fluid dynamics) Edge wave Eikonal equation Ekman layer Ekman spiral Ekman 
transport El Niño–Southern Oscillation El Document 4::: Inertial waves, also known as inertial oscillations, are a type of mechanical wave possible in rotating fluids. Unlike surface gravity waves commonly seen at the beach or in the bathtub, inertial waves flow through the interior of the fluid, not at the surface. Like any other kind of wave, an inertial wave is caused by a restoring force and characterized by its wavelength and frequency. Because the restoring force for inertial waves is the Coriolis force, their wavelengths and frequencies are related in a peculiar way. Inertial waves are transverse. Most commonly they are observed in atmospheres, oceans, lakes, and laboratory experiments. Rossby waves, geostrophic currents, and geostrophic winds are examples of inertial waves. Inertial waves are also likely to exist in the molten core of the rotating Earth. Restoring force Inertial waves are restored to equilibrium by the Coriolis force, a result of rotation. To be precise, the Coriolis force arises (along with the centrifugal force) in a rotating frame to account for the fact that such a frame is always accelerating. Inertial waves, therefore, cannot exist without rotation. More complicated than tension on a string, the Coriolis force acts at a 90° angle to the direction of motion, and its strength depends on the rotation rate of the fluid. These two properties lead to the peculiar characteristics of inertial waves. Characteristics Inertial waves are possible only when a fluid is rotating, and exist in the bulk of the fluid, not at its surface. Like light waves, inertial waves are transverse, which means that their vibrations occur perpendicular to the direction of wave travel. One peculiar geometrical characteristic of inertial waves is that their phase velocity, which describes the movement of the crests and troughs of the wave, is perpendicular to their group velocity, which is a measure of the propagation of energy. Whereas a sound wave or an electromagnetic wave of any frequency is possible, inertial wa The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of waves start when a source of energy causes a disturbance in the medium? A. mechanical waves B. fluid waves C. magnetic waves D. mechanical currents Answer:
sciq-11151
multiple_choice
How do you add two-dimensional vectors?
[ "graphically", "linearly", "geometrically", "topologically" ]
C
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. 
Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 2::: Further Mathematics is the title given to a number of advanced secondary mathematics courses. The term "Higher and Further Mathematics", and the term "Advanced Level Mathematics", may also refer to any of several advanced mathematics courses at many institutions. In the United Kingdom, Further Mathematics describes a course studied in addition to the standard mathematics AS-Level and A-Level courses. In the state of Victoria in Australia, it describes a course delivered as part of the Victorian Certificate of Education (see § Australia (Victoria) for a more detailed explanation). Globally, it describes a course studied in addition to GCE AS-Level and A-Level Mathematics, or one which is delivered as part of the International Baccalaureate Diploma. In other words, more mathematics can also be referred to as part of advanced mathematics, or advanced level math. United Kingdom Background A qualification in Further Mathematics involves studying both pure and applied modules. Whilst the pure modules (formerly known as Pure 4–6 or Core 4–6, now known as Further Pure 1–3, where 4 exists for the AQA board) build on knowledge from the core mathematics modules, the applied modules may start from first principles. The structure of the qualification varies between exam boards. With regard to Mathematics degrees, most universities do not require Further Mathematics, and may incorporate foundation math modules or offer "catch-up" classes covering any additional content. Exceptions are the University of Warwick, the University of Cambridge which requires Further Mathematics to at least AS level; University College London requires or recommends an A2 in Further Maths for its maths courses; Imperial College requires an A in A level Further Maths, while other universities may recommend it or may promise lower offers in return. Some schools and colleges may not offer Further mathematics, but online resources are available Although the subject has about 60% of its cohort obtainin Document 3::: Additional Mathematics is a qualification in mathematics, commonly taken by students in high-school (or GCSE exam takers in the United Kingdom). It features a range of problems set out in a different format and wider content to the standard Mathematics at the same level. Additional Mathematics in Singapore In Singapore, Additional Mathematics is an optional subject offered to pupils in secondary school—specifically those who have an aptitude in Mathematics and are in the Normal (Academic) stream or Express stream. The syllabus covered is more in-depth as compared to Elementary Mathematics, with additional topics including Algebra binomial expansion, proofs in plane geometry, differential calculus and integral calculus. Additional Mathematics is also a prerequisite for students who are intending to offer H2 Mathematics and H2 Further Mathematics at A-level (if they choose to enter a Junior College after secondary school). 
Students without Additional Mathematics at the 'O' level will usually be offered H1 Mathematics instead. Examination Format The syllabus was updated starting with the 2021 batch of candidates. There are two written papers, each comprising half of the weightage towards the subject. Each paper is 2 hours 15 minutes long and worth 90 marks. Paper 1 has 12 to 14 questions, while Paper 2 has 9 to 11 questions. Generally, Paper 2 would have a graph plotting question based on linear law. GCSE Additional Mathematics in Northern Ireland In Northern Ireland, Additional Mathematics was offered as a GCSE subject by the local examination board, CCEA. There were two examination papers: one which tested topics in Pure Mathematics, and one which tested topics in Mechanics and Statistics. It was discontinued in 2014 and replaced with GCSE Further Mathematics—a new qualification whose level exceeds both those offered by GCSE Mathematics, and the analogous qualifications offered in England. Further Maths IGCSE and Additional Maths FSMQ in England Starting from Document 4::: The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020. Structure The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five digits long, depending on how many levels of the classification scheme are used. The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example: 53 is the classification for differential geometry 53A is the classification for classical differential geometry 53A45 is the classification for vector and tensor analysis First level At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including: Fluid mechanics Quantum mechanics Geophysics Optics and electromagnetic theory All valid MSC classification codes must have at least the first-level identifier. Second level The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline. For example, for differential geometry, the top-level code is 53, and the second-level codes are: A for classical differential geometry B for local differential geometry C for glo The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How do you add two dimensional vectors? A. graphically B. linearly C. geometrically D. topologically Answer:
sciq-6734
multiple_choice
What is the main form of energy storage in plants?
[ "dioxide", "starch", "liquid", "nitrogen" ]
B
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: The energy content of biofuel is the chemical energy contained in a given biofuel, measured per unit mass of that fuel, as specific energy, or per unit of volume of the fuel, as energy density. A biofuel is a fuel produced from recently living organisms. Biofuels include bioethanol, an alcohol made by fermentation—often used as a gasoline additive, and biodiesel, which is usually used as a diesel additive. Specific energy is energy per unit mass, which is used to describe the chemical energy content of a fuel, expressed in SI units as joule per kilogram (J/kg) or equivalent units. Energy density is the amount of chemical energy per unit volume of the fuel, expressed in SI units as joule per litre (J/L) or equivalent units. Energy and CO2 output of common biofuels The table below includes entries for popular substances already used for their energy, or being discussed for such use. The second column shows specific energy, the energy content in megajoules per unit of mass in kilograms, useful in understanding the energy that can be extracted from the fuel. The third column in the table lists energy density, the energy content per liter of volume, which is useful for understanding the space needed for storing the fuel. The final two columns deal with the carbon footprint of the fuel. 
The fourth column contains the proportion of CO2 released when the fuel is converted for energy, with respect to its starting mass, and the fifth column lists the energy produced per kilogram of CO2 produced. As a guideline, a higher number in this column is better for the environment. But these numbers do not account for other green house gases released during burning, production, storage, or shipping. For example, methane may have hidden environmental costs that are not reflected in the table. Notes Yields of common crops associated with biofuels production Notes See also Eichhornia crassipes#Bioenergy Syngas Conversion of units Energy density Heat of combustion Document 2::: {{DISPLAYTITLE: C3 carbon fixation}} carbon fixation is the most common of three metabolic pathways for carbon fixation in photosynthesis, the other two being and CAM. This process converts carbon dioxide and ribulose bisphosphate (RuBP, a 5-carbon sugar) into two molecules of 3-phosphoglycerate through the following reaction: CO2 + H2O + RuBP → (2) 3-phosphoglycerate This reaction was first discovered by Melvin Calvin, Andrew Benson and James Bassham in 1950. C3 carbon fixation occurs in all plants as the first step of the Calvin–Benson cycle. (In and CAM plants, carbon dioxide is drawn out of malate and into this reaction rather than directly from the air.) Plants that survive solely on fixation ( plants) tend to thrive in areas where sunlight intensity is moderate, temperatures are moderate, carbon dioxide concentrations are around 200 ppm or higher, and groundwater is plentiful. The plants, originating during Mesozoic and Paleozoic eras, predate the plants and still represent approximately 95% of Earth's plant biomass, including important food crops such as rice, wheat, soybeans and barley. plants cannot grow in very hot areas at today's atmospheric CO2 level (significantly depleted during hundreds of millions of years from above 5000 ppm) because RuBisCO incorporates more oxygen into RuBP as temperatures increase. This leads to photorespiration (also known as the oxidative photosynthetic carbon cycle, or C2 photosynthesis), which leads to a net loss of carbon and nitrogen from the plant and can therefore limit growth. plants lose up to 97% of the water taken up through their roots by transpiration. In dry areas, plants shut their stomata to reduce water loss, but this stops from entering the leaves and therefore reduces the concentration of in the leaves. This lowers the :O2 ratio and therefore also increases photorespiration. and CAM plants have adaptations that allow them to survive in hot and dry areas, and they can therefore out-compete Document 3::: The Bionic Leaf is a biomimetic system that gathers solar energy via photovoltaic cells that can be stored or used in a number of different functions. Bionic leaves can be composed of both synthetic (metals, ceramics, polymers, etc.) and organic materials (bacteria), or solely made of synthetic materials. The Bionic Leaf has the potential to be implemented in communities, such as urbanized areas to provide clean air as well as providing needed clean energy. History In 2009 at MIT, Daniel Nocera's lab first developed the "artificial leaf", a device made from silicon and an anode electrocatalyst for the oxidation of water, capable of splitting water into hydrogen and oxygen gases. In 2012, Nocera came to Harvard and The Silver Lab of Harvard Medical School joined Nocera’s team. Together the teams expanded the existing technology to create the Bionic Leaf. 
It merged the concept of the artificial leaf with genetically engineered bacteria that feed on the hydrogen and convert CO2 in the air into alcohol fuels or chemicals. The first version of the teams Bionic Leaf was created in 2015 but the catalyst used was harmful to the bacteria. In 2016, a new catalyst was designed to solve this issue, named the "Bionic Leaf 2.0". Other versions of artificial leaves have been developed by the California Institute of Technology and the Joint Center for Artificial Photosynthesis, the University of Waterloo, and the University of Cambridge. Mechanics Photosynthesis In natural photosynthesis, photosynthetic organisms produce energy-rich organic molecules from water and carbon dioxide by using solar radiation. Therefore, the process of photosynthesis removes carbon dioxide, a greenhouse gas, from the air. Artificial photosynthesis, as performed by the Bionic Leaf, is approximately 10 times more efficient than natural photosynthesis. Using a catalyst, the Bionic Leaf can remove excess carbon dioxide in the air and convert that to useful alcohol fuels, like isopropanol and isobutan Document 4::: Lignocellulose refers to plant dry matter (biomass), so called lignocellulosic biomass. It is the most abundantly available raw material on the Earth for the production of biofuels. It is composed of two kinds of carbohydrate polymers, cellulose and hemicellulose, and an aromatic-rich polymer called lignin. Any biomass rich in cellulose, hemicelluloses, and lignin are commonly referred to as lignocellulosic biomass. Each component has a distinct chemical behavior. Being a composite of three very different components makes the processing of lignocellulose challenging. The evolved resistance to degradation or even separation is referred to as recalcitrance. Overcoming this recalcitrance to produce useful, high value products requires a combination of heat, chemicals, enzymes, and microorganisms. These carbohydrate-containing polymers contain different sugar monomers (six and five carbon sugars) and they are covalently bound to lignin. Lignocellulosic biomass can be broadly classified as virgin biomass, waste biomass, and energy crops. Virgin biomass includes plants. Waste biomass is produced as a low value byproduct of various industrial sectors such as agriculture (corn stover, sugarcane bagasse, straw etc.) and forestry (saw mill and paper mill discards). Energy crops are crops with a high yield of lignocellulosic biomass produced as a raw material for the production of second-generation biofuel; examples include switchgrass (Panicum virgatum) and Elephant grass. The biofuels generated from these energy crops are sources of sustainable energy. Chemical composition Lignocellulose consists of three components, each with properties that pose challenges to commercial applications. lignin is a heterogeneous, highly crosslinked polymer akin to phenol-formaldehyde resins. It is derived from 3-4 monomers, the ratio of which varies from species to species. The crosslinking is extensive. Being rich in aromatics, lignin is hydrophobic and relatively rigid. Lignin confe The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the main form of energy storage in plants? A. dioxide B. starch C. liquid D. nitrogen Answer:
ai2_arc-41
multiple_choice
How is a moth's life cycle most different from that of an insect that goes through incomplete metamorphosis?
[ "It creates a cocoon.", "It becomes an adult.", "It lays eggs.", "It eats leaves." ]
A
Relavent Documents: Document 0::: In animal dormancy, diapause is the delay in development in response to regular and recurring periods of adverse environmental conditions. It is a physiological state with very specific initiating and inhibiting conditions. The mechanism is a means of surviving predictable, unfavorable environmental conditions, such as temperature extremes, drought, or reduced food availability. Diapause is observed in all the life stages of arthropods, especially insects. Activity levels of diapausing stages can vary considerably among species. Diapause may occur in a completely immobile stage, such as the pupae and eggs, or it may occur in very active stages that undergo extensive migrations, such as the adult monarch butterfly, Danaus plexippus. In cases where the insect remains active, feeding is reduced and reproductive development is slowed or halted. Embryonic diapause, a somewhat similar phenomenon, occurs in over 130 species of mammals, possibly even in humans, and in the embryos of many of the oviparous species of fish in the order Cyprinodontiformes. Phases of insect diapause Diapause in insects is a dynamic process consisting of several distinct phases. While diapause varies considerably from one taxon of insects to another, these phases can be characterized by particular sets of metabolic processes and responsiveness of the insect to certain environmental stimuli. For example, Sepsis cynipsea flies primarily use temperature to determine when to enter diapause. Diapause can occur during any stage of development in arthropods, but each species exhibits diapause in specific phases of development. Reduced oxygen consumption is typical as is reduced movement and feeding. In Polistes exclamans, a social wasp, only the queen is said to be able to undergo diapause. Comparison of diapause periods The sensitive stage is the period when stimulus must occur to trigger diapause in the organism. Examples of sensitive stage/diapause periods in various insects: Induction The indu Document 1::: Statary is a term currently applied in fields such as ecology, ethology, psychology. In modern use it contrasts on the one hand with such concepts as migratory, nomadic, or shifting, and on the other with static or immobile. The word also is of historical interest in its change of meaning as its usage changed. Current usage In current usage in fields such as biology, statary commonly means in a particular location or state, but not rigidly so. Army ant colonies for example are said to be in a statary phase when they occupy one bivouac for an extended period instead of just overnight. This is as opposed to a nomadic phase, in which they travel and forage practically daily. This does not mean that ant colonies in a statary phase do not move nor even that they do not forage while statary; they often do both, sometimes daily. Correspondingly a colony in a nomadic phase does not travel without rest; it bivouacs for the night. The significance of the terms is that the colonies' behaviour patterns differ radically according to their activity phase; one pattern favours maintaining a persistent presence where brood is being raised, whereas the other favours continual nomadic wandering into new foraging grounds. Such phases have raised interest in studies in aspects of comparative psychology and evolution. The term statary also applies in contexts other than ants or colonial organisms. 
Swarm-forming species of locusts go beyond having statary and nomadic phases of behaviour; their growing nymphs actually develop into different adult morphologies, depending on whether the conditions during their growth favour swarming or not. Locusts that adopt the swarming morphology are said to be the migratory morphs, while the rest are called statary morphs. Effectively similar morphs occur in some other insect species, such as army worm. In some technical fields statary need not refer literally to location or motion, but refer figuratively to their having particular characteristic but Document 2::: This glossary of entomology describes terms used in the formal study of insect species by entomologists. A–C A synthetic chlorinated hydrocarbon insecticide, toxic to vertebrates. Though its phytotoxicity is low, solvents in some formulations may damage certain crops. cf. the related Dieldrin, Endrin, Isodrin D–F A synthetic chlorinated hydrocarbon insecticide, toxic to vertebrates. cf. the related Aldrin, Endrin, Isodrin A synthetic chlorinated hydrocarbon insecticide, toxic to vertebrates. Though its phytotoxicity is low, solvents in some formulations may damage certain crops. cf. the related Dieldrin, Aldrin, Isodrin G–L A synthetic chlorinated hydrocarbon insecticide, toxic to vertebrates. Though its phytotoxicity is low, solvents in some formulations may damage certain crops. cf. the related Dieldrin, Aldrin, Endrin M–O P–R S–Z Figures See also Anatomical terms of location Butterfly Caterpillar Comstock–Needham system External morphology of Lepidoptera Glossary of ant terms Glossary of spider terms Glossary of scientific names Insect wing Pupa Document 3::: Eclosion assays are experimental procedures used to study the process of eclosion in insects, particularly in the model organism drosophila (fruit flies). Eclosion is the process in which an adult insect emerges from its pupal case, or a larval insect hatches from its egg. In holometabolous insects, the circadian clock regulates the timing of adult emergence. The daily rhythm of adult emergence in these insects was among the first circadian rhythms to be investigated. The circadian clock in these insects enforces a daily pattern of emergence by permitting or triggering eclosion during specific time frames and preventing emergence during other periods. The purpose of an eclosion assay is to count the number of flies that emerge over time from a developing population, which provides information on the circadian clock in the experimentally manipulated drosophila. For example, with an eclosion monitor, scientists can study how knocking out a certain gene changes the behavioral expression of a drosophila's biological clock. Additionally, the circadian rhythm of adult insect emergence was among the earliest chronobiological phenomena to be examined, significantly impacting the field of chronobiology through its contributions to understanding temperature compensation, phase response curves, and reactions to skeleton photoperiods. The eclosion assay serves as a vital tool for researchers delving into chronobiology studies. Bang box The bang box is the first experimental assay developed to measure eclosion in fruit flies. The first model of the bang box was developed at a Princeton University laboratory, mainly accredited to Colin Pittendrigh, to measure the time that adult drosophilids emerged from pupae populations in a controlled light and temperature environment. 
This original model works by securing pupae on plastic boxes that can be temperature controlled. The pupae are harvested and attached to a brass holding plate. The holding plate is then secured to face a bras Document 4::: Many populations of Lepidoptera (butterflies or moths) migrate, sometimes long distances, to and from areas which are only suitable for part of the year. Lepidopterans migrate on all continents except Antarctica, including from or within subtropical and tropical areas. By migrating, these species can avoid unfavorable circumstances, including weather, food shortage, or over-population. In some lepidopteran species, all individuals migrate; in others, only some migrate. The best-known lepidopteran migration is that of the eastern population of the monarch butterfly which migrates from southern Canada to wintering sites in central Mexico. In late winter/early spring, the adult monarchs leave the Transvolcanic mountain range in Mexico for a more northern climate. Mating occurs and the females begin seeking out milkweed to lay their eggs, usually first in northern Mexico and southern Texas. The caterpillars hatch and develop into adults that move north, where more offspring can go as far as central Canada until next migratory cycle. The Danaids in South India are prominent migrants, between the Eastern Ghats and Western Ghats. Three species will be involved in this, namely Tirumala septentrionis, Euploea core, and Euploea sylvester. Sometimes they are joined by lemon pansy (Junonia lemonias), common emigrant (Catopsilia pomona), tawny coster (Acraea terpsicore) and blue tiger (Tirumala limniace). Definition Migration in Lepidoptera means a regular, predictable movement of a population from one place to another, determined by the seasons. There is no unambiguous definition of migratory butterfly or migratory moth, and this also applies to proposals to divide them into classes. Migration means different things to behavioral scientists and ecologists. The former emphasize the act of moving whereas the latter discriminate between whether the movement has been ecologically significant or not. Migration may be viewed as "a behavioural process with ecological consequence The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How is a moth's life cycle most different from an insect that goes through incomplete metamorphosis? A. It creates a cocoon. B. It becomes an adult. C. It lays eggs. D. It eats leaves. Answer:
sciq-6955
multiple_choice
Volcano chains form as an oceanic plate moves over what?
[ "melt spot", "water spot", "dust spot", "hot spot" ]
D
Relavent Documents: Document 0::: Maui Nui is a modern geologists' name given to a prehistoric Hawaiian island and the corresponding modern biogeographic region. Maui Nui is composed of four modern islands: Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe. Administratively, the four modern islands comprise Maui County (and a tiny part of Molokaʻi called Kalawao County). Long after the breakup of Maui Nui, the four modern islands retained plant and animal life similar to each other. Thus, Maui Nui is not only a prehistoric island but also a modern biogeographic region. Geology Maui Nui formed and broke up during the Pleistocene Epoch, which lasted from about 2.58 million to 11,700 years ago. Maui Nui is built from seven shield volcanoes. The three oldest are Penguin Bank, West Molokaʻi, and East Molokaʻi, which probably range from slightly over to slightly less than 2 million years old. The four younger volcanoes are Lāna‘i, West Maui, Kaho‘olawe, and Haleakalā, which probably formed between 1.5 and 2 million years ago. At its prime 1.2 million years ago, Maui Nui was , 50% larger than today's Hawaiʻi Island. The island of Maui Nui included four modern islands (Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe) and landmass west of Molokaʻi called Penguin Bank, which is now completely submerged. Maui Nui broke up as rising sea levels flooded the connections between the volcanoes. The breakup was complex because global sea levels rose and fell intermittently during the Quaternary glaciation. About 600,000 years ago, the connection between Molokaʻi and the island of Lāna‘i/Maui/Kahoʻolawe became intermittent. About 400,000 years ago, the connection between Lāna‘i and Maui/Kahoʻolawe also became intermittent. The connection between Maui and Kahoʻolawe was permanently broken between 200,000 and 150,000 years ago. Maui, Lāna‘i, and Molokaʻi were connected intermittently thereafter, most recently about 18,000 years ago during the Last Glacial Maximum. Today, the sea floor between these four islands is relatively shallow Document 1::: The plate theory is a model of volcanism that attributes all volcanic activity on Earth, even that which appears superficially to be anomalous, to the operation of plate tectonics. According to the plate theory, the principal cause of volcanism is extension of the lithosphere. Extension of the lithosphere is a function of the lithospheric stress field. The global distribution of volcanic activity at a given time reflects the contemporaneous lithospheric stress field, and changes in the spatial and temporal distribution of volcanoes reflect changes in the stress field. The main factors governing the evolution of the stress field are: Changes in the configuration of plate boundaries. Vertical motions. Thermal contraction. Lithospheric extension enables pre-existing melt in the crust and mantle to escape to the surface. If extension is severe and thins the lithosphere to the extent that the asthenosphere rises, then additional melt is produced by decompression upwelling. Origins of the plate theory Developed during the late 1960s and 1970s, plate tectonics provided an elegant explanation for most of the Earth's volcanic activity. At spreading boundaries where plates move apart, the asthenosphere decompresses and melts to form new oceanic crust. At subduction zones, slabs of oceanic crust sink into the mantle, dehydrate, and release volatiles which lower the melting temperature and give rise to volcanic arcs and back-arc extensions. 
Several volcanic provinces, however, do not fit this simple picture and have traditionally been considered exceptional cases which require a non-plate-tectonic explanation. Just prior to the development of plate tectonics in the early 1960s, the Canadian Geophysicist John Tuzo Wilson suggested that chains of volcanic islands form from movement of the seafloor over relatively stationary hotspots in stable centres of mantle convection cells. In the early 1970s, Wilson's idea was revived by the American geophysicist W. Jason Morgan. In Document 2::: The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events. Correlating the rock record At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods—for the particular geographic region or regions. The geologic record is in no one place entirely complete for where geologic forces one age provide a low-lying region accumulating deposits much like a layer cake, in the next may have uplifted the region, and the same area is instead one that is weathering and being torn down by chemistry, wind, temperature, and water. This is to say that in a given location, the geologic record can be and is quite often interrupted as the ancient local environment was converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go deep thoroughly support the law of superposition. However using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale Document 3::: In geodynamics lower crustal flow is the mainly lateral movement of material within the lower part of the continental crust by a ductile flow mechanism. It is thought to be an important process during both continental collision and continental break-up. Rheology The tendency of the lower crust to flow is controlled by its rheology. Ductile flow in the lower crust is assumed to be controlled by the deformation of quartz and/or plagioclase feldspar as its composition is thought to be granodioritic to dioritic. With normal thickness continental crust and a normal geothermal gradient, the lower crust, below the brittle–ductile transition zone, exhibits ductile flow behaviour under geological strain rates. Factors that can vary this behaviour include: water content, thickness, heat flow and strain-rate. 
Collisional belts In some areas of continental collision, the lower part of the thickened crust that results is interpreted to flow laterally, such as in the Tibetan plateau, and the Altiplano in the Bolivian Andes. Document 4::: In marine geology, a guyot (), also called a tablemount, is an isolated underwater volcanic mountain (seamount) with a flat top more than below the surface of the sea. The diameters of these flat summits can exceed . Guyots are most commonly found in the Pacific Ocean, but they have been identified in all the oceans except the Arctic Ocean. They are analogous to tables (such as mesas) on land. History Guyots were first recognized in 1945 by Harry Hammond Hess, who collected data using echo-sounding equipment on a ship he commanded during World War II. His data showed that some undersea mountains had flat tops. Hess called these undersea mountains "guyots", after the 19th-century geographer Arnold Henry Guyot. Hess postulated they were once volcanic islands that were beheaded by wave action, yet they are now deep under sea level. This idea was used to help bolster the theory of plate tectonics. Formation Guyots show evidence of having once been above the surface, with gradual subsidence through stages from fringed reefed mountain, coral atoll, and finally a flat-topped submerged mountain. Seamounts are made by extrusion of lavas piped upward in stages from sources within the Earth's mantle, usually hotspots, to vents on the seafloor. The volcanism invariably ceases after a time, and other processes dominate. When an undersea volcano grows high enough to be near or breach the ocean surface, wave action and/or coral reef growth tend to create a flat-topped edifice. However, all ocean crust and guyots form from hot magma and/or rock, which cools over time. As the lithosphere that the future guyot rides on slowly cools, it becomes denser and sinks lower into Earth's mantle, through the process of isostasy. In addition, the erosive effects of waves and currents are found mostly near the surface: the tops of guyots generally lie below this higher-erosion zone. This is the same process that gives rise to higher seafloor topography at oceanic ridges, such as the Mid The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Volcano chains form as an oceanic plate moves over what? A. melt spot B. Water spot C. dust spot D. hot spot Answer:
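The hot-spot model invites a simple consistency check: a volcano's distance from the active hot spot, divided by its age, gives the average plate speed. A minimal sketch using rough, commonly quoted figures for the Hawaiian chain (the distances and ages below are approximate and are not taken from the documents above):

# (name, approximate distance from the Kilauea hot spot in km, age in Myr)
chain = [
    ("Kauai", 519.0, 5.1),
    ("Midway", 2432.0, 27.7),
]

for name, dist_km, age_myr in chain:
    cm_per_yr = (dist_km * 1e5) / (age_myr * 1e6)   # km -> cm, Myr -> yr
    print(f"{name}: ~{cm_per_yr:.1f} cm/yr of plate motion")

# Both work out to roughly 9-10 cm/yr, the usual estimate for the
# Pacific plate's motion over the Hawaiian hot spot.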
sciq-40
multiple_choice
What must replicate in the cell cycle before meiosis I takes place?
[ "DNA", "sperm", "cell walls", "meiotic fluid" ]
A
Relavent Documents: Document 0::: Interkinesis or interphase II is a period of rest that cells of some species enter during meiosis between meiosis I and meiosis II. No DNA replication occurs during interkinesis; however, replication does occur during the interphase I stage of meiosis (See meiosis I). During interkinesis, the spindles of the first meiotic division disassembles and the microtubules reassemble into two new spindles for the second meiotic division. Interkinesis follows telophase I; however, many plants skip telophase I and interkinesis, going immediately into prophase II. Each chromosome still consists of two chromatids. In this stage other organelle number may also increase. Document 1::: In cell biology, the spindle apparatus is the cytoskeletal structure of eukaryotic cells that forms during cell division to separate sister chromatids between daughter cells. It is referred to as the mitotic spindle during mitosis, a process that produces genetically identical daughter cells, or the meiotic spindle during meiosis, a process that produces gametes with half the number of chromosomes of the parent cell. Besides chromosomes, the spindle apparatus is composed of hundreds of proteins. Microtubules comprise the most abundant components of the machinery. Spindle structure Attachment of microtubules to chromosomes is mediated by kinetochores, which actively monitor spindle formation and prevent premature anaphase onset. Microtubule polymerization and depolymerization dynamic drive chromosome congression. Depolymerization of microtubules generates tension at kinetochores; bipolar attachment of sister kinetochores to microtubules emanating from opposite cell poles couples opposing tension forces, aligning chromosomes at the cell equator and poising them for segregation to daughter cells. Once every chromosome is bi-oriented, anaphase commences and cohesin, which couples sister chromatids, is severed, permitting the transit of the sister chromatids to opposite poles. The cellular spindle apparatus includes the spindle microtubules, associated proteins, which include kinesin and dynein molecular motors, condensed chromosomes, and any centrosomes or asters that may be present at the spindle poles depending on the cell type. The spindle apparatus is vaguely ellipsoid in cross section and tapers at the ends. In the wide middle portion, known as the spindle midzone, antiparallel microtubules are bundled by kinesins. At the pointed ends, known as spindle poles, microtubules are nucleated by the centrosomes in most animal cells. Acentrosomal or anastral spindles lack centrosomes or asters at the spindle poles, respectively, and occur for example during female meio Document 2::: Microgametogenesis is the process in plant reproduction where a microgametophyte develops in a pollen grain to the three-celled stage of its development. In flowering plants it occurs with a microspore mother cell inside the anther of the plant. When the microgametophyte is first formed inside the pollen grain four sets of fertile cells called sporogenous cells are apparent. These cells are surrounded by a wall of sterile cells called the tapetum, which supplies food to the cell and eventually becomes the cell wall for the pollen grain. These sets of sporogenous cells eventually develop into diploid microspore mother cells. These microspore mother cells, also called microsporocytes, then undergo meiosis and become four microspore haploid cells. 
These new microspore cells then undergo mitosis and form a tube cell and a generative cell. The generative cell then undergoes mitosis one more time to form two male gametes, also called sperm. See also Gametogenesis Document 3::: Sexual reproduction is a type of reproduction that involves a complex life cycle in which a gamete (haploid reproductive cells, such as a sperm or egg cell) with a single set of chromosomes combines with another gamete to produce a zygote that develops into an organism composed of cells with two sets of chromosomes (diploid). This is typical in animals, though the number of chromosome sets and how that number changes in sexual reproduction varies, especially among plants, fungi, and other eukaryotes. Sexual reproduction is the most common life cycle in multicellular eukaryotes, such as animals, fungi and plants. Sexual reproduction also occurs in some unicellular eukaryotes. Sexual reproduction does not occur in prokaryotes, unicellular organisms without cell nuclei, such as bacteria and archaea. However, some processes in bacteria, including bacterial conjugation, transformation and transduction, may be considered analogous to sexual reproduction in that they incorporate new genetic information. Some proteins and other features that are key for sexual reproduction may have arisen in bacteria, but sexual reproduction is believed to have developed in an ancient eukaryotic ancestor. In eukaryotes, diploid precursor cells divide to produce haploid cells in a process called meiosis. In meiosis, DNA is replicated to produce a total of four copies of each chromosome. This is followed by two cell divisions to generate haploid gametes. After the DNA is replicated in meiosis, the homologous chromosomes pair up so that their DNA sequences are aligned with each other. During this period before cell divisions, genetic information is exchanged between homologous chromosomes in genetic recombination. Homologous chromosomes contain highly similar but not identical information, and by exchanging similar but not identical regions, genetic recombination increases genetic diversity among future generations. During sexual reproduction, two haploid gametes combine into one diploid ce Document 4::: A kinetochore (, ) is a disc-shaped protein structure associated with duplicated chromatids in eukaryotic cells where the spindle fibers attach during cell division to pull sister chromatids apart. The kinetochore assembles on the centromere and links the chromosome to microtubule polymers from the mitotic spindle during mitosis and meiosis. The term kinetochore was first used in a footnote in a 1934 Cytology book by Lester W. Sharp and commonly accepted in 1936. Sharp's footnote reads: "The convenient term kinetochore (= movement place) has been suggested to the author by J. A. Moore", likely referring to John Alexander Moore who had joined Columbia University as a freshman in 1932. Monocentric organisms, including vertebrates, fungi, and most plants, have a single centromeric region on each chromosome which assembles a single, localized kinetochore. Holocentric organisms, such as nematodes and some plants, assemble a kinetochore along the entire length of a chromosome. Kinetochores start, control, and supervise the striking movements of chromosomes during cell division. During mitosis, which occurs after the amount of DNA is doubled in each chromosome (while maintaining the same number of chromosomes) in S phase, two sister chromatids are held together by a centromere. 
Each chromatid has its own kinetochore; the sister kinetochores face in opposite directions and attach to opposite poles of the mitotic spindle apparatus. Following the transition from metaphase to anaphase, the sister chromatids separate from each other, and the individual kinetochores on each chromatid drive their movement to the spindle poles that will define the two new daughter cells. The kinetochore is therefore essential for the chromosome segregation that is classically associated with mitosis and meiosis. Structure of Kinetochore The kinetochore contains two regions: an inner kinetochore, which is tightly associated with the centromere DNA and assembled in a specialized form of chromatin that persists t
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What must replicate in the cell cycle before meiosis I takes place?
A. DNA
B. sperm
C. cell walls
D. meiotic fluid
Answer:
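A small sketch of the ploidy and DNA-content bookkeeping behind the answer, using the standard textbook C-value convention for a cell that starts in G1 as 2n, 2C (the stage labels are general convention, not values from the documents above):

# Chromosome sets (n) and DNA content (C) per cell at each checkpoint.
stages = [
    ("G1, before S phase",             "2n", "2C"),
    ("after S phase (DNA replicated)", "2n", "4C"),  # replication must precede meiosis I
    ("after meiosis I",                "1n", "2C"),  # homologous chromosomes separated
    ("after meiosis II",               "1n", "1C"),  # sister chromatids separated
]

for stage, n, c in stages:
    print(f"{stage:34s} {n} chromosome sets, {c} DNA content")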
sciq-5670
multiple_choice
Color, temperature, and solubility are examples of what type of property?
[ "susceptible", "intensive", "minimal", "severe" ]
B
Relavent Documents: Document 0::: A physical property is any property that is measurable, involved in the physical system, intensity on the object's state and behavior. The changes in the physical properties of a system can be used to describe its changes between momentary states. A quantifiable physical property is called physical quantity. Measurable physical quantities are often referred to as observables. Some physical properties are qualitative, such as shininess, brittleness, etc.; some general qualitative properties admit more specific related quantitative properties, such as in opacity, hardness, ductility,viscosity, etc. Physical properties are often characterized as intensive and extensive properties. An intensive property does not depend on the size or extent of the system, nor on the amount of matter in the object, while an extensive property shows an additive relationship. These classifications are in general only valid in cases when smaller subdivisions of the sample do not interact in some physical or chemical process when combined. Properties may also be classified with respect to the directionality of their nature. For example, isotropic properties do not change with the direction of observation, and anisotropic properties do have spatial variance. It may be difficult to determine whether a given property is a material property or not. Color, for example, can be seen and measured; however, what one perceives as color is really an interpretation of the reflective properties of a surface and the light used to illuminate it. In this sense, many ostensibly physical properties are called supervenient. A supervenient property is one which is actual, but is secondary to some underlying reality. This is similar to the way in which objects are supervenient on atomic structure. A cup might have the physical properties of mass, shape, color, temperature, etc., but these properties are supervenient on the underlying atomic structure, which may in turn be supervenient on an underlying quan Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution. The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible"). The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first. The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy. Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears. The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property de Document 3::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. 
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation. Magnetism Ferro-magnetic materials can become magnetic. The process is reve Document 4::: A material property is an intensive property of a material, i.e., a physical property or chemical property that does not depend on the amount of the material. These quantitative properties may be used as a metric by which the benefits of one material versus another can be compared, thereby aiding in materials selection. A property having a fixed value for a given material or substance is called material constant or constant of matter. (Material constants should not be confused with physical constants, that have a universal character.) A material property may also be a function of one or more independent variables, such as temperature. Materials properties often vary to some degree according to the direction in the material in which they are measured, a condition referred to as anisotropy. Materials properties that relate to different physical phenomena often behave linearly (or approximately so) in a given operating range . Modeling them as linear functions can significantly simplify the differential constitutive equations that are used to describe the property. Equations describing relevant materials properties are often used to predict the attributes of a system. The properties are measured by standardized test methods. Many such methods have been documented by their respective user communities and published through the Internet; see ASTM International. Acoustical properties Acoustical absorption Speed of sound Sound reflection Sound transfer Third order elasticity (Acoustoelastic effect) Atomic properties Atomic mass: (applies to each element) the average mass of the atoms of an element, in daltons (Da), a.k.a. atomic mass units (amu). Atomic number: (applies to individual atoms or pure elements) the number of protons in each nucleus Relative atomic mass, a.k.a. atomic weight: (applies to individual isotopes or specific mixtures of isotopes of a given element) (no units) Standard atomic weight: the average relative atomic mass of a typical sample of the ele The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Color, temperature, and solubility are examples of what type of property? A. susceptible B. intensive C. minimal D. severe Answer:
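A minimal sketch of the operational test for an intensive property: scale the amount of material and check whether the value changes. The sample numbers below are made up for illustration:

# A water sample: mass and volume are extensive, while their ratio
# (density), like temperature, color, or solubility, is intensive.
mass_g, volume_ml = 100.0, 100.0
density = mass_g / volume_ml                      # 1.0 g/mL

mass2_g, volume2_ml = 2 * mass_g, 2 * volume_ml   # double the sample
assert mass2_g != mass_g                          # extensive: scales with amount
assert mass2_g / volume2_ml == density            # intensive: unchanged
print("density is still", mass2_g / volume2_ml, "g/mL after doubling")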
ai2_arc-687
multiple_choice
Which equation is correctly balanced for hydrogen and oxygen reacting to form water?
[ "H_{2} + O_{2} -> H_{2}O", "2H_{2} + O_{2} -> 2H_{2}O", "4H + O_{2} -> 2H_{2}O", "H_{2} + O -> H_{2}O" ]
B
Relavent Documents: Document 0::: An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may be in fact a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes. In a unimolecular elementary reaction, a molecule dissociates or isomerises to form the products(s) At constant temperature, the rate of such a reaction is proportional to the concentration of the species In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, and , react together to form the product(s) The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species and The rate expression for an elementary bimolecular reaction is sometimes referred to as the Law of Mass Action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction. This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments. According to collision theory the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis-Menten approximations. Notes Chemical kinetics Phy Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: The Hatta number (Ha) was developed by Shirôji Hatta, who taught at Tohoku University. It is a dimensionless parameter that compares the rate of reaction in a liquid film to the rate of diffusion through the film. For a second order reaction (), the maximum rate of reaction assumes that the liquid film is saturated with gas at the interfacial concentration ; thus, the maximum rate of reaction is . For a reaction order in and order in : It is an important parameter used in Chemical Reaction Engineering. Document 3::: A chemical equation is the symbolic representation of a chemical reaction in the form of symbols and chemical formulas. The reactant entities are given on the left-hand side and the product entities are on the right-hand side with a plus sign between the entities in both the reactants and the products, and an arrow that points towards the products to show the direction of the reaction. The chemical formulas may be symbolic, structural (pictorial diagrams), or intermixed. The coefficients next to the symbols and formulas of entities are the absolute values of the stoichiometric numbers. The first chemical equation was diagrammed by Jean Beguin in 1615. Structure A chemical equation (see an example below) consists of a list of reactants (the starting substances) on the left-hand side, an arrow symbol, and a list of products (substances formed in the chemical reaction) on the right-hand side. Each substance is specified by its chemical formula, optionally preceded by a number called stoichiometric coefficient. The coefficient specifies how many entities (e.g. molecules) of that substance are involved in the reaction on a molecular basis. If not written explicitly, the coefficient is equal to 1. Multiple substances on any side of the equation are separated from each other by a plus sign. As an example, the equation for the reaction of hydrochloric acid with sodium can be denoted: Given the formulas are fairly simple, this equation could be read as "two H-C-L plus two N-A yields two N-A-C-L and H two." Alternately, and in general for equations involving complex chemicals, the chemical formulas are read using IUPAC nomenclature, which could verbalise this equation as "two hydrochloric acid molecules and two sodium atoms react to form two formula units of sodium chloride and a hydrogen gas molecule." Reaction types Different variants of the arrow symbol are used to denote the type of a reaction: {| | style="text-align: center; padding-right: 0.5em;" | -> || net forwa Document 4::: The hydrogen cycle consists of hydrogen exchanges between biotic (living) and abiotic (non-living) sources and sinks of hydrogen-containing compounds. Hydrogen (H) is the most abundant element in the universe. 
On Earth, common H-containing inorganic molecules include water (H2O), hydrogen gas (H2), hydrogen sulfide (H2S), and ammonia (NH3). Many organic compounds also contain H atoms, such as hydrocarbons and organic matter. Given the ubiquity of hydrogen atoms in inorganic and organic chemical compounds, the hydrogen cycle is focused on molecular hydrogen, H2. As a consequence of microbial metabolisms or naturally occurring rock-water interactions, hydrogen gas can be created. Other bacteria may then consume free H2, which may also be oxidised photochemically in the atmosphere or lost to space. Hydrogen is also thought to be an important reactant in pre-biotic chemistry and the early evolution of life on Earth, and potentially elsewhere in the Solar System. Abiotic cycles Sources Abiotic sources of hydrogen gas include water-rock and photochemical reactions. Exothermic serpentinization reactions between water and olivine minerals produce H2 in the marine or terrestrial subsurface. In the ocean, hydrothermal vents erupt magma and altered seawater fluids including abundant H2, depending on the temperature regime and host rock composition. Molecular hydrogen can also be produced through photooxidation (via solar UV radiation) of some mineral species such as siderite in anoxic aqueous environments. This may have been an important process in the upper regions of early Earth's Archaean oceans. Sinks Because H2 is the lightest element, atmospheric H2 can readily be lost to space via Jeans escape, an irreversible process that drives Earth's net mass loss. Photolysis of heavier compounds not prone to escape, such as CH4 or H2O, can also liberate H2 from the upper atmosphere and contribute to this process. Another major sink of free atmospheric H2 is photochemical oxi The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which equation is correctly balanced for hydrogen and oxygen reacting to form water? A. H_{2} + O_{2} -> H_{2}O B. 2H_{2} + O_{2} -> 2H_{2}O C. 4H + O_{2} -> 2H_{2}O D. H_{2} + O -> H_{2}O Answer:
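A minimal sketch of the check behind the answer: a balanced equation conserves the atom count of every element across the arrow. The formulas are hand-encoded as element-count maps rather than parsed from strings:

from collections import Counter

# 2 H2 + O2 -> 2 H2O, each term as (stoichiometric coefficient, formula).
reactants = [(2, {"H": 2}), (1, {"O": 2})]
products = [(2, {"H": 2, "O": 1})]

def atoms(side):
    total = Counter()
    for coeff, formula in side:
        for element, count in formula.items():
            total[element] += coeff * count
    return total

left, right = atoms(reactants), atoms(products)
print(left, right)   # both: Counter({'H': 4, 'O': 2})
assert left == right, "not balanced"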
sciq-8878
multiple_choice
Liquefaction occurs when the molecules of a gas are cooled to the point where they no longer possess sufficient kinetic energy to overcome what?
[ "intermolecular attractive forces", "intermolecular gravitational forces", "bonding attractive forces", "gravitational attractive forces" ]
A
Relavent Documents: Document 0::: In materials science, liquefaction is a process that generates a liquid from a solid or a gas or that generates a non-liquid phase which behaves in accordance with fluid dynamics. It occurs both naturally and artificially. As an example of the latter, a "major commercial application of liquefaction is the liquefaction of air to allow separation of the constituents, such as oxygen, nitrogen, and the noble gases." Another is the conversion of solid coal into a liquid form usable as a substitute for liquid fuels. Geology In geology, soil liquefaction refers to the process by which water-saturated, unconsolidated sediments are transformed into a substance that acts like a liquid, often in an earthquake. Soil liquefaction was blamed for building collapses in the city of Palu, Indonesia in October 2018. In a related phenomenon, liquefaction of bulk materials in cargo ships may cause a dangerous shift in the load. Physics and chemistry In physics and chemistry, the phase transitions from solid and gas to liquid (melting and condensation, respectively) may be referred to as liquefaction. The melting point (sometimes called liquefaction point) is the temperature and pressure at which a solid becomes a liquid. In commercial and industrial situations, the process of condensing a gas to liquid is sometimes referred to as liquefaction of gases. Coal Coal liquefaction is the production of liquid fuels from coal using a variety of industrial processes. Dissolution Liquefaction is also used in commercial and industrial settings to refer to mechanical dissolution of a solid by mixing, grinding or blending with a liquid. Food preparation In kitchen or laboratory settings, solids may be chopped into smaller parts sometimes in combination with a liquid, for example in food preparation or laboratory use. This may be done with a blender, or liquidiser in British English. Irradiation Liquefaction of silica and silicate glasses occurs on electron beam irradiation of nanos Document 1::: In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution. The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible"). The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first. The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy. Under certain conditions, the concentration of the solute can exceed its usual solubility limit. 
The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears. The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property de Document 2::: This is a list of gases at standard conditions, which means substances that boil or sublime at or below and 1 atm pressure and are reasonably stable. List This list is sorted by boiling point of gases in ascending order, but can be sorted on different values. "sub" and "triple" refer to the sublimation point and the triple point, which are given in the case of a substance that sublimes at 1 atm; "dec" refers to decomposition. "~" means approximately. Known as gas The following list has substances known to be gases, but with an unknown boiling point. Fluoroamine Trifluoromethyl trifluoroethyl trioxide CF3OOOCF2CF3 boils between 10 and 20° Bis-trifluoromethyl carbonate boils between −10 and +10° possibly +12, freezing −60° Difluorodioxirane boils between −80 and −90°. Difluoroaminosulfinyl fluoride F2NS(O)F is a gas but decomposes over several hours Trifluoromethylsulfinyl chloride CF3S(O)Cl Nitrosyl cyanide ?−20° blue-green gas 4343-68-4 Thiazyl chloride NSCl greenish yellow gas; trimerises. Document 3::: The Stefan flow, occasionally called Stefan's flow, is a transport phenomenon concerning the movement of a chemical species by a flowing fluid (typically in the gas phase) that is induced to flow by the production or removal of the species at an interface. Any process that adds the species of interest to or removes it from the flowing fluid may cause the Stefan flow, but the most common processes include evaporation, condensation, chemical reaction, sublimation, ablation, adsorption, absorption, and desorption. It was named after the Slovenian physicist, mathematician, and poet Josef Stefan for his early work on calculating evaporation rates. The Stefan flow is distinct from diffusion as described by Fick's law, but diffusion almost always also occurs in multi-species systems that are experiencing the Stefan flow. In systems undergoing one of the species addition or removal processes mentioned previously, the addition or removal generates a mean flow in the flowing fluid as the fluid next to the interface is displaced by the production or removal of additional fluid by the processes occurring at the interface. The transport of the species by this mean flow is the Stefan flow. When concentration gradients of the species are also present, diffusion transports the species relative to the mean flow. The total transport rate of the species is then given by a summation of the Stefan flow and diffusive contributions. An example of the Stefan flow occurs when a droplet of liquid evaporates in air. In this case, the vapor/air mixture surrounding the droplet is the flowing fluid, and liquid/vapor boundary of the droplet is the interface. As heat is absorbed by the droplet from the environment, some of the liquid evaporates into vapor at the surface of the droplet, and flows away from the droplet as it is displaced by additional vapor evaporating from the droplet. 
This process causes the flowing medium to move away from the droplet at some mean speed that is dependent on Document 4::: Chemisorption is a kind of adsorption which involves a chemical reaction between the surface and the adsorbate. New chemical bonds are generated at the adsorbent surface. Examples include macroscopic phenomena that can be very obvious, like corrosion, and subtler effects associated with heterogeneous catalysis, where the catalyst and reactants are in different phases. The strong interaction between the adsorbate and the substrate surface creates new types of electronic bonds. In contrast with chemisorption is physisorption, which leaves the chemical species of the adsorbate and surface intact. It is conventionally accepted that the energetic threshold separating the binding energy of "physisorption" from that of "chemisorption" is about 0.5 eV per adsorbed species. Due to specificity, the nature of chemisorption can greatly differ, depending on the chemical identity and the surface structural properties. The bond between the adsorbate and adsorbent in chemisorption is either ionic or covalent. Uses An important example of chemisorption is in heterogeneous catalysis which involves molecules reacting with each other via the formation of chemisorbed intermediates. After the chemisorbed species combine (by forming bonds with each other) the product desorbs from the surface. Self-assembled monolayers Self-assembled monolayers (SAMs) are formed by chemisorbing reactive reagents with metal surfaces. A famous example involves thiols (RS-H) adsorbing onto the surface of gold. This process forms strong Au-SR bonds and releases H2. The densely packed SR groups protect the surface. Gas-surface chemisorption Adsorption kinetics As an instance of adsorption, chemisorption follows the adsorption process. The first stage is for the adsorbate particle to come into contact with the surface. The particle needs to be trapped onto the surface by not possessing enough energy to leave the gas-surface potential well. If it elastically collides with the surface, then it would The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Liquefaction ccurs when the molecules of a gas are cooled to the point where they no longer possess sufficient kinetic energy to overcome what? A. intermolecular attractive forces B. intermolecular gravitational forces C. bonding attractive forces D. gravitational attractive forces Answer:
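A rough sketch of the energy comparison the answer rests on: the mean translational kinetic energy of a gas molecule is (3/2)kT, and liquefaction becomes possible once that energy no longer dwarfs the intermolecular attraction. The nitrogen well depth used below is an approximate literature value (a Lennard-Jones epsilon/k of about 95 K), not a figure from the documents above:

k_B = 1.380649e-23    # Boltzmann constant, J/K
eps_n2 = 95.0 * k_B   # approximate depth of N2's intermolecular potential well, J

for T in (300, 150, 77):      # room temperature down to N2's boiling point
    mean_ke = 1.5 * k_B * T   # mean translational kinetic energy per molecule
    print(f"T = {T:3d} K: kinetic energy / attraction ~ {mean_ke / eps_n2:.1f}")

# The ratio falls from about 4.7 at 300 K to about 1.2 at 77 K, where the
# attractive forces can finally hold the molecules together as a liquid.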
sciq-732
multiple_choice
In what unit is speed usually measured?
[ "radians", "miles per hour", "Celsius", "calories" ]
B
Relavent Documents: Document 0::: Quantity calculus is the formal method for describing the mathematical relations between abstract physical quantities. Its roots can be traced to Fourier's concept of dimensional analysis (1822). The basic axiom of quantity calculus is Maxwell's description of a physical quantity as the product of a "numerical value" and a "reference quantity" (i.e. a "unit quantity" or a "unit of measurement"). De Boer summarized the multiplication, division, addition, association and commutation rules of quantity calculus and proposed that a full axiomatization has yet to be completed. Measurements are expressed as products of a numeric value with a unit symbol, e.g. "12.7 m". Unlike algebra, the unit symbol represents a measurable quantity such as a meter, not an algebraic variable. A careful distinction needs to be made between abstract quantities and measurable quantities. The multiplication and division rules of quantity calculus are applied to SI base units (which are measurable quantities) to define SI derived units, including dimensionless derived units, such as the radian (rad) and steradian (sr) which are useful for clarity, although they are both algebraically equal to 1. Thus there is some disagreement about whether it is meaningful to multiply or divide units. Emerson suggests that if the units of a quantity are algebraically simplified, they then are no longer units of that quantity. Johansson proposes that there are logical flaws in the application of quantity calculus, and that the so-called dimensionless quantities should be understood as "unitless quantities". How to use quantity calculus for unit conversion and keeping track of units in algebraic manipulations is explained in the handbook Quantities, Units and Symbols in Physical Chemistry. Notes Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. 
In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: This article gives a list of conversion factors for several physical quantities. A number of different units (some only of historical interest) are shown and expressed in terms of the corresponding SI unit. Conversions between units in the metric system are defined by their prefixes (for example, 1 kilogram = 1000 grams, 1 milligram = 0.001 grams) and are thus not listed in this article. Exceptions are made if the unit is commonly known by another name (for example, 1 micron = 10−6 metre). Within each table, the units are listed alphabetically, and the SI units (base or derived) are highlighted. The following quantities are considered: length, area, volume, plane angle, solid angle, mass, density, time, frequency, velocity, volumetric flow rate, acceleration, force, pressure (or mechanical stress), torque (or moment of force), energy, power (or heat flow rate), action, dynamic viscosity, kinematic viscosity, electric current, electric charge, electric dipole, electromotive force (or electric potential difference), electrical resistance, capacitance, magnetic flux, magnetic flux density, inductance, temperature, information entropy, luminous intensity, luminance, luminous flux, illuminance, radiation. Length Area Volume Plane angle Solid angle Mass Notes: See Weight for detail of mass/weight distinction and conversion. Avoirdupois is a system of mass based on a pound of 16 ounces, while Troy weight is the system of mass where 12 troy ounces equals one troy pound. The symbol gn is used to denote standard gravity in order to avoid confusion with the (upright) g symbol for gram. Density Time Frequency Speed or velocity A velocity consists of a speed combined with a direction; the speed part of the velocity takes units of speed. Flow (volume) Acceleration Force Pressure or mechanical stress Torque or moment of force Energy Power or heat flow rate Action Dynamic viscosity Kinematic viscosity Electric current Electric charge Electric dipole Elec Document 3::: The Unified Code for Units of Measure (UCUM) is a system of codes for unambiguously representing measurement units. Its primary purpose is machine-to-machine communication rather than communication between humans. The code set includes all units defined in ISO 1000, ISO 2955-1983, ANSI X3.50-1986, HL7 and ENV 12435, and explicitly and verifiably addresses the naming conflicts and ambiguities in those standards to resolve them. It provides for representations of units in 7 bit ASCII for machine-to-machine communication, with unambiguous mapping between case-sensitive and case-insensitive representations. A reference open-source implementation is available as a Java applet. Also an OSGi based implementation at Eclipse Foundation. Base units Units are represented in UCUM with reference to a set of seven base units. The UCUM base units are the metre for measurement of length, the second for time, the gram for mass, the coulomb for charge, the kelvin for temperature, the candela for luminous intensity, and the radian for plane angle. The UCUM base units form a set of mutually independent dimensions as required by dimensional analysis. 
Some of the UCUM base units are different from the SI base units. UCUM is compatible with, but not isomorphic with SI. There are four differences between the two sets of base units: The gram is the base unit of mass instead of the kilogram, since in UCUM base units do not have prefixes. Electric charge is the base quantity for electromagnetic phenomena instead of electric current, since the elementary charge of electrons is more fundamental physically. The mole is dimensionless in UCUM, since it can be defined in terms of the Avogadro number. The radian is a distinct base unit for plane angle, to distinguish angular velocity from rotational frequency and to distinguish the radian from the steradian for solid angles. Metric and non-metric units Each unit represented in UCUM is identified as either "metric" or "non-metric". Metric un Document 4::: °WK or degrees Windisch-Kolbach is a unit for measuring the diastatic power of malt, named after the German brewer Wilhelm Windisch and the Luxembourg brewer Paul Kolbach. It is a common unit in beer brewing (especially in Europe) that measures the ability of enzymes in malt to reduce starch to sugar (maltose). It is defined as the amount of maltose formed by 100 g of malt in 30 min at 20 °C. Degrees Lintner is a unit used in the United States for the same purpose. The conversion is as follows: °Lintner = (°WK + 16) / 3.5. 334 °WK = 3.014×10−7 Katal The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In what unit is speed usually measured? A. radians B. miles per hour C. Celsius D. calories Answer:
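The quantity-calculus and conversion-factor passages in the record above reduce to one mechanical rule: a measurement is a numerical value multiplied by a unit, and units multiply and divide like algebraic symbols. The sketch below illustrates that rule in Python; it is a minimal illustration, and the names Quantity, MILE and HOUR are invented for this example rather than taken from any standard library.

```python
# Illustrative sketch only: Quantity, MILE and HOUR are made-up names,
# not part of any standard library.
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    """A numerical value paired with exponents of the SI base units m and s."""
    value: float
    m: int = 0  # exponent of the metre
    s: int = 0  # exponent of the second

    def __mul__(self, other):
        # Multiplying quantities multiplies values and adds unit exponents.
        return Quantity(self.value * other.value, self.m + other.m, self.s + other.s)

    def __truediv__(self, other):
        # Dividing quantities divides values and subtracts unit exponents.
        return Quantity(self.value / other.value, self.m - other.m, self.s - other.s)

MILE = Quantity(1609.344, m=1)  # 1 international mile, expressed in metres
HOUR = Quantity(3600.0, s=1)    # 1 hour, expressed in seconds

speed = Quantity(60.0) * MILE / HOUR
print(speed)  # Quantity(value=26.8224, m=1, s=-1), i.e. about 26.82 m/s
```

Tracking base-unit exponents this way is exactly the bookkeeping that the multiplication and division rules of quantity calculus describe, and it is how the question's "miles per hour" resolves to the SI form of about 26.82 m/s.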
sciq-6583
multiple_choice
What prevents solutes that have accumulated in the xylem from leaking back into the soil solution?
[ "the exodermis", "the altostratus", "the exoskeleton", "the endodermis" ]
D
Relavent Documents: Document 0::: Xylem is one of the two types of transport tissue in vascular plants, the other being phloem. The basic function of the xylem is to transport water from roots to stems and leaves, but it also transports nutrients. The word xylem is derived from the Ancient Greek word (xylon), meaning "wood"; the best-known xylem tissue is wood, though it is found throughout a plant. The term was introduced by Carl Nägeli in 1858. Structure The most distinctive xylem cells are the long tracheary elements that transport water. Tracheids and vessel elements are distinguished by their shape; vessel elements are shorter, and are connected together into long tubes that are called vessels. Xylem also contains two other type of cells: parenchyma and fibers. Xylem can be found: in vascular bundles, present in non-woody plants and non-woody parts of woody plants in secondary xylem, laid down by a meristem called the vascular cambium in woody plants as part of a stelar arrangement not divided into bundles, as in many ferns. In transitional stages of plants with secondary growth, the first two categories are not mutually exclusive, although usually a vascular bundle will contain primary xylem only. The branching pattern exhibited by xylem follows Murray's law. Primary and secondary xylem Primary xylem is formed during primary growth from procambium. It includes protoxylem and metaxylem. Metaxylem develops after the protoxylem but before secondary xylem. Metaxylem has wider vessels and tracheids than protoxylem. Secondary xylem is formed during secondary growth from vascular cambium. Although secondary xylem is also found in members of the gymnosperm groups Gnetophyta and Ginkgophyta and to a lesser extent in members of the Cycadophyta, the two main groups in which secondary xylem can be found are: conifers (Coniferae): there are approximately 600 known species of conifers. All species have secondary xylem, which is relatively uniform in structure throughout this group. Many conife Document 1::: Guttation is the exudation of drops of xylem sap on the tips or edges of leaves of some vascular plants, such as grasses, and a number of fungi, which are not plants but were previously categorized as such and studied as part of botany. Process At night, transpiration usually does not occur, because most plants have their stomata closed. When there is a high soil moisture level, water will enter plant roots, because the water potential of the roots is lower than in the soil solution. The water will accumulate in the plant, creating a slight root pressure. The root pressure forces some water to exude through special leaf tip or edge structures, hydathodes or water glands, forming drops. Root pressure provides the impetus for this flow, rather than transpirational pull. Guttation is most noticeable when transpiration is suppressed and the relative humidity is high, such as during the night. Guttation formation in fungi is important for visual identification, but the process causing it is unknown. However, due to its association with stages of rapid growth in the life cycle of fungi, it has been hypothesised that during rapid metabolism excess water produced by respiration is exuded. Chemical content Guttation fluid may contain a variety of organic and inorganic compounds, mainly sugars, and potassium. On drying, a white crust remains on the leaf surface. Girolami et al. 
(2009) found that guttation drops from corn plants germinated from neonicotinoid-coated seeds could contain amounts of insecticide consistently higher than 10 mg/L, and up to 200 mg/L for the neonicotinoid imidacloprid. Concentrations this high are near those of active ingredients applied in field sprays for pest control and sometimes even higher. It was found that when bees consume guttation drops collected from plants grown from neonicotinoid-coated seeds, they die within a few minutes. This phenomenon may be a factor in bee deaths and, consequently, colony collapse disorder. Nitrogen levels Document 2::: The soil-plant-atmosphere continuum (SPAC) is the pathway for water moving from soil through plants to the atmosphere. Continuum in the description highlights the continuous nature of water connection through the pathway. The low water potential of the atmosphere, and relatively higher (i.e. less negative) water potential inside leaves, leads to a diffusion gradient across the stomatal pores of leaves, drawing water out of the leaves as vapour. As water vapour transpires out of the leaf, further water molecules evaporate off the surface of mesophyll cells to replace the lost molecules since water in the air inside leaves is maintained at saturation vapour pressure. Water lost at the surface of cells is replaced by water from the xylem, which due to the cohesion-tension properties of water in the xylem of plants pulls additional water molecules through the xylem from the roots toward the leaf. Components The transport of water along this pathway occurs in components, variously defined among scientific disciplines: Soil physics characterizes water in soil in terms of tension, Physiology of plants and animals characterizes water in organisms in terms of diffusion pressure deficit, and Meteorology uses vapour pressure or relative humidity to characterize atmospheric water. SPAC integrates these components and is defined as a: ...concept recognising that the field with all its components (soil, plant, animals and the ambient atmosphere taken together) constitutes a physically integrated, dynamic system in which the various flow processes involving energy and matter occur simultaneously and independently like links in the chain. This characterises the state of water in different components of the SPAC as expressions of the energy level or water potential of each. Modelling of water transport between components relies on SPAC, as do studies of water potential gradients between segments. See also Ecohydrology Evapotranspiration Hydraulic redistribution; a p Document 3::: Hydraulic redistribution is a passive mechanism where water is transported from moist to dry soils via subterranean networks. It occurs in vascular plants that commonly have roots in both wet and dry soils, especially plants with both taproots that grow vertically down to the water table, and lateral roots that sit close to the surface. In the late 1980s, there was a movement to understand the full extent of these subterranean networks. Since then it was found that vascular plants are assisted by fungal networks which grow on the root system to promote water redistribution. Process Hot, dry periods, when the surface soil dries out to the extent that the lateral roots exude whatever water they contain, will result in the death of such lateral roots unless the water is replaced. Similarly, under extremely wet conditions when lateral roots are inundated by flood waters, oxygen deprivation will also lead to root peril. 
In plants that exhibit hydraulic redistribution, there are xylem pathways from the taproots to the laterals, such that the absence or abundance of water at the laterals creates a pressure potential analogous to that of transpirational pull. In drought conditions, ground water is drawn up through the taproot to the laterals and exuded into the surface soil, replenishing that which was lost. Under flooding conditions, plant roots perform a similar function in the opposite direction. Though often referred to as hydraulic lift, movement of water by the plant roots has been shown to occur in any direction. This phenomenon has been documented in over sixty plant species spanning a variety of plant types (from herbs and grasses to shrubs and trees) and over a range of environmental conditions (from the Kalahari Desert to the Amazon Rainforest). Causes The movement of this water can be explained by a water transport theory throughout a plant. This well-established water transport theory is called the cohesion-tension theory. In brief, it explains the movement Document 4::: Xylan (; ) (CAS number: 9014-63-5) is a type of hemicellulose, a polysaccharide consisting mainly of xylose residues. It is found in plants, in the secondary cell walls of dicots and all cell walls of grasses. Xylan is the third most abundant biopolymer on Earth, after cellulose and chitin. Composition Xylans are polysaccharides made up of β-1,4-linked xylose (a pentose sugar) residues with side branches of α-arabinofuranose and/or α-glucuronic acids. On the basis of substituted groups xylan can be categorized into three classes i) glucuronoxylan (GX) ii) neutral arabinoxylan (AX) and iii) glucuronoarabinoxylan (GAX). In some cases contribute to cross-linking of cellulose microfibrils and lignin through ferulic acid residues. Occurrence Plant cell structure Xylans play an important role in the integrity of the plant cell wall and increase cell wall recalcitrance to enzymatic digestion; thus, they help plants to defend against herbivores and pathogens (biotic stress). Xylan also plays a significant role in plant growth and development. Typically, xylans content in hardwoods is 10-35%, whereas they are 10-15% in softwoods. The main xylan component in hardwoods is O-acetyl-4-O-methylglucuronoxylan, whereas arabino-4-O-methylglucuronoxylans are a major component in softwoods. In general, softwood xylans differ from hardwood xylans by the lack of acetyl groups and the presence of arabinose units linked by α-(1,3)-glycosidic bonds to the xylan backbone. Algae Some macrophytic green algae contain xylan (specifically homoxylan) especially those within the Codium and Bryopsis genera where it replaces cellulose in the cell wall matrix. Similarly, it replaces the inner fibrillar cell-wall layer of cellulose in some red algae. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What prevents solutes that have accumulated in the xylem from leaking back into the soil solution? A. the exodermis B. the altostratus C. the exoskeleton D. the endodermis Answer:
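The records above lean on water potential gradients without writing them out. As standard plant-physiology background (an assumption of convention, not a formula quoted from the documents), the water potential of a solution decomposes as

$$\Psi = \Psi_s + \Psi_p,$$

where the solute (osmotic) potential $\Psi_s$ is negative and $\Psi_p$ is the pressure potential. Solutes accumulated in the xylem make $\Psi_s$ there more negative, so water follows inward from the soil and root pressure builds, as in guttation; the endodermis, sealed by its Casparian strips, is the barrier that keeps those solutes from diffusing back into the soil solution, which is the point of the question's answer D.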
sciq-3308
multiple_choice
The work-energy theorem states that the net work on a system equals the change in what type of energy?
[ "binary energy", "kinetic energy", "new energy", "residual energy" ]
B
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work. Description Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments. 
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics. Specialized branches include engineering optimization and engineering statistics. Engineering mathematics in tertiary educ Document 2::: Energy transformation, also known as energy conversion, is the process of changing energy from one form to another. In physics, energy is a quantity that provides the capacity to perform work or movement (e.g. lifting an object) or provides heat. In addition to being converted, according to the law of conservation of energy, energy is transferable to a different location or object, but it cannot be created or destroyed. The energy in many of its forms may be used in natural processes, or to provide some service to society such as heating, refrigeration, lighting or performing mechanical work to operate machines. For example, to heat a home, the furnace burns fuel, whose chemical potential energy is converted into thermal energy, which is then transferred to the home's air to raise its temperature. Limitations in the conversion of thermal energy Conversions to thermal energy from other forms of energy may occur with 100% efficiency. Conversion among non-thermal forms of energy may occur with fairly high efficiency, though there is always some energy dissipated thermally due to friction and similar processes. Sometimes the efficiency is close to 100%, such as when potential energy is converted to kinetic energy as an object falls in a vacuum. This also applies to the opposite case; for example, an object in an elliptical orbit around another body converts its kinetic energy (speed) into gravitational potential energy (distance from the other object) as it moves away from its parent body. When it reaches the furthest point, it will reverse the process, accelerating and converting potential energy into kinetic. Since space is a near-vacuum, this process has close to 100% efficiency. Thermal energy is unique because, in most cases, it cannot be converted to other forms of energy. Only a difference in the density of thermal/heat energy (temperature) can be used to perform work, and the efficiency of this conversion will be (much) less than 100%. This is because t Document 3::: In physics and chemistry, the law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be conserved over time. Energy can neither be created nor destroyed; rather, it can only be transformed or transferred from one form to another. For instance, chemical energy is converted to kinetic energy when a stick of dynamite explodes. If one adds up all forms of energy that were released in the explosion, such as the kinetic energy and potential energy of the pieces, as well as heat and sound, one will get the exact decrease of chemical energy in the combustion of the dynamite. Classically, conservation of energy was distinct from conservation of mass. However, special relativity shows that mass is related to energy and vice versa by E = mc², the equation representing mass–energy equivalence, and science now takes the view that mass-energy as a whole is conserved. 
Theoretically, this implies that any object with mass can itself be converted to pure energy, and vice versa. However, this is believed to be possible only under the most extreme of physical conditions, such as likely existed in the universe very shortly after the Big Bang or when black holes emit Hawking radiation. Given the stationary-action principle, conservation of energy can be rigorously proven by Noether's theorem as a consequence of continuous time translation symmetry; that is, from the fact that the laws of physics do not change over time. A consequence of the law of conservation of energy is that a perpetual motion machine of the first kind cannot exist; that is to say, no system without an external energy supply can deliver an unlimited amount of energy to its surroundings. Depending on the definition of energy, conservation of energy can arguably be violated by general relativity on the cosmological scale. History Ancient philosophers as far back as Thales of Miletus  550 BCE had inklings of the conservation of some underlying substance of which ev Document 4::: Energy flow is the flow of energy through living things within an ecosystem. All living organisms can be organized into producers and consumers, and those producers and consumers can further be organized into a food chain. Each of the levels within the food chain is a trophic level. In order to more efficiently show the quantity of organisms at each trophic level, these food chains are then organized into trophic pyramids. The arrows in the food chain show that the energy flow is unidirectional, with the head of an arrow indicating the direction of energy flow; energy is lost as heat at each step along the way. The unidirectional flow of energy and the successive loss of energy as it travels up the food web are patterns in energy flow that are governed by thermodynamics, which is the theory of energy exchange between systems. Trophic dynamics relates to thermodynamics because it deals with the transfer and transformation of energy (originating externally from the sun via solar radiation) to and among organisms. Energetics and the carbon cycle The first step in energetics is photosynthesis, wherein water and carbon dioxide from the air are taken in with energy from the sun, and are converted into oxygen and glucose. Cellular respiration is the reverse reaction, wherein oxygen and sugar are taken in and release energy as they are converted back into carbon dioxide and water. The carbon dioxide and water produced by respiration can be recycled back into plants. Energy loss can be measured either by efficiency (how much energy makes it to the next level), or by biomass (how much living material exists at those levels at one point in time, measured by standing crop). Of all the net primary productivity at the producer trophic level, in general only 10% goes to the next level, the primary consumers, then only 10% of that 10% goes on to the next trophic level, and so on up the food pyramid. Ecological efficiency may be anywhere from 5% to 20% depending on how efficient The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The work-energy theorem states that the net work on a system equals the change in what type of energy? A. binary energy B. kinetic energy C. new energy D. residual energy Answer:
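A one-line worked instance of the theorem named in the record above may help; this is a standard textbook computation, not something quoted from the retrieved documents:

$$W_{\text{net}} = \Delta KE = \tfrac{1}{2} m v_f^{2} - \tfrac{1}{2} m v_i^{2}.$$

For a 2 kg mass accelerated from rest to 3 m/s, $W_{\text{net}} = \tfrac{1}{2}(2)(3^2) - 0 = 9\ \text{J}$, regardless of which combination of forces supplied the work.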
sciq-3285
multiple_choice
Whether it's puppies or people, offspring and parents usually share many of what?
[ "insects", "traits", "clothes", "fruits" ]
B
Relavent Documents: Document 0::: Evolutionary developmental biology (informally, evo-devo) is a field of biological research that compares the developmental processes of different organisms to infer how developmental processes evolved. The field grew from 19th-century beginnings, where embryology faced a mystery: zoologists did not know how embryonic development was controlled at the molecular level. Charles Darwin noted that having similar embryos implied common ancestry, but little progress was made until the 1970s. Then, recombinant DNA technology at last brought embryology together with molecular genetics. A key early discovery was of homeotic genes that regulate development in a wide range of eukaryotes. The field is composed of multiple core evolutionary concepts. One is deep homology, the finding that dissimilar organs such as the eyes of insects, vertebrates and cephalopod molluscs, long thought to have evolved separately, are controlled by similar genes such as pax-6, from the evo-devo gene toolkit. These genes are ancient, being highly conserved among phyla; they generate the patterns in time and space which shape the embryo, and ultimately form the body plan of the organism. Another is that species do not differ much in their structural genes, such as those coding for enzymes; what does differ is the way that gene expression is regulated by the toolkit genes. These genes are reused, unchanged, many times in different parts of the embryo and at different stages of development, forming a complex cascade of control, switching other regulatory genes as well as structural genes on and off in a precise pattern. This multiple pleiotropic reuse explains why these genes are highly conserved, as any change would have many adverse consequences which natural selection would oppose. New morphological features and ultimately new species are produced by variations in the toolkit, either when genes are expressed in a new pattern, or when toolkit genes acquire additional functions. Another possibility Document 1::: In mathematics, the notion of a germ of an object in/on a topological space is an equivalence class of that object and others of the same kind that captures their shared local properties. In particular, the objects in question are mostly functions (or maps) and subsets. In specific implementations of this idea, the functions or subsets in question will have some property, such as being analytic or smooth, but in general this is not needed (the functions in question need not even be continuous); it is however necessary that the space on/in which the object is defined is a topological space, in order that the word local has some meaning. Name The name is derived from cereal germ in a continuation of the sheaf metaphor, as a germ is (locally) the "heart" of a function, as it is for a grain. Formal definition Basic definition Given a point x of a topological space X, and two maps f, g : X → Y (where Y is any set), then f and g define the same germ at x if there is a neighbourhood U of x such that restricted to U, f and g are equal; meaning that f(u) = g(u) for all u in U. Similarly, if S and T are any two subsets of X, then they define the same germ at x if there is again a neighbourhood U of x such that S ∩ U = T ∩ U. It is straightforward to see that defining the same germ at x is an equivalence relation (be it on maps or sets), and the equivalence classes are called germs (map-germs, or set-germs accordingly). The equivalence relation is usually written f ∼x g. Given a map f on X, then its germ at x is usually denoted [f]x. 
Similarly, the germ at x of a set S is written [S]x. Thus, [f]x = {g : g ∼x f}. A map germ at x in X that maps the point x in X to the point y in Y is denoted as f : (X, x) → (Y, y). When using this notation, f is then intended as an entire equivalence class of maps, using the same letter f for any representative map. Notice that two sets are germ-equivalent at x if and only if their characteristic functions are germ-equivalent at x: S ∼x T if and only if 1S ∼x 1T. More generally, maps need not be defined on all of X, and in particular they don't need to Document 2::: Eusociality evolved repeatedly in different orders of animals, notably termites and the Hymenoptera (the wasps, bees, and ants). This 'true sociality' in animals, in which sterile individuals work to further the reproductive success of others, is found in termites, ambrosia beetles, gall-dwelling aphids, thrips, marine sponge-dwelling shrimp (Synalpheus regalis), naked mole-rats (Heterocephalus glaber), and many genera in the insect order Hymenoptera. The fact that eusociality has evolved so often in the Hymenoptera (between 8 and 11 times), but remains rare throughout the rest of the animal kingdom, has made its evolution a topic of debate among evolutionary biologists. Eusocial organisms at first appear to behave in stark contrast with simple interpretations of Darwinian evolution: passing on one's genes to the next generation, or fitness, is a central idea in evolutionary biology. Current theories propose that the evolution of eusociality occurred either due to kin selection, proposed by W. D. Hamilton, or by the competing theory of multilevel selection as proposed by E.O. Wilson and colleagues. No single trait or model is sufficient to explain the evolution of eusociality, and most likely the pathway to eusociality involved a combination of pre-conditions, ecological factors, and genetic influences. Overview of eusociality Eusociality can be characterized by four main criteria: overlapping generations, cooperative brood care, philopatry, and reproductive altruism. Overlapping generations means that multiple generations live together, and that older offspring may help the parents raise their siblings. Cooperative brood care is when individuals other than the parents assist in raising the offspring through means such as food gathering and protection. Philopatry is when individuals remain living in their birthplace. The final category, reproductive altruism, is the most divergent from other social orders. Altruism occurs when an individual performs a behavio Document 3::: Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology. 
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis. Subfields Evolution is the central unifying concept in biology. Biology can be divided into various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution Document 4::: Comparative cognition is the comparative study of the mechanisms and origins of cognition in various species, and is sometimes seen as more general than, or similar to, comparative psychology. From a biological point of view, work is being done on the brains of fruit flies that should yield techniques precise enough to allow an understanding of the workings of the human brain on a scale appreciative of individual groups of neurons rather than the more regional scale previously used. Similarly, gene activity in the human brain is better understood through examination of the brains of mice by the Seattle-based Allen Institute for Brain Science (see link below), yielding the freely available Allen Brain Atlas. This type of study is related to comparative cognition, but better classified as one of comparative genomics. Increasing emphasis in psychology and ethology on the biological aspects of perception and behavior is bridging the gap between genomics and behavioral analysis. In order for scientists to better understand cognitive function across a broad range of species they can systematically compare cognitive abilities between closely and distantly related species Through this process they can determine what kinds of selection pressure has led to different cognitive abilities across a broad range of animals. For example, it has been hypothesized that there is convergent evolution of the higher cognitive functions of corvids and apes, possibly due to both being omnivorous, visual animals that live in social groups. The development of comparative cognition has been ongoing for decades, including contributions from many researchers worldwide. Additionally, there are several key species used as model organisms in the study of comparative cognition. Methodology The aspects of animals which can reasonably be compared across species depend on the species of comparison, whether that be human to animal comparisons or comparisons between animals of varying species but near The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Whether it's puppies or people, offspring and parents usually share many of what? A. insects B. traits C. clothes D. fruits Answer:
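A concrete case may clarify the germ definition reconstructed in Document 1 of the record above; the example is ours, not from the source. On $X = \mathbb{R}$, take

$$f(x) = x, \qquad g(x) = |x|.$$

Then $[f]_1 = [g]_1$, because f and g agree on the neighbourhood $(0, \infty)$ of 1; but $[f]_0 \neq [g]_0$, because every neighbourhood of 0 contains negative points at which $x \neq |x|$.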
ai2_arc-430
multiple_choice
What is the main source of heat for Earth's surface?
[ "fire", "lightning", "the Sun", "the ocean" ]
C
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Earth's internal heat budget is fundamental to the thermal history of the Earth. The flow of heat from Earth's interior to the surface is estimated at 47±2 terawatts (TW) and comes from two main sources in roughly equal amounts: the radiogenic heat produced by the radioactive decay of isotopes in the mantle and crust, and the primordial heat left over from the formation of Earth. Earth's internal heat travels along geothermal gradients and powers most geological processes. It drives mantle convection, plate tectonics, mountain building, rock metamorphism, and volcanism. Convective heat transfer within the planet's high-temperature metallic core is also theorized to sustain a geodynamo which generates Earth's magnetic field. Despite its geological significance, Earth's interior heat contributes only 0.03% of Earth's total energy budget at the surface, which is dominated by 173,000 TW of incoming solar radiation. This external energy source powers most of the planet's atmospheric, oceanic, and biologic processes. Nevertheless, on land and at the ocean floor, the sensible heat absorbed from non-reflected insolation flows inward only by means of thermal conduction, and thus penetrates only several tens of centimeters on the daily cycle and only several tens of meters on the annual cycle. This renders solar radiation minimally relevant for processes internal to Earth's crust. 
Global data on heat-flow density are collected and compiled by the International Heat Flow Commission of the International Association of Seismology and Physics of the Earth's Interior. Heat and early estimate of Earth's age Based on calculations of Earth's cooling rate, which assumed constant conductivity in the Earth's interior, in 1862 William Thomson, later Lord Kelvin, estimated the age of the Earth at 98 million years, which contrasts with the age of 4.5 billion years obtained in the 20th century by radiometric dating. As pointed out by John Perry in 1895 a variable conductivity in the E Document 2::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 3::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. 
STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. these come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments. See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics Document 4::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the main source of heat for Earth's surface? A. fire B. lightning C. the Sun D. the ocean Answer:
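The figures quoted in Document 1 of the record above can be checked directly; the stated 0.03% share of the surface energy budget is just the ratio of the two quoted heat flows (our arithmetic, using the passage's own numbers):

$$\frac{47\ \mathrm{TW}}{173{,}000\ \mathrm{TW}} \approx 2.7 \times 10^{-4} \approx 0.03\%,$$

which is why the Sun, rather than internal heat, is the main source of heat for Earth's surface (answer C).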
sciq-905
multiple_choice
Slime molds are fungus-like protists that grow as slimy masses on what?
[ "dark matter", "recycled matter", "food matter", "decaying matter" ]
D
Relavent Documents: Document 0::: Mold control and prevention is a conservation activity that is performed in libraries and archives to protect books, documents and other materials from deterioration caused by mold growth. Mold prevention consists of different methods, such as chemical treatments, careful environmental control, and manual cleaning. Preservationists use one or a combination of these methods to combat mold spores in library and archival collections. Due to the resilient nature of mold and its potential for damage to library collections, mold prevention has become an important activity among preservation librarians. Although mold is naturally present in both indoor and outdoor environments, under the right circumstances it can become active after being in a dormant state. Mold growth responds to increased moisture, high humidity, and warm temperatures. Library collections are particularly vulnerable to mold since mold thrives off of organic, cellulose-based materials such as paper, wood, and textiles made of natural fibers. Changes in the moisture in the atmosphere can lead to mold growth and irreparable damage to library collections. Mold Mold is a generic term for a specific type of fungi. Mildew may also refer to types of mold. Since there are so many species of mold, their appearance varies in color and growth habit. In general, active mold has a musty odor and appears fuzzy, slimy, or damp. Inactive mold looks dry and powdery. Mold propagates via spores, which are always present in the environment. Mold spores can be transferred to an object by mechanical instruments or air circulation. When spores attach to another organism, and the environment is favorable, they begin to germinate. Mold produce mycelium which growth pattern resembles cobwebs. Mycelium allows the mold to obtain food and nutrients through the host. Inevitably, the mycelium produces spore sacs and release new spores into the air. Eventually the spores land on new material, and the reproductive cycle begins aga Document 1::: Each species of slime mold has its own specific chemical messenger, which are collectively referred to as acrasins. These chemicals signal that many individual cells aggregate to form a single large cell or plasmodium. One of the earliest acrasins to be identified was cyclic AMP, found in the species Dictyostelium discoideum by Brian Shaffer, which exhibits a complex swirling-pulsating spiral pattern when forming a pseudoplasmodium. The term acrasin was descriptively named after Acrasia from Edmund Spenser's Faerie Queene, who seduced men against their will and then transformed them into beasts. Acrasia is itself a play on the Greek akrasia that describes loss of free will. Extraction Brian Shaffer was the first to purify acrasin, now known to be cyclic AMP, in 1954, using methanol. Glorin, the acrasin of P. violaceum, can be purified by inhibiting the acrasin-degrading enzyme acrasinase with alcohol, extracting with alcohol and separating with column chromatography. Notes Evidence for the formation of cell aggregates by chemotaxis in the development of the slime mold Dictyostelium discoideum - J.T.Bonner and L.J.Savage Journal of Experimental Biology Vol. 106, pp. 1, October (1947) Cell Biology Aggregation in cellular slime moulds: in vitro isolation of acrasin - B.M.Shaffer Nature Vol. 79, pp. 975, (1953) Cell Biology Identification of a pterin as the acrasin of the cellular slime mold Dictyostelium lacteum - Proceedings of the National Academy of Sciences United States Vol. 79, pp. 
6270–6274, October (1982) Cell Biology Hunting Slime Moulds - Adele Conover, Smithsonian Magazine Online (2001) Document 2::: A mold or mould is one of the structures that certain fungi can form. The dust-like, colored appearance of molds is due to the formation of spores containing fungal secondary metabolites. The spores are the dispersal units of the fungi. Not all fungi form molds. Some fungi form mushrooms; others grow as single cells and are called microfungi (for example yeasts). A large and taxonomically diverse number of fungal species form molds. The growth of hyphae results in discoloration and a fuzzy appearance, especially on food. The network of these tubular branching hyphae, called a mycelium, is considered a single organism. The hyphae are generally transparent, so the mycelium appears like very fine, fluffy white threads over the surface. Cross-walls (septa) may delimit connected compartments along the hyphae, each containing one or multiple, genetically identical nuclei. The dusty texture of many molds is caused by profuse production of asexual spores (conidia) formed by differentiation at the ends of hyphae. The mode of formation and shape of these spores is traditionally used to classify molds. Many of these spores are colored, making the fungus much more obvious to the human eye at this stage in its life-cycle. Molds are considered to be microbes and do not form a specific taxonomic or phylogenetic grouping, but can be found in the divisions Zygomycota and Ascomycota. In the past, most molds were classified within the Deuteromycota. Mold had been used as a common name for now non-fungal groups such as water molds or slime molds that were once considered fungi. Molds cause biodegradation of natural materials, which can be unwanted when it becomes food spoilage or damage to property. They also play important roles in biotechnology and food science in the production of various pigments, foods, beverages, antibiotics, pharmaceuticals and enzymes. Some diseases of animals and humans can be caused by certain molds: disease may result from allergic sensitivity to m Document 3::: A fungus (plural: fungi or funguses) is any member of the group of eukaryotic organisms that includes microorganisms such as yeasts and molds, as well as the more familiar mushrooms. These organisms are classified as one of the traditional eukaryotic kingdoms, along with Animalia, Plantae and either Protista or Protozoa and Chromista. A characteristic that places fungi in a different kingdom from plants, bacteria, and some protists is chitin in their cell walls. Fungi, like animals, are heterotrophs; they acquire their food by absorbing dissolved molecules, typically by secreting digestive enzymes into their environment. Fungi do not photosynthesize. Growth is their means of mobility, except for spores (a few of which are flagellated), which may travel through the air or water. Fungi are the principal decomposers in ecological systems. These and other differences place fungi in a single group of related organisms, named the Eumycota (true fungi or Eumycetes), that share a common ancestor (i.e. they form a monophyletic group), an interpretation that is also strongly supported by molecular phylogenetics. This fungal group is distinct from the structurally similar myxomycetes (slime molds) and oomycetes (water molds). The discipline of biology devoted to the study of fungi is known as mycology (from the Greek μύκης, mushroom). 
In the past mycology was regarded as a branch of botany, although it is now known that fungi are genetically more closely related to animals than to plants. Abundant worldwide, most fungi are inconspicuous because of the small size of their structures, and their cryptic lifestyles in soil or on dead matter. Fungi include symbionts of plants, animals, or other fungi and also parasites. They may become noticeable when fruiting, either as mushrooms or as molds. Fungi perform an essential role in the decomposition of organic matter and have fundamental roles in nutrient cycling and exchange in the environment. They have long been used as a direct source of h Document 4::: Entangled Life: How fungi make our worlds, change our minds and shape our futures is a 2020 non-fiction book on mycology by British biologist Merlin Sheldrake. His first book, it was published by Random House on 12 May 2020. Summary The book looks at fungi from a number of angles, including decomposition, fermentation, nutrient distribution, psilocybin production, the evolutionary role fungi play in plants, and the ways in which humans relate to the fungal kingdom. It uses music and philosophy to illustrate its thesis, and introduces readers to a number of central strands of research on mycology. It is also a personal account of Sheldrake's experiences with fungi. Sheldrake is an expert in mycorrhizal fungi, holds a PhD in tropical ecology from the University of Cambridge for his work on underground fungal networks in tropical forests in Panama, where he was a predoctoral research fellow of the Smithsonian Tropical Research Institute, and his research is primarily in the fields of fungal biology and the history of Amazonian ethnobotany. He is the son of Rupert Sheldrake, a biologist, and Jill Purce, an author and therapist, and the brother of musician Cosmo Sheldrake. Reception The book was published to largely positive reviews. Jennifer Szalai of The New York Times called the book an "ebullient and ambitious exploration" of fungi, adding, "reading it left me not just moved but altered, eager to disseminate its message of what fungi can do." Eugenia Bone of The Wall Street Journal called it "a gorgeous book of literary nature writing in the tradition of [Robert] Macfarlane and John Fowles, ripe with insight and erudition." Rachel Cooke of The Observer called it "an astonishing book that could alter our perceptions of fungi forever." Richard Kerridge, reviewing the book in The Guardian, wrote that "when we look closely [at fungi], we meet large, unsettling questions... [Sheldrake] carries us easily into these questions with ebullience and precision." The book was The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Slime molds are fungus-like protists that grow as slimy masses on what? A. dark matter B. recycled matter C. food matter D. decaying matter Answer:
ai2_arc-442
multiple_choice
A student uses the following characteristics to describe a group of objects in space. * 200 billion stars * 30 million light years from Earth * 500 light years in diameter Which of the following is the student most likely describing?
[ "a galaxy", "the universe", "a constellation", "the solar system" ]
A
Relavent Documents: Document 0::: Astrophysics is a science that employs the methods and principles of physics and chemistry in the study of astronomical objects and phenomena. As one of the founders of the discipline, James Keeler, said, Astrophysics "seeks to ascertain the nature of the heavenly bodies, rather than their positions or motions in space–what they are, rather than where they are." Among the subjects studied are the Sun (solar physics), other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background. Emissions from these objects are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists apply concepts and methods from many disciplines of physics, including classical mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics. In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to determine the properties of dark matter, dark energy, black holes, and other celestial bodies; and the origin and ultimate fate of the universe. Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity, special relativity, quantum and physical cosmology, including string cosmology and astroparticle physics. History Astronomy is an ancient science, long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthl Document 1::: This is a compilation of symbols commonly used in astronomy, particularly professional astronomy. Age (stellar) τ - age Astrometry parameters Astrometry parameters Rv - radial velocity cz - apparent radial velocity z - Redshift μ - proper motion π - parallax J - epoch α - Right Ascension δ - Declination λ - Ecliptic longitude β - Ecliptic latitude l - Galactic longitude b - Galactic latitude Cosmological parameters Cosmological parameters h - dimensionless Hubble parameter H0 - Hubble constant Λ - cosmological constant Ω - density parameter ρ - density ρc - critical density z - redshift Distance description Distance description for orbital and non-orbital parameters: d - distance d - in km = kilometer d - in mi = mile d - in AU = astronomical unit d - in ly = light-year d - in pc = parsec d - in kpc = kiloparsec (1000 pc) DL - luminosity distance, obtaining an objects distance using only visual aspects Galaxy comparison Galaxy type and spectral comparison: see galaxy morphological classification Luminosity comparison Luminosity comparison: LS, - luminosity of the Sun (Sol) Luminosity of certain object: Lacc - accretion luminosity Lbol - bolometric luminosity Mass comparison Mass comparison: ME, - mass of Earth , - mass of Jupiter MS, - mass of the Sun (Sol) Mass of certain object: M● - mass of black hole Macc - mass of accretion disc Metallicity comparison Metallicity comparison: [Fe/H] - Ratio of Iron to Hydrogen. 
This is not an exact ratio, but rather a logarithmic representation of the ratio of a star's iron abundance compared to that of the Sun. For a given star: [Fe/H] = log10(N_Fe/N_H)_star − log10(N_Fe/N_H)_Sun, where the N values represent the number densities of the given element. [M/H] - Metallicity ratio. Z - Metallicity Z☉, ZS - Metallicity of the Sun (Sol) Orbital parameters Orbital Parameters of a Cosmic Object: α - RA, right ascension, if the Greek letter does not appear, a letter will appear. δ - Dec, declination, if the Greek letter does Document 2::: Types
Quasar
Supermassive black hole
Hypercompact stellar system (hypothetical object organized around a supermassive black hole)
Intermediate-mass black holes and candidates
Cigar Galaxy (Messier 82, NGC 3034)
GCIRS 13E
HLX-1
M82 X-1
Messier 15 (NGC 7078)
Messier 110 (NGC 205)
Sculptor Galaxy (NGC 253)
Triangulum Galaxy (Messier 33, NGC 598) Document 3::: In astronomy, a spectral atlas is a collection of spectra of one or more objects, intended as a reference work for comparison with spectra of other objects. Several different types of collections are titled spectral atlases: those intended for spectral classification, for key reference, or as a collection of spectra of a general type of object. In any spectral atlas, generally all the spectra have been taken with the same equipment, or with very similar instruments at different locations, to provide data as uniform as possible in its spectral resolution, wavelength coverage, noise characteristics, etc. Types For spectral classification When assigning a spectral classification, a spectral atlas is a collection of standard spectra of stars with known spectral types, against which a spectrum of an unknown star is compared. It is analogous to an identification key in biology. Originally, such atlases included reproductions of the monochrome spectra as recorded on photographic plates, as in the original Morgan-Keenan-Kellman atlas and other atlases. These atlases include identifications and notations for use of those spectral features to be used as discriminators between close spectral types. With very large surveys of the sky which include automated assignment of spectral classification from the digital spectra data, graphical atlases have been supplanted by libraries of spectra of standard stars which often can be downloaded from VizieR and other sources. For key reference A spectral atlas can be a very high-quality spectrum of a key reference object, often made with very high spectral resolution, generally presented in large-format graphical form as a line chart (but normally strictly without markers at specific data points) of intensity or relative intensity (which for a star whose spectrum is dominated by absorption lines runs from zero to a normalized continuum) as a function of wavelength. Such spectral atlases have been made several times for the Sun (e Document 4::: The cosmic distance ladder (also known as the extragalactic distance scale) is the succession of methods by which astronomers determine the distances to celestial objects. A direct distance measurement of an astronomical object is possible only for those objects that are "close enough" (within about a thousand parsecs) to Earth. The techniques for determining distances to more distant objects are all based on various measured correlations between methods that work at close distances and methods that work at larger distances. Several methods rely on a standard candle, which is an astronomical object that has a known luminosity.
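The distance-modulus relation that underlies standard candles can be made concrete with a short sketch. The Python snippet below is an editorial illustration, not part of the excerpted article; the function name and the supernova magnitudes in the example are assumptions chosen for illustration.

```python
def distance_from_standard_candle(apparent_mag: float, absolute_mag: float) -> float:
    """Distance in parsecs from the distance modulus m - M = 5*log10(d) - 5."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Example: a standard candle with absolute magnitude M = -19.3 (a typical
# textbook value for a Type Ia supernova) observed at apparent magnitude
# m = 15.7 implies a distance of 1e8 pc, i.e. about 100 Mpc.
print(distance_from_standard_candle(15.7, -19.3))
```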
The ladder analogy arises because no single technique can measure distances at all ranges encountered in astronomy. Instead, one method can be used to measure nearby distances, a second can be used to measure nearby to intermediate distances, and so on. Each rung of the ladder provides information that can be used to determine the distances at the next higher rung. Direct measurement At the base of the ladder are fundamental distance measurements, in which distances are determined directly, with no physical assumptions about the nature of the object in question. The precise measurement of stellar positions is part of the discipline of astrometry. Early fundamental distances -- such as the radii of the earth, moon and sun, and the distances between them -- were well estimated with very low technology by the ancient Greeks. Astronomical unit Direct distance measurements are based upon the astronomical unit (AU), which is defined as the mean distance between the Earth and the Sun. Kepler's laws provide precise ratios of the sizes of the orbits of objects orbiting the Sun, but provide no measurement of the overall scale of the orbit system. Radar is used to measure the distance between the orbits of the Earth and of a second body. From that measurement and the ratio of the two orbit sizes, the size of Earth's orbit is calculated. The Earth The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. A student uses the following characteristics to describe a group of objects in space. * 200 billion stars * 30 million light years from Earth * 500 light years in diameter Which of the following is the student most likely describing? A. a galaxy B. the universe C. a constellation D. the solar system Answer:
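At the base of the ladder, the astrometric (parallax) rung discussed in the excerpt above reduces to a one-line conversion: by the definition of the parsec, distance in parsecs is the reciprocal of the parallax angle in arcseconds. A minimal sketch (an editorial addition, not from the excerpt):

```python
def parallax_distance_pc(parallax_arcsec: float) -> float:
    """d [pc] = 1 / p [arcsec]; valid for the small angles used in astrometry."""
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    return 1.0 / parallax_arcsec

# Example: Proxima Centauri's parallax of ~0.768 arcsec gives ~1.3 pc.
print(parallax_distance_pc(0.768))
```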
sciq-10904
multiple_choice
What helps to convert some molecules to forms that can be taken up by other organisms?
[ "scavangers", "protists", "prokaryotes", "eukaryotes" ]
C
Relavent Documents: Document 0::: MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States. Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is suspended with the message: "Please check back with us in 2017". Document 1::: Bioconversion, also known as biotransformation, is the conversion of organic materials, such as plant or animal waste, into usable products or energy sources by biological processes or agents, such as certain microorganisms. One example is the industrial production of cortisone, in which one step is the bioconversion of progesterone to 11-alpha-Hydroxyprogesterone by Rhizopus nigricans. Another example is the bioconversion of glycerol to 1,3-propanediol, which has been part of scientific research for many decades. Another example of bioconversion is the conversion of organic materials, such as plant or animal waste, into usable products or energy sources by biological processes or agents, such as certain microorganisms, some detritivores or enzymes. In the US, the Bioconversion Science and Technology group performs multidisciplinary R&D for the Department of Energy's (DOE) relevant applications of bioprocessing, especially with biomass. Bioprocessing combines the disciplines of chemical engineering, microbiology and biochemistry. The Group's primary role is investigation of the use of microorganisms, microbial consortia and microbial enzymes in bioenergy research. New cellulosic ethanol conversion processes have enabled the variety and volume of feedstock that can be bioconverted to expand rapidly. Feedstock now includes materials derived from plant or animal waste such as paper, auto-fluff, tires, fabric, construction materials, municipal solid waste (MSW), sludge, sewage, etc. Three different processes for bioconversion 1 - Enzymatic hydrolysis - a single source of feedstock, switchgrass for example, is mixed with strong enzymes which convert a portion of cellulosic material into sugars which can then be fermented into ethanol. Genencor and Novozymes are two companies that have received United States government Department of Energy funding for research into reducing the cost of cellulase, a key enzyme in the production of cellulosic ethanol by this process. 2 - Synthesis Document 2::: The metabolome refers to the complete set of small-molecule chemicals found within a biological sample. The biological sample can be a cell, a cellular organelle, an organ, a tissue, a tissue extract, a biofluid or an entire organism. The small molecule chemicals found in a given metabolome may include both endogenous metabolites that are naturally produced by an organism (such as amino acids, organic acids, nucleic acids, fatty acids, amines, sugars, vitamins, co-factors, pigments, antibiotics, etc.) as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism. In other words, there is both an endogenous metabolome and an exogenous metabolome.
The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics. Origins The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the m Document 3::: Biochemical engineering, also known as bioprocess engineering, is a field of study with roots stemming from chemical engineering and biological engineering. It mainly deals with the design, construction, and advancement of unit processes that involve biological organisms (such as fermentation) or organic molecules (often enzymes) and has various applications in areas of interest such as biofuels, food, pharmaceuticals, biotechnology, and water treatment processes. The role of a biochemical engineer is to take findings developed by biologists and chemists in a laboratory and translate that to a large-scale manufacturing process. History For hundreds of years, humans have made use of the chemical reactions of biological organisms in order to create goods. In the mid-1800s, Louis Pasteur was one of the first people to look into the role of these organisms when he researched fermentation. His work also contributed to the use of pasteurization, which is still used to this day. By the early 1900s, the use of microorganisms had expanded, and was used to make industrial products. Up to this point, biochemical engineering hadn't developed as a field yet. It wasn't until 1928 when Alexander Fleming discovered penicillin that the field of biochemical engineering was established. After this discovery, samples were gathered from around the world in order to continue research into the characteristics of microbes from places such as soils, gardens, forests, rivers, and streams. Today, biochemical engineers can be found working in a variety of industries, from food to pharmaceuticals. This is due to the increasing need for efficiency and production which requires knowledge of how biological systems and chemical reactions interact with each other and how they can be used to meet these needs. Education Biochemical engineering is not a major offered by most universities and is instead an area of interest under the chemical engineering major in most cases. The following universiti Document 4::: The European Algae Biomass Association (EABA), established on 2 June 2009, is the European association representing both research and industry in the field of algae technologies. EABA was founded during its inaugural conference on 1–2 June 2009 at Villa La Pietra in Florence. 
The association is headquartered in Florence, Italy. History The first EABA's President, Prof. Dr. Mario Tredici, served a 2-year term since his election on 2 June 2009. The EABA Vice-presidents were Mr. Claudio Rochietta (Oxem, Italy), Prof. Patrick Sorgeloos (University of Ghent, Belgium) and Mr. Marc Van Aken (SBAE Industries, Belgium). The EABA Executive Director was Mr. Raffaello Garofalo. EABA had 58 founding members and the EABA reached 79 members in 2011. The last election occurred on 3 December 2018 in Amsterdam. The EABA's President is Mr. Jean-Paul Cadoret (Algama / France). The EABA Vice-presidents are Prof. Dr. Sammy Boussiba (Ben-Gurion University of the Negev / Israel), Prof. Dr. Gabriel Acien (University of Almeria / Spain) and Dr. Alexandra Mosch (Germany). The EABA General Manager is Dr. Vítor Verdelho (A4F AlgaFuel, S.A. / Portugal) and Prof. Dr. Mario Tredici (University of Florence / Italy) was elected as Honorary President. Cooperation with other organisations ART Fuels Forum European Society of Biochemical Engineering Sciences Algae Biomass Organization The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What helps to convert some molecules to forms that can be taken up by other organisms? A. scavengers B. protists C. prokaryotes D. eukaryotes Answer:
sciq-11199
multiple_choice
What tissue system has neither dermal nor vascular tissues?
[ "ground tissue system", "external tissue system", "internal tissue system", "work tissue system" ]
A
Relavent Documents: Document 0::: H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue H2.00.05.2.00001: Striated muscle tissue H2.00.06.0.00001: Nerve tissue H2.00.06.1.00001: Neuron H2.00.06.2.00001: Synapse H2.00.06.2.00001: Neuroglia h3.01: Bones h3.02: Joints h3.03: Muscles h3.04: Alimentary system h3.05: Respiratory system h3.06: Urinary system h3.07: Genital system h3.08: Document 1::: In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from same type cells to act together in a function. Tissues of different types combine to form an organ which has a specific function. The intestinal wall for example is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system. An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs. The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body. Animals Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. 
The sam Document 2::: Outline h1.00: Cytology h2.00: General histology H2.00.01.0.00001: Stem cells H2.00.02.0.00001: Epithelial tissue H2.00.02.0.01001: Epithelial cell H2.00.02.0.02001: Surface epithelium H2.00.02.0.03001: Glandular epithelium H2.00.03.0.00001: Connective and supportive tissues H2.00.03.0.01001: Connective tissue cells H2.00.03.0.02001: Extracellular matrix H2.00.03.0.03001: Fibres of connective tissues H2.00.03.1.00001: Connective tissue proper H2.00.03.1.01001: Ligaments H2.00.03.2.00001: Mucoid connective tissue; Gelatinous connective tissue H2.00.03.3.00001: Reticular tissue H2.00.03.4.00001: Adipose tissue H2.00.03.5.00001: Cartilage tissue H2.00.03.6.00001: Chondroid tissue H2.00.03.7.00001: Bone tissue; Osseous tissue H2.00.04.0.00001: Haemotolymphoid complex H2.00.04.1.00001: Blood cells H2.00.04.1.01001: Erythrocyte; Red blood cell H2.00.04.1.02001: Leucocyte; White blood cell H2.00.04.1.03001: Platelet; Thrombocyte H2.00.04.2.00001: Plasma H2.00.04.3.00001: Blood cell production H2.00.04.4.00001: Postnatal sites of haematopoiesis H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue Document 3::: This table lists the epithelia of different organs of the human body Human anatomy Document 4::: A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and are determined based different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism. Organ and tissue systems These specific systems are widely studied in human anatomy and are also present in many other animals. Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm. Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus. Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels. Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine. Integumentary system: skin, hair, fat, and nails. Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons. Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands. Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. The lymphatic system includes functions including immune responses and development of antibodies. Immune system: protects the organism from The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What tissue system has neither dermal nor vascular tissues? A. ground tissue system B. external tissue system C. internal tissue system D. 
work tissue system Answer:
scienceQA-2814
multiple_choice
Select the liquid.
[ "beads", "lemonade", "air inside a balloon", "empty cup" ]
B
An empty cup is a solid. A solid has a size and shape of its own. When you fill a cup with water, the cup still has its own shape. Each bead in the jar is a solid. If you put many beads into a bottle, they will take the shape of the bottle, as a liquid would. But be careful! Beads are not a liquid. Each bead still has a size and shape of its own. The air inside a balloon is a gas. A gas expands to fill a space. The air inside a balloon expands to fill all the space in the balloon. If the balloon pops, the air will expand to fill a much larger space. Lemonade is a liquid. A liquid takes the shape of any container it is in. If you pour lemonade into a cup, the lemonade will take the shape of the cup. But the lemonade will still take up the same amount of space.
Relavent Documents: Document 0::: A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a nearly constant volume independent of pressure. It is one of the four fundamental states of matter (the others being solid, gas, and plasma), and is the only state with a definite volume but no fixed shape. The density of a liquid is usually close to that of a solid, and much higher than that of a gas. Therefore, liquid and solid are both termed condensed matter. On the other hand, as liquids and gases share the ability to flow, they are both called fluids. A liquid is made up of tiny vibrating particles of matter, such as atoms, held together by intermolecular bonds. Like a gas, a liquid is able to flow and take the shape of a container. Unlike a gas, a liquid maintains a fairly constant density and does not disperse to fill every space of a container. Although liquid water is abundant on Earth, this state of matter is actually the least common in the known universe, because liquids require a relatively narrow temperature/pressure range to exist. Most known matter in the universe is either gas (as interstellar clouds) or plasma (as stars). Introduction Liquid is one of the four primary states of matter, with the others being solid, gas and plasma. A liquid is a fluid. Unlike a solid, the molecules in a liquid have a much greater freedom to move. The forces that bind the molecules together in a solid are only temporary in a liquid, allowing a liquid to flow while a solid remains rigid. A liquid, like a gas, displays the properties of a fluid. A liquid can flow, assume the shape of a container, and, if placed in a sealed container, will distribute applied pressure evenly to every surface in the container. If liquid is placed in a bag, it can be squeezed into any shape. Unlike a gas, a liquid is nearly incompressible, meaning that it occupies nearly a constant volume over a wide range of pressures; it does not generally expand to fill available space in a containe Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below:

During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information

The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: The Z-tube is an experimental apparatus for measuring the tensile strength of a liquid. It consists of a Z-shaped tube with open ends, filled with a liquid, and set on top of a spinning table. If the tube were straight, the liquid would immediately fly out one end or the other of the tube as it began to spin. By bending the ends of the tube back towards the center of rotation, a shift of the liquid away from center will result in the water level in one end of the tube rising and thus increasing the pressure in that end of the tube, and consequently returning the liquid to the center of the tube. By measuring the rotational speed and the distance from the center of rotation to the liquid level in the bent ends of the tube, the pressure reduction inside the tube can be calculated. Negative pressures (i.e. less than zero absolute pressure, or in other words, tension) have been reported using water processed to remove dissolved gases. Tensile strengths up to 280 atmospheres have been reported for water in glass. Document 3::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered.
The goal of knowledge space theory is to provide a language by which exams can communicate what the student can do and what the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject S can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about S is then a subset of Q; the set of Document 4::: A molecular sieve is a material with pores (very small holes) of uniform size. These pore diameters are similar in size to small molecules, and thus large molecules cannot enter or be adsorbed, while smaller molecules can. As a mixture of molecules migrates through the stationary bed of porous, semi-solid substance referred to as a sieve (or matrix), the components of the highest molecular weight (which are unable to pass into the molecular pores) leave the bed first, followed by successively smaller molecules. Some molecular sieves are used in size-exclusion chromatography, a separation technique that sorts molecules based on their size. Other molecular sieves are used as desiccants (some examples include activated charcoal and silica gel). The pore diameter of a molecular sieve is measured in ångströms (Å) or nanometres (nm). According to IUPAC notation, microporous materials have pore diameters of less than 2 nm (20 Å) and macroporous materials have pore diameters of greater than 50 nm (500 Å); the mesoporous category thus lies in the middle with pore diameters between 2 and 50 nm (20–500 Å). Materials Molecular sieves can be microporous, mesoporous, or macroporous material. Microporous material (<2 nm) Zeolites (aluminosilicate minerals, not to be confused with aluminium silicate) Zeolite LTA: 3–4 Å Porous glass: 10 Å (1 nm), and up Active carbon: 0–20 Å (0–2 nm), and up Clays Montmorillonite intermixes Halloysite (endellite): Two common forms are found, when hydrated the clay exhibits a 1 nm spacing of the layers and when dehydrated (meta-halloysite) the spacing is 0.7 nm. Halloysite naturally occurs as small cylinders which average 30 nm in diameter with lengths between 0.5 and 10 micrometres. Mesoporous material (2–50 nm) Silicon dioxide (used to make silica gel): 24 Å (2.4 nm) Macroporous material (>50 nm) Macroporous silica, 200–1000 Å (20–100 nm) Applications Molecular sieves are often utilized in the petroleum industry, especially for dryin The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Select the liquid. A. beads B. lemonade C. air inside a balloon D. empty cup Answer:
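To make the knowledge-space model quoted in Document 3 above concrete: a knowledge space over a domain Q is a family of feasible states (subsets of Q) that contains the empty state and Q itself and is closed under union. The Python sketch below is an editorial illustration with a made-up three-skill domain; the names Q and is_knowledge_space are assumptions, not from the source.

```python
from itertools import combinations

# Editorial illustration (not from the excerpt): a tiny knowledge space over a
# made-up three-skill domain Q, where skill "c" presupposes both "a" and "b".
Q = frozenset("abc")
states = [frozenset(s) for s in ["", "a", "b", "ab", "abc"]]

def is_knowledge_space(family):
    """True if the family contains the empty state and the full domain Q,
    and is closed under union -- the defining properties of a knowledge space."""
    fam = set(family)
    if frozenset() not in fam or Q not in fam:
        return False
    return all((s | t) in fam for s, t in combinations(fam, 2))

print(is_knowledge_space(states))  # True: every union of feasible states is feasible
```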
sciq-8076
multiple_choice
What type of seismic waves do the most damage?
[ "gravity", "surface", "tension", "sunlight" ]
B
Relavent Documents: Document 0::: The Human-Induced Earthquake Database (HiQuake) is an online database that documents all reported cases of induced seismicity proposed on scientific grounds. It is the most complete compilation of its kind and is freely available to download via the associated website. The database is periodically updated to correct errors, revise existing entries, and add new entries reported in new scientific papers and reports. Suggestions for revisions and new entries can be made via the associated website. History In 2016, Nederlandse Aardolie Maatschappij funded a team of researchers from Durham University and Newcastle University to conduct a full review of induced seismicity. This review formed part of a scientific workshop aimed at estimating the maximum possible magnitude earthquake that might be induced by conventional gas production in the Groningen gas field. The resulting database from the review was publicly released online on 26 January 2017. The database was accompanied by the publication of two scientific papers, the more detailed of which is freely available online. Document 1::: Seismic moment is a quantity used by seismologists to measure the size of an earthquake. The scalar seismic moment M0 is defined by the equation M0 = μAD, where μ is the shear modulus of the rocks involved in the earthquake (in pascals (Pa), i.e. newtons per square meter), A is the area of the rupture along the geologic fault where the earthquake occurred (in square meters), and D is the average slip (displacement offset between the two sides of the fault) on A (in meters). M0 thus has dimensions of torque, measured in newton meters. The connection between seismic moment and a torque is natural in the body-force equivalent representation of seismic sources as a double-couple (a pair of force couples with opposite torques): the seismic moment is the torque of each of the two couples. Despite having the same dimensions as energy, seismic moment is not a measure of energy. The relations between seismic moment, potential energy drop and radiated energy are indirect and approximative. The seismic moment of an earthquake is typically estimated using whatever information is available to constrain its factors. For modern earthquakes, moment is usually estimated from ground motion recordings of earthquakes known as seismograms. For earthquakes that occurred in times before modern instruments were available, moment may be estimated from geologic estimates of the size of the fault rupture and the slip. Seismic moment is the basis of the moment magnitude scale introduced by Hiroo Kanamori, which is often used to compare the size of different earthquakes and is especially useful for comparing the sizes of large (great) earthquakes. The seismic moment is not restricted to earthquakes. For a more general seismic source described by a seismic moment tensor (a symmetric tensor, but not necessarily a double couple tensor), the seismic moment is See also Richter magnitude scale Moment magnitude scale Seismology measurement Moment (physics) Document 2::: The moment magnitude scale (MMS; denoted explicitly with Mw, and generally implied with use of a single M for magnitude) is a measure of an earthquake's magnitude ("size" or strength) based on its seismic moment. It was defined in a 1979 paper by Thomas C. Hanks and Hiroo Kanamori.
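As a quick numerical illustration of the two quantities defined above, the sketch below combines the scalar moment M0 = μAD with the widely used Hanks-Kanamori relation Mw = (2/3)(log10 M0 − 9.1), with M0 in newton-meters. The fault parameters are invented example values, not data from either excerpt.

```python
import math

def seismic_moment(mu_pa: float, area_m2: float, slip_m: float) -> float:
    """Scalar seismic moment M0 = mu * A * D, in newton-meters."""
    return mu_pa * area_m2 * slip_m

def moment_magnitude(m0_nm: float) -> float:
    """Hanks-Kanamori moment magnitude: Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# Made-up example: a 20 km x 10 km rupture with 1 m of average slip
# in rock with a shear modulus of 30 GPa gives M0 = 6e18 N*m, Mw ~ 6.5.
m0 = seismic_moment(3.0e10, 20_000 * 10_000, 1.0)
print(f"M0 = {m0:.2e} N*m, Mw = {moment_magnitude(m0):.1f}")
```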
Similar to the local magnitude/Richter scale (ML) defined by Charles Francis Richter in 1935, it uses a logarithmic scale; small earthquakes have approximately the same magnitudes on both scales. Despite the difference, news media often says "Richter scale" when referring to the moment magnitude scale. Moment magnitude (Mw) is considered the authoritative magnitude scale for ranking earthquakes by size. It is more directly related to the energy of an earthquake than other scales, and does not saturate; that is, it does not underestimate magnitudes as other scales do in certain conditions. It has become the standard scale used by seismological authorities like the U.S. Geological Survey for reporting large earthquakes (typically M > 4), replacing the local magnitude (ML) and surface wave magnitude (Ms) scales. Subtypes of the moment magnitude scale (Mww, etc.) reflect different ways of estimating the seismic moment. History Richter scale: the original measure of earthquake magnitude At the beginning of the twentieth century, very little was known about how earthquakes happen, how seismic waves are generated and propagate through the Earth's crust, and what information they carry about the earthquake rupture process; the first magnitude scales were therefore empirical. The initial step in determining earthquake magnitudes empirically came in 1931 when the Japanese seismologist Kiyoo Wadati showed that the maximum amplitude of an earthquake's seismic waves diminished with distance at a certain rate. Charles F. Richter then worked out how to adjust for epicentral distance (and some other factors) so that the logarithm of the amplitude of the seismograph trace could be used as a measure of "magnit Document 3::: In seismology and other areas involving elastic waves, S waves, secondary waves, or shear waves (sometimes called elastic S waves) are a type of elastic wave and are one of the two main types of elastic body waves, so named because they move through the body of an object, unlike surface waves. S waves are transverse waves, meaning that the direction of particle movement of an S wave is perpendicular to the direction of wave propagation, and the main restoring force comes from shear stress. Therefore, S waves cannot propagate in liquids with zero (or very low) viscosity; however, they may propagate in liquids with high viscosity. The name secondary wave comes from the fact that they are the second type of wave to be detected by an earthquake seismograph, after the compressional primary wave, or P wave, because S waves travel more slowly in solids. Unlike P waves, S waves cannot travel through the molten outer core of the Earth, and this causes a shadow zone for S waves opposite to their origin. They can still propagate through the solid inner core: when a P wave strikes the boundary of molten and solid cores at an oblique angle, S waves will form and propagate in the solid medium. When these S waves hit the boundary again at an oblique angle, they will in turn create P waves that propagate through the liquid medium. This property allows seismologists to determine some physical properties of the Earth's inner core. History In 1830, the mathematician Siméon Denis Poisson presented to the French Academy of Sciences an essay ("memoir") with a theory of the propagation of elastic waves in solids. In his memoir, he states that an earthquake would produce two different waves: one having a certain speed a and the other having a speed a/√3.
At a sufficient distance from the source, when they can be considered plane waves in the region of interest, the first kind consists of expansions and compressions in the direction perpendicular to the wavefront (that is, parallel to the Document 4::: Shear wave splitting, also called seismic birefringence, is the phenomenon that occurs when a polarized shear wave enters an anisotropic medium (Fig. 1). The incident shear wave splits into two polarized shear waves (Fig. 2). Shear wave splitting is typically used as a tool for testing the anisotropy of an area of interest. These measurements reflect the degree of anisotropy and lead to a better understanding of the area's crack density and orientation or crystal alignment. We can think of the anisotropy of a particular area as a black box and the shear wave splitting measurements as a way of looking at what is in the box. Introduction An incident shear wave may enter an anisotropic medium from an isotropic media by encountering a change in the preferred orientation or character of the medium. When a polarized shear wave enters a new, anisotropic medium, it splits into two shear waves (Fig.2). One of these shear waves will be faster than the other and oriented parallel to the cracks or crystals in the medium. The second wave will be slower than the first and sometimes orthogonal to both the first shear wave and the cracks or crystals in the media. The time delays observed between the slow and fast shear waves give information about the density of cracks in the medium. The orientation of the fast shear wave records the direction of the cracks in the medium. When plotted using polarization diagrams, the arrival of split shear waves can be identified by the abrupt changes in direction of the particle motion (Fig.3). In a homogeneous material that is weakly anisotropic, the incident shear wave will split into two quasi-shear waves with approximately orthogonal polarizations that reach the receiver at approximately the same time. In the deeper crust and upper mantle, the high frequency shear waves split completely into two separate shear waves with different polarizations and a time delay between them that may be up to a few seconds. History Hess (1964) ma The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of seismic waves do the most damage? A. gravity B. surface C. tension D. sunlight Answer:
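The propagation behaviour described in the S-wave excerpt above follows from the elastic body-wave speeds Vp = sqrt((K + 4μ/3)/ρ) and Vs = sqrt(μ/ρ): a fluid has μ = 0, so Vs = 0 and S waves cannot pass. A small sketch with typical crustal values (an editorial illustration, not from the excerpts):

```python
import math

def body_wave_speeds(bulk_k_pa: float, shear_mu_pa: float, rho_kg_m3: float):
    """P- and S-wave speeds from bulk modulus K, shear modulus mu, density rho:
    Vp = sqrt((K + 4*mu/3) / rho), Vs = sqrt(mu / rho)."""
    vp = math.sqrt((bulk_k_pa + 4.0 * shear_mu_pa / 3.0) / rho_kg_m3)
    vs = math.sqrt(shear_mu_pa / rho_kg_m3)
    return vp, vs

# Typical crustal figures: K = 50 GPa, mu = 30 GPa, rho = 2700 kg/m^3
vp, vs = body_wave_speeds(5.0e10, 3.0e10, 2700.0)
print(f"Vp ~ {vp:.0f} m/s, Vs ~ {vs:.0f} m/s")  # roughly 5.8 km/s and 3.3 km/s
```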
sciq-718
multiple_choice
Glass breaking is an example of what type of change that doesn't affect the makeup of matter?
[ "thermal", "chemical", "reversible", "physical" ]
D
Relavent Documents: Document 0::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation. Magnetism Ferro-magnetic materials can become magnetic. The process is reve Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below:

During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information

The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: Comminution is the reduction of solid materials from one average particle size to a smaller average particle size, by crushing, grinding, cutting, vibrating, or other processes. In geology, it occurs naturally during faulting in the upper part of the Earth's crust. In industry, it is an important unit operation in mineral processing, ceramics, electronics, and other fields, accomplished with many types of mill. In dentistry, it is the result of mastication of food. In general medicine, it is one of the most traumatic forms of bone fracture. Within industrial uses, the purpose of comminution is to reduce the size and to increase the surface area of solids. It is also used to free useful materials from matrix materials in which they are embedded, and to concentrate minerals. Energy requirements The comminution of solid materials consumes energy, which is used to break up the solid into smaller pieces. The comminution energy can be estimated by:
Rittinger's law, which assumes that the energy consumed is proportional to the newly generated surface area;
Kick's law, which relates the energy to the sizes of the feed particles and the product particles;
Bond's law, which assumes that the total work useful in breakage is inversely proportional to the square root of the diameter of the product particles, [implying] theoretically that the work input varies as the length of the new cracks made in breakage.
Holmes's law, which modifies Bond's law by substituting the square root with an exponent that depends on the material.
Forces There are three forces which typically are used to effect the comminution of particles: impact, shear, and compression. Methods There are several methods of comminution. Comminution of solid materials requires different types of crushers and mills depending on the feed properties such as hardness at various size ranges and application requirements such as throughput and maintenance. The most common machines for the comminution of coarse Document 3::: States of matter are distinguished by changes in the properties of matter associated with external factors like pressure and temperature. States are usually distinguished by a discontinuity in one of those properties: for example, raising the temperature of ice produces a discontinuity at 0°C, as energy goes into a phase transition, rather than temperature increase. The three classical states of matter are solid, liquid and gas. In the 20th century, however, increased understanding of the more exotic properties of matter resulted in the identification of many additional states of matter, none of which are observed in normal conditions. Low-energy states of matter Classical states Solid: A solid holds a definite shape and volume without a container.
The particles are held very close to each other. Amorphous solid: A solid in which there is no far-range order of the positions of the atoms. Crystalline solid: A solid in which atoms, molecules, or ions are packed in regular order. Plastic crystal: A molecular solid with long-range positional order but with constituent molecules retaining rotational freedom. Quasicrystal: A solid in which the positions of the atoms have long-range order, but this is not in a repeating pattern. Liquid: A mostly non-compressible fluid. Able to conform to the shape of its container but retains a (nearly) constant volume independent of pressure. Liquid crystal: Properties intermediate between liquids and crystals. Generally, able to flow like a liquid but exhibiting long-range order. Gas: A compressible fluid. Not only will a gas take the shape of its container but it will also expand to fill the container. Modern states Plasma: Free charged particles, usually in equal numbers, such as ions and electrons. Unlike gases, plasma may self-generate magnetic fields and electric currents and respond strongly and collectively to electromagnetic forces. Plasma is very uncommon on Earth (except for the ionosphere), although it is the mo Document 4::: Material is a substance or mixture of substances that constitutes an object. Materials can be pure or impure, living or non-living matter. Materials can be classified on the basis of their physical and chemical properties, or on their geological origin or biological function. Materials science is the study of materials, their properties and their applications. Raw materials can be processed in different ways to influence their properties, by purification, shaping or the introduction of other materials. New materials can be produced from raw materials by synthesis. In industry, materials are inputs to manufacturing processes to produce products or more complex materials. Historical elements Materials chart the history of humanity. The system of the three prehistoric ages (Stone Age, Bronze Age, Iron Age) were succeeded by historical ages: steel age in the 19th century, polymer age in the middle of the following century (plastic age) and silicon age in the second half of the 20th century. Classification by use Materials can be broadly categorized in terms of their use, for example: Building materials are used for construction Building insulation materials are used to retain heat within buildings Refractory materials are used for high-temperature applications Nuclear materials are used for nuclear power and weapons Aerospace materials are used in aircraft and other aerospace applications Biomaterials are used for applications interacting with living systems Material selection is a process to determine which material should be used for a given application. Classification by structure The relevant structure of materials has a different length scale depending on the material. The structure and composition of a material can be determined by microscopy or spectroscopy. Microstructure In engineering, materials can be categorised according to their microscopic structure: Plastics: a wide range of synthetic or semi-synthetic materials that use polymers as a main ingred The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Glass breaking is an example of what type of change that doesn't affect the makeup of matter? A. thermal B. chemical C. reversible D. physical Answer:
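The comminution excerpt quoted earlier in this record states Bond's law only qualitatively. A common engineering form (assumed here, not quoted in the source) is W = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80)), with the specific work W and the Bond work index Wi in kWh/t and the 80%-passing sizes F80 and P80 in micrometres; the sketch below is an editorial illustration with made-up values.

```python
import math

def bond_work(wi_kwh_t: float, f80_um: float, p80_um: float) -> float:
    """Specific grinding work (kWh/t) by Bond's law:
    W = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80)), sizes in micrometres."""
    return 10.0 * wi_kwh_t * (1.0 / math.sqrt(p80_um) - 1.0 / math.sqrt(f80_um))

# Made-up example: grinding from F80 = 10 mm down to P80 = 100 um
# for an ore with a work index of 14 kWh/t gives about 12.6 kWh/t.
print(f"{bond_work(14.0, 10_000.0, 100.0):.1f} kWh/t")
```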
sciq-7443
multiple_choice
What crust is thinner and denser than continental crust?
[ "oceanic", "coastal", "land", "asteroid" ]
A
Relavent Documents: Document 0::: The core–mantle boundary (CMB) of Earth lies between the planet's silicate mantle and its liquid iron–nickel outer core, at a depth of below Earth's surface. The boundary is observed via the discontinuity in seismic wave velocities at that depth due to the differences between the acoustic impedances of the solid mantle and the molten outer core. P-wave velocities are much slower in the outer core than in the deep mantle while S-waves do not exist at all in the liquid portion of the core. Recent evidence suggests a distinct boundary layer directly above the CMB possibly made of a novel phase of the basic perovskite mineralogy of the deep mantle named post-perovskite. Seismic tomography studies have shown significant irregularities within the boundary zone and appear to be dominated by the African and Pacific Large Low-Shear-Velocity Provinces (LLSVP). The uppermost section of the outer core is thought to be about 500–1,800 K hotter than the overlying mantle, creating a thermal boundary layer. The boundary is thought to harbor topography, much like Earth's surface, that is supported by solid-state convection within the overlying mantle. Variations in the thermal properties of the core-mantle boundary may affect how the outer core's iron-rich fluids flow, which are ultimately responsible for Earth's magnetic field. The D″ region The approx. 200 km thick layer of the lower mantle directly above the boundary is referred to as the D″ region ("D double-prime" or "D prime prime") and is sometimes included in discussions regarding the core–mantle boundary zone. The D″ name originates from geophysicist Keith Bullen's designations for the Earth's layers. His system was to label each layer alphabetically, A through G, with the crust as 'A' and the inner core as 'G'. In his 1942 publication of his model, the entire lower mantle was the D layer. In 1949, Bullen found his 'D' layer to actually be two different layers. The upper part of the D layer, about 1800 km thick, was r Document 1::: Earth's crustal evolution involves the formation, destruction and renewal of the rocky outer shell at that planet's surface. The variation in composition within the Earth's crust is much greater than that of other terrestrial planets. Mars, Venus, Mercury and other planetary bodies have relatively quasi-uniform crusts unlike that of the Earth which contains both oceanic and continental plates. This unique property reflects the complex series of crustal processes that have taken place throughout the planet's history, including the ongoing process of plate tectonics. The proposed mechanisms regarding Earth's crustal evolution take a theory-orientated approach. Fragmentary geologic evidence and observations provide the basis for hypothetical solutions to problems relating to the early Earth system. Therefore, a combination of these theories creates both a framework of current understanding and also a platform for future study. Early crust Mechanisms of early crust formation The early Earth was entirely molten. This was due to high temperatures created and maintained by the following processes: Compression of the early atmosphere Rapid axial rotation Regular impacts with neighbouring planetesimals. The mantle remained hotter than modern day temperatures throughout the Archean. Over time the Earth began to cool as planetary accretion slowed and heat stored within the magma ocean was lost to space through radiation. 
A theory for the initiation of magma solidification states that once cool enough, the cooler base of the magma ocean would begin to crystallise first. This is because pressures of about 25 GPa at the base cause the solidus to lower. The formation of a thin 'chill-crust' at the extreme surface would provide thermal insulation to the shallow subsurface, keeping it warm enough to maintain the mechanism of crystallisation from the deep magma ocean. The composition of the crystals produced during the crystallisation of the magma ocean varied with depth.
They harden over time to become a solidified (competent) rock column that may be intruded by igneous rocks and disrupted by tectonic events. Correlating the rock record At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods—for the particular geographic region or regions. The geologic record is nowhere entirely complete, for the geologic forces that in one age leave a low-lying region accumulating deposits, much like a layer cake, may in the next age uplift the region, so that the same area is instead weathering and being torn down by chemistry, wind, temperature, and water. This is to say that in a given location, the geologic record can be and is quite often interrupted as the ancient local environment was converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go very deep, thoroughly support the law of superposition. However, using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What crust is thinner and denser than continental crust? A. oceanic B. coastal C. land D. asteroid Answer:
sciq-8078
multiple_choice
What is responsible for the foul smell of rancid butter?
[ "butyric acid", "ibogaine acid", "affixed acid", "Fatty Acid" ]
A
Relevant Documents: Document 0::: The biochemistry of body odor pertains to the chemical compounds in the body responsible for body odor and their kinetics. Causes Body odor encompasses axillary (underarm) odor and foot odor. It is caused by a combination of sweat gland secretions and normal skin microflora. In addition, androstane steroids and the ABCC11 transporter are essential for most axillary odor. Body odor is a complex phenomenon, with numerous compounds and catalysts involved in its genesis. Secretions from sweat glands are initially odorless, but preodoriferous compounds or malodor precursors in the secretions are transformed by skin surface bacteria into volatile odorous compounds that are responsible for body malodor. Water and nutrients secreted by sweat glands also contribute to body odor by creating an ideal environment for supporting the growth of skin surface bacteria. Types There are three types of sweat glands: eccrine, apocrine, and apoeccrine. Apocrine glands are primarily responsible for body malodor and, along with apoeccrine glands, are mostly expressed in the axillary (underarm) regions, whereas eccrine glands are distributed throughout virtually all of the rest of the skin in the body, although they are also particularly expressed in the axillary regions, and contribute to malodor to a relatively minor extent. Sebaceous glands, another type of secretory gland, are not sweat glands but instead secrete sebum (an oily substance), and may also contribute to body odor to some degree. The main odorous compounds that contribute to axillary odor include: unsaturated or hydroxylated branched fatty acids, with the key ones being (E)-3-methyl-2-hexenoic acid (3M2H) and 3-hydroxy-3-methylhexanoic acid (HMHA); sulfanylalkanols, particularly 3-methyl-3-sulfanylhexan-1-ol (3M3SH); and odoriferous androstane steroids, namely the pheromones androstenone (5α-androst-16-en-3-one) and androstenol (5α-androst-16-en-3α-ol). These malodorous compounds are formed from non-odoriferous precursors Document 1::: Olfactory glands, also known as Bowman's glands, are a type of nasal gland situated in the part of the olfactory mucosa beneath the olfactory epithelium, that is the lamina propria, a connective tissue also containing fibroblasts, blood vessels and bundles of fine axons from the olfactory neurons. An olfactory gland consists of an acinus in the lamina propria and a secretory duct going out through the olfactory epithelium. Electron microscopy studies show that olfactory glands contain cells with large secretory vesicles. Olfactory glands secrete the gel-forming mucin protein MUC5B. They might secrete proteins such as lactoferrin, lysozyme, amylase and IgA, similarly to serous glands. The exact composition of the secretions from olfactory glands is unclear, but there is evidence that they produce odorant-binding protein. Function The olfactory glands are tubuloalveolar glands surrounded by olfactory receptors and sustentacular cells in the olfactory epithelium. These glands produce mucus to lubricate the olfactory epithelium and dissolve odorant-containing gases. Several odorant-binding proteins produced by the olfactory glands help facilitate the transport of odorants to the olfactory receptors. These cells express the mRNA for transforming growth factor α, stimulating the production of new olfactory receptor cells.
Document 2::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the students' chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college careers at public or private two-year schools, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs.
Excess VA is difficult for winemakers to correct. In some jurisdictions, including the United States, the European Union, and Australia, the law sets a limit on the level of allowable VA. Wastewater In wastewater treatment, the volatile acids are the short-chain fatty acids (1-6 carbon atoms) that are water soluble and can be steam distilled at atmospheric pressure - primarily acetic, propionic, and butyric acid. These acids are produced during anaerobic digestion. In a well-functioning digester, the volatile acids will be consumed by the methane-forming bacteria. Document 4::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is responsible for the foul smell of rancid butter? A. butyric acid B. ibogaine acid C. affixed acid D. Fatty Acid Answer:
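The classical VA analysis described in Document 3 (steam distillation followed by NaOH titration, reported as acetic acid) comes down to simple titration stoichiometry. Below is a minimal sketch of that arithmetic; the sample volume, titrant strength, and function name are illustrative assumptions, not values prescribed by the source.

```python
# Volatile acidity (VA) from a distillate titration, reported as acetic acid.
# Sketch assumptions: the steam distillation captured the volatile acids,
# the NaOH titrant is standardized, and acetic acid contributes one proton
# per molecule (equivalent weight ~60.05 g/eq).

ACETIC_ACID_EQ_WEIGHT = 60.05  # grams per equivalent

def volatile_acidity_g_per_L(naoh_mL: float, naoh_normality: float,
                             sample_mL: float) -> float:
    """Grams of acetic acid per litre of the original wine sample."""
    equivalents = (naoh_mL / 1000.0) * naoh_normality   # eq of acid neutralized
    grams_acetic = equivalents * ACETIC_ACID_EQ_WEIGHT  # reported *as* acetic acid
    return grams_acetic / (sample_mL / 1000.0)

# Hypothetical run: 7.5 mL of 0.1 N NaOH neutralizes the distillate of a
# 10 mL wine sample -> ~4.5 g/L, far above typical legal limits, i.e. spoiled.
print(round(volatile_acidity_g_per_L(7.5, 0.1, 10.0), 2))  # 4.5
```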
sciq-7203
multiple_choice
Somatosensation includes all sensation received from the skin and mucous membranes, as well as from these?
[ "five senses", "glial cells", "organs", "limbs and joints" ]
D
Relevant Documents: Document 0:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 1::: The sensory nervous system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory neurons (including the sensory receptor cells), neural pathways, and parts of the brain involved in sensory perception and interoception. Commonly recognized sensory systems are those for vision, hearing, touch, taste, smell, balance and visceral sensation. Sense organs are transducers that convert data from the outer physical world to the realm of the mind where people interpret the information, creating their perception of the world around them. The receptive field is the area of the body or environment to which a receptor organ and receptor cells respond. For instance, the part of the world an eye can see is its receptive field; the light that each rod or cone can see is its receptive field. Receptive fields have been identified for the visual system, auditory system and somatosensory system. Stimulus Organisms need information to solve at least three kinds of problems: (a) to maintain an appropriate environment, i.e., homeostasis; (b) to time activities (e.g., seasonal changes in behavior) or synchronize activities with those of conspecifics; and (c) to locate and respond to resources or threats (e.g., by moving towards resources or evading or attacking threats). Organisms also need to transmit information in order to influence another's behavior: to identify themselves, warn conspecifics of danger, coordinate activities, or deceive. Sensory systems code for four aspects of a stimulus: type (modality), intensity, location, and duration. Arrival time of a sound pulse and phase differences of continuous sound are used for sound localization. Certain receptors are sensitive to certain types of stimuli (for example, different mechanoreceptors respond best to different kinds of touch stimuli, like sharp or blunt objects). Receptors send impulses in certain patterns to send information about the intensity of a stimulus. Document 2::: In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together in a function. Tissues of different types combine to form an organ, which has a specific function. The intestinal wall for example is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system. An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored.
The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs: roots, stems, and leaves. The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body. Animals Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. Document 3::: Cutaneous innervation refers to an area of the skin which is supplied by a specific cutaneous nerve. Dermatomes are similar; however, a dermatome only specifies the area served by a spinal nerve. In some cases, the dermatome is less specific (when a spinal nerve is the source for more than one cutaneous nerve), and in other cases it is more specific (when a cutaneous nerve is derived from multiple spinal nerves.) Modern texts are in agreement about which areas of the skin are served by which nerves, but there are minor variations in some of the details. The borders designated by the diagrams in the 1918 edition of Gray's Anatomy are similar, but not identical, to those generally accepted today. Importance of the peripheral nervous system The peripheral nervous system (PNS) is divided into the somatic nervous system, the autonomic nervous system, and the enteric nervous system. However, it is the somatic nervous system, responsible for body movement and the reception of external stimuli, which allows one to understand how cutaneous innervation is made possible by the action of specific sensory fibers located on the skin, as well as the distinct pathways they take to the central nervous system. The skin, which is part of the integumentary system, plays an important role in the somatic nervous system because it contains a range of nerve endings that react to heat and cold, touch, pressure, vibration, and tissue injury. Importance of the central nervous system The central nervous system (CNS) works with the peripheral nervous system in cutaneous innervation. The CNS is responsible for processing the information it receives from the cutaneous nerves that detect a given stimulus, and then identifying the kind of sensory inputs which project to a specific region of the primary somatosensory cortex. The role of nerve endings on the surface of the skin Groups of nerve terminals located in the different layers of the skin are categorized depending on whether the skin is hairy or glabrous. Document 4::: Sensory neuroscience is a subfield of neuroscience which explores the anatomy and physiology of neurons that are part of sensory systems such as vision, hearing, and olfaction. Neurons in sensory regions of the brain respond to stimuli by firing one or more nerve impulses (action potentials) following stimulus presentation. How is information about the outside world encoded by the rate, timing, and pattern of action potentials? This so-called neural code is currently poorly understood and sensory neuroscience plays an important role in the attempt to decipher it.
Looking at early sensory processing is advantageous since brain regions that are "higher up" (e.g. those involved in memory or emotion) contain neurons which encode more abstract representations. However, the hope is that there are unifying principles which govern how the brain encodes and processes information. Studying sensory systems is an important stepping stone in our understanding of brain function in general. Typical experiments A typical experiment in sensory neuroscience involves the presentation of a series of relevant stimuli to an experimental subject while the subject's brain is being monitored. This monitoring can be accomplished by noninvasive means such as functional magnetic resonance imaging (fMRI) or electroencephalography (EEG), or by more invasive means such as electrophysiology, the use of electrodes to record the electrical activity of single neurons or groups of neurons. fMRI measures changes in blood flow, which relate to the level of neural activity, and provides low spatial and temporal resolution, but does provide data from the whole brain. In contrast, electrophysiology provides very high temporal resolution (the shapes of single spikes can be resolved) and data can be obtained from single cells. This is important since computations are performed within the dendrites of individual neurons.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Somatosensation includes all sensation received from the skin and mucous membranes, as well as from these? A. five senses B. glial cells C. organs D. limbs and joints Answer:
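Document 1 above notes that arrival-time and phase differences of sound are used for sound localization. A minimal sketch of the standard far-field interaural time difference (ITD) model makes that concrete; the ear separation, source angle, and helper name are illustrative assumptions, not taken from the source.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def interaural_time_difference(ear_separation_m: float,
                               azimuth_deg: float) -> float:
    """Far-field ITD in seconds for a source at the given azimuth.

    Uses the simple path-length model ITD = d * sin(theta) / c; real heads
    add diffraction around the skull, which this sketch ignores.
    """
    theta = math.radians(azimuth_deg)
    return ear_separation_m * math.sin(theta) / SPEED_OF_SOUND

# Illustrative values: ~0.2 m between the ears, source 30 degrees off midline.
print(f"{interaural_time_difference(0.2, 30.0) * 1e6:.0f} microseconds")  # ~292
```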
sciq-6792
multiple_choice
Molds and casts usually form in which type of rock?
[ "metamorphic", "igneous", "sedimentary", "crystalline" ]
C
Relevant Documents: Document 0::: In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects. Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting. Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete. Study Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history and the archaeological understanding of human history. Document 1::: Materials science has shaped the development of civilizations since the dawn of mankind. Better materials for tools and weapons have allowed mankind to spread and conquer, and advancements in material processing like steel and aluminum production continue to impact society today. Historians have regarded materials as such an important aspect of civilizations that entire periods of time have been defined by the predominant material used (Stone Age, Bronze Age, Iron Age). For most of recorded history, control of materials had been through alchemy or empirical means at best. The study and development of chemistry and physics assisted the study of materials, and eventually the interdisciplinary study of materials science emerged from the fusion of these studies. The history of materials science is the study of how different materials were used and developed through the history of Earth and how those materials affected the culture of the peoples of the Earth. The term "Silicon Age" is sometimes used to refer to the modern period of history during the late 20th to early 21st centuries. Prehistory In many cases, different cultures leave their materials as the only records, which anthropologists can use to define the existence of such cultures. The progressive use of more sophisticated materials allows archeologists to characterize and distinguish between peoples. This is partially due to the major material of use in a culture and to its associated benefits and drawbacks.
Stone-Age cultures were limited by which rocks they could find locally and by which they could acquire by trading. The use of flint around 300,000 BCE is sometimes considered the beginning of the use of ceramics. The use of polished stone axes marks a significant advance, because a much wider variety of rocks could serve as tools. The innovation of smelting and casting metals in the Bronze Age started to change the way that cultures developed and interacted with each other. Document 2::: Rock mechanics is a theoretical and applied science of the mechanical behavior of rocks and rock masses. Compared to geology, it is the branch of mechanics concerned with the response of rock and rock masses to the force fields of their physical environment. Background Rock mechanics is part of a much broader subject of geomechanics, which is concerned with the mechanical responses of all geological materials, including soils. Rock mechanics is concerned with the application of the principles of engineering mechanics to the design of structures built in or on rock. The structure could include many objects such as a drilling well, a mine shaft, a tunnel, a reservoir dam, a repository component, or a building. Rock mechanics is used in many engineering disciplines, but is primarily used in mining, civil, geotechnical, transportation, and petroleum engineering. Rock mechanics answers questions such as, "is reinforcement necessary for a rock, or will it be able to handle whatever load it is faced with?" It also includes the design of reinforcement systems, such as rock bolting patterns. Assessing the Project Site Before any work begins, the construction site must be investigated properly to establish the geological conditions of the site. Field observations, deep drilling, and geophysical surveys can all give necessary information to develop a safe construction plan and create a site geological model. The level of investigation conducted at this site depends on factors such as budget, time frame, and expected geological conditions. The first step of the investigation is the collection of maps and aerial photos to analyze. These can provide information about potential sinkholes, landslides, erosion, etc. Maps can provide information on the rock type of the site, geological structure, and boundaries between bedrock units. Boreholes Creating a borehole is a technique that consists of drilling through the ground in various areas at various depths. Document 3::: The Géotechnique lecture is a biennial lecture on the topic of soil mechanics, organised by the British Geotechnical Association and named after its major scientific journal Géotechnique. This should not be confused with the annual BGA Rankine Lecture. Document 4::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas.
Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Molds and casts usually form in which type of rock? A. metamorphic B. igneous C. sedimentary D. crystalline Answer:
sciq-1299
multiple_choice
The angle of refraction depends on the index of what?
[ "vibration", "frequency", "refraction", "reflection" ]
C
Relevant Documents: Document 0::: Optics is the branch of physics which involves the behavior and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behavior of visible, ultraviolet, and infrared light. Because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties. Document 1::: In a prism, the angle of deviation ($\delta$) decreases with increasing angle of incidence ($i$) up to a particular angle. This angle of incidence, where the angle of deviation in a prism is minimum, is called the minimum deviation position of the prism, and that deviation angle is known as the minimum angle of deviation (denoted by $\delta_{\min}$, $D_\lambda$, or $D_m$). The angle of minimum deviation is related to the refractive index as
$$n = \frac{\sin\left(\frac{A + D_m}{2}\right)}{\sin\left(\frac{A}{2}\right)},$$
where $A$ is the apex angle of the prism. This is useful for calculating the refractive index of a material. Rainbows and halos occur at minimum deviation. Also, a thin prism is always set at minimum deviation. Formula In minimum deviation, the refracted ray in the prism is parallel to its base. In other words, the light ray is symmetrical about the axis of symmetry of the prism. Also, the angles of refraction are equal, i.e. $r_1 = r_2 = r$. And the angle of incidence and angle of emergence equal each other ($i = e$). The formula for minimum deviation can be derived by exploiting the geometry in the prism. The approach involves replacing the variables in Snell's law in terms of the deviation and prism angles by making use of the above properties. From the angle sum in the prism, $r_1 + r_2 = A$, so at minimum deviation $r = A/2$. Using the exterior angle theorem, the total deviation is $\delta = i + e - A$; putting $i = e$ at minimum deviation gives $i = \frac{A + D_m}{2}$. From Snell's law, $n = \frac{\sin i}{\sin r} = \frac{\sin\frac{A + D_m}{2}}{\sin\frac{A}{2}}$, where $n$ is the refractive index, $A$ is the angle of the prism and $D_m$ is the minimum angle of deviation. This is a convenient way to measure the refractive index of a material (liquid or gas): a light ray is directed through a hollow prism of negligible wall thickness filled with the material and set at minimum deviation, or through a glass prism dipped in it. Document 2::: The calculation of glass properties (glass modeling) is used to predict glass properties of interest or glass behavior under certain conditions (e.g., during production) without experimental investigation, based on past data and experience, with the intention to save time, material, financial, and environmental resources, or to gain scientific insight. It was first practised at the end of the 19th century by A. Winkelmann and O. Schott. The combination of several glass models together with other relevant functions can be used for optimization and six sigma procedures. In the form of statistical analysis glass modeling can aid with accreditation of new data, experimental procedures, and measurement institutions (glass laboratories). History Historically, the calculation of glass properties is directly related to the founding of glass science.
At the end of the 19th century the physicist Ernst Abbe developed equations that allow calculating the design of optimized optical microscopes in Jena, Germany, stimulated by co-operation with the optical workshop of Carl Zeiss. Before Ernst Abbe's time the building of microscopes was mainly a work of art and experienced craftsmanship, resulting in very expensive optical microscopes with variable quality. Now Ernst Abbe knew exactly how to construct an excellent microscope, but unfortunately, the required lenses and prisms with specific ratios of refractive index and dispersion did not exist. Ernst Abbe was not able to find answers to his needs from glass artists and engineers; glass making was not based on science at this time. In 1879 the young glass engineer Otto Schott sent Abbe glass samples with a special composition (lithium silicate glass) that he had prepared himself and that he hoped to show special optical properties. Following measurements by Ernst Abbe, Schott's glass samples did not have the desired properties, and they were also not as homogeneous as desired. Nevertheless, Ernst Abbe invited Otto Schott to work with him. Document 3::: Treatise on Light: In Which Are Explained the Causes of That Which Occurs in Reflection & Refraction (French: Traité de la Lumière, Où Sont Expliquées les Causes de ce qui Luy Arrive Dans la Reflexion & Dans la Refraction) is a book written by Dutch polymath Christiaan Huygens that was published in French in 1690. The book describes Huygens's conception of the nature of light propagation which makes it possible to explain the laws of geometrical optics shown in Descartes's Dioptrique, which Huygens aimed to replace. Unlike Newton's corpuscular theory, which was presented in the Opticks, Huygens conceived of light as an irregular series of shock waves which proceeds with very great, but finite, velocity through the aether, similar to sound waves. Moreover, he proposed that each point of a wavefront is itself the origin of a secondary spherical wave, a principle known today as the Huygens–Fresnel principle. The book is considered a pioneering work of theoretical and mathematical physics and the first mechanistic account of an unobservable physical phenomenon. Overview Huygens worked on the mathematics of light rays and the properties of refraction in his work Dioptrica, which began in 1652 but remained unpublished, and which predated his lens grinding work. In 1672, the problem of the strange refraction of the Iceland crystal created a puzzle regarding the physics of refraction that Huygens wanted to solve. Huygens eventually was able to solve this problem by means of elliptical waves in 1677 and confirmed his theory by experiments mostly after critical reactions in 1679. His explanation of birefringence was based on three hypotheses: (1) There are inside the crystal two media in which light waves proceed, (2) one medium behaves as ordinary ether and carries the normally refracted ray, and (3) the velocity of the waves in the other medium is dependent on direction, so that the waves do not expand in spherical form, but rather as ellipsoids of revolution; this second medium carries the abnormally refracted ray. Document 4::: Kirchhoff's diffraction formula (also called the Fresnel–Kirchhoff diffraction formula) approximates light intensity and phase in optical diffraction: light fields in the boundary regions of shadows. The approximation can be used to model light propagation in a wide range of configurations, either analytically or using numerical modelling.
It gives an expression for the wave disturbance when a monochromatic spherical wave is the incoming wave of a situation under consideration. This formula is derived by applying the Kirchhoff integral theorem, which uses Green's second identity to derive the solution to the homogeneous scalar wave equation, to a spherical wave with some approximations. The Huygens–Fresnel principle can be derived from the Fresnel–Kirchhoff diffraction formula. Derivation of Kirchhoff's diffraction formula Kirchhoff's integral theorem, sometimes referred to as the Fresnel–Kirchhoff integral theorem, uses Green's second identity to derive the solution of the homogeneous scalar wave equation at an arbitrary spatial position P in terms of the solution of the wave equation and its first-order derivative at all points on an arbitrary closed surface that is the boundary of some volume including P. The solution provided by the integral theorem for a monochromatic source is
$$U(P) = \frac{1}{4\pi} \int_{S} \left[ U \frac{\partial}{\partial \hat{n}}\!\left(\frac{e^{iks}}{s}\right) - \frac{e^{iks}}{s}\,\frac{\partial U}{\partial \hat{n}} \right] dS,$$
where $U$ is the spatial part of the solution of the homogeneous scalar wave equation (i.e., $\psi(\mathbf{r}, t) = U(\mathbf{r})\, e^{-i\omega t}$ is the homogeneous scalar wave equation solution), $k$ is the wavenumber, $s$ is the distance from P to an (infinitesimally small) integral surface element, and $\partial/\partial\hat{n}$ denotes differentiation along the integral surface element normal unit vector $\hat{n}$ (i.e., a normal derivative). Note that the surface normal, or the direction of $\hat{n}$, is toward the inside of the enclosed volume in this integral; if the more usual outer-pointing normal is used, the integral will have the opposite sign. And also note that, in the integral theorem shown here, $\hat{n}$ and P are vector quantities while the other terms are scalar quantities.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The angle of refraction depends on the index of what? A. vibration B. frequency C. refraction D. reflection Answer:
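The minimum-deviation relation from Document 1 above is easy to sanity-check numerically. A minimal sketch follows; the 60° prism and n = 1.5 are illustrative values (the result, about 37°, happens to match the first answer figure that survives in the source's garbled worked examples).

```python
import math

def refractive_index(prism_angle_deg: float, min_deviation_deg: float) -> float:
    """n = sin((A + D_m) / 2) / sin(A / 2) for a prism at minimum deviation."""
    A = math.radians(prism_angle_deg)
    Dm = math.radians(min_deviation_deg)
    return math.sin((A + Dm) / 2.0) / math.sin(A / 2.0)

def min_deviation_deg(prism_angle_deg: float, n: float) -> float:
    """Invert the same relation: D_m = 2 * asin(n * sin(A / 2)) - A."""
    A = math.radians(prism_angle_deg)
    return math.degrees(2.0 * math.asin(n * math.sin(A / 2.0)) - A)

# A 60 degree prism with n = 1.5 deviates light by about 37 degrees at minimum,
# and feeding that deviation back recovers the index (round-trip check).
print(round(min_deviation_deg(60.0, 1.5), 1))  # 37.2
print(round(refractive_index(60.0, 37.2), 3))  # 1.5
```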
ai2_arc-679
multiple_choice
In the geologic past, abiotic factors such as volcanic eruptions have had an impact on the availability of resources. How can volcanic eruptions impact the availability of resources?
[ "by decreasing the thickness of soil", "by causing more heavy rains to erode topsoil", "by disrupting the sunlight from reaching producers", "by causing the surface of Earth to be warmer than usual" ]
C
Relevant Documents: Document 0::: Maui Nui is a modern geologists' name given to a prehistoric Hawaiian island and the corresponding modern biogeographic region. Maui Nui is composed of four modern islands: Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe. Administratively, the four modern islands comprise Maui County (and a tiny part of Molokaʻi called Kalawao County). Long after the breakup of Maui Nui, the four modern islands retained plant and animal life similar to each other. Thus, Maui Nui is not only a prehistoric island but also a modern biogeographic region. Geology Maui Nui formed and broke up during the Pleistocene Epoch, which lasted from about 2.58 million to 11,700 years ago. Maui Nui is built from seven shield volcanoes. The three oldest are Penguin Bank, West Molokaʻi, and East Molokaʻi, which probably range from slightly over to slightly less than 2 million years old. The four younger volcanoes are Lāna‘i, West Maui, Kaho‘olawe, and Haleakalā, which probably formed between 1.5 and 2 million years ago. At its prime 1.2 million years ago, Maui Nui was about 50% larger than today's Hawaiʻi Island. The island of Maui Nui included the four modern islands (Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe) and a landmass west of Molokaʻi called Penguin Bank, which is now completely submerged. Maui Nui broke up as rising sea levels flooded the connections between the volcanoes. The breakup was complex because global sea levels rose and fell intermittently during the Quaternary glaciation. About 600,000 years ago, the connection between Molokaʻi and the island of Lāna‘i/Maui/Kahoʻolawe became intermittent. About 400,000 years ago, the connection between Lāna‘i and Maui/Kahoʻolawe also became intermittent. The connection between Maui and Kahoʻolawe was permanently broken between 200,000 and 150,000 years ago. Maui, Lāna‘i, and Molokaʻi were connected intermittently thereafter, most recently about 18,000 years ago during the Last Glacial Maximum. Today, the sea floor between these four islands is relatively shallow. Document 1::: The interaction between erosion and tectonics has been a topic of debate since the early 1990s. While the tectonic effects on surface processes such as erosion have long been recognized (for example, river formation as a result of tectonic uplift), the opposite (erosional effects on tectonic activity) has only recently been addressed. The primary questions surrounding this topic are what types of interactions exist between erosion and tectonics and what are the implications of these interactions. While this is still a matter of debate, one thing is clear: Earth's landscape is a product of two factors: tectonics, which can create topography and maintain relief through surface and rock uplift, and climate, which mediates the erosional processes that wear away upland areas over time. The interaction of these processes can form, modify, or destroy geomorphic features on Earth's surface. Tectonic processes The term tectonics refers to the study of Earth's surface structure and the ways in which it changes over time. Tectonic processes typically occur at plate boundaries, which are one of three types: convergent boundaries, divergent boundaries, or transform boundaries. These processes form and modify the topography of the Earth's surface, effectively increasing relief through the mechanisms of isostatic uplift, crustal thickening, and deformation in the form of faulting and folding.
Increased elevations, in relation to regional base levels, lead to steeper river channel gradients and an increase in orographically localized precipitation, ultimately resulting in drastically increased erosion rates. The topography, and general relief, of a given area determines the velocity at which surface runoff will flow, ultimately determining the potential erosive power of the runoff. Longer, steeper slopes are more prone to higher rates of erosion during periods of heavy rainfall than shorter, gradually sloping areas. Document 2::: Margaret Armstrong is an Australian geostatistician, mathematical geoscientist, and textbook author. She works as an associate professor in the School of Applied Mathematics at the Fundação Getúlio Vargas in Brazil, and as a research associate in the Centre for Industrial Economics of Mines ParisTech in France. Education Armstrong graduated from the University of Queensland in 1972, with a bachelor's degree in mathematics and a diploma of education. After working as a mathematics teacher she returned to graduate study, first with a master's degree in mathematics from Queensland in 1977, and then with Georges Matheron at the École des Mines de Paris. She completed her doctorate there in 1980. Books Armstrong is the author of the textbook Basic Linear Geostatistics (Springer, 1998), and co-author of the book Plurigaussian Simulations in Geosciences (Springer, 2003; 2nd ed., 2011). With Matheron, she edited Geostatistical Case Studies (Springer, 1987). Recognition In 1998, Armstrong was the winner of the John Cedric Griffiths Teaching Award of the International Association for Mathematical Geosciences. The award statement noted "her aptitude at the blackboard", the international demand for her short courses, and the "great clarity" of her book Basic Linear Geostatistics. Document 3::: The mid-24th century BCE climate anomaly is the period, between 2354 and 2345 BCE, of consistently reduced annual temperatures that are reconstructed from consecutive abnormally narrow Irish oak tree rings. These tree rings are indicative of a period of catastrophically reduced growth in Irish trees during that period. This range of dates also matches the transition from the Neolithic to the Bronze Age in the British Isles and a period of widespread societal collapse in the Near East. It has been proposed that this anomalous downturn in the climate might have been the result of comet debris suspended in the atmosphere. In 1997, Marie-Agnès Courty proposed that a natural disaster involving wildfires, floods, and an air blast of over 100 megatons' power occurred about 2350 BCE. This proposal is based on unusual "dust" deposits which have been reported from archaeological sites in Mesopotamia that are a few hundred kilometres from each other. In later papers, Courty subsequently revised the date of this event from 2350 BCE to 2000 BCE. Based only upon the analysis of satellite imagery, Umm al Binni lake in southern Iraq has been suggested as a possible extraterrestrial impact crater and possible cause of this natural disaster. More recent sources have argued for a formation of the lake through the subsidence of the underlying basement fault blocks. Baillie and McAneney's 2015 discussion of this climate anomaly discusses its abnormally narrow Irish tree rings and the anomalous dust deposits of Courty. However, this paper lacks any mention of Umm al Binni lake. See also: 4.2-kiloyear event (c. 2200 BCE); Great Flood (China) (c. 2300 BCE). Document 4::: The rock cycle is a basic concept in geology that describes transitions through geologic time among the three main rock types: sedimentary, metamorphic, and igneous. Each rock type is altered when it is forced out of its equilibrium conditions. For example, an igneous rock such as basalt may break down and dissolve when exposed to the atmosphere, or melt as it is subducted under a continent. Due to the driving forces of the rock cycle, plate tectonics and the water cycle, rocks do not remain in equilibrium and change as they encounter new environments. The rock cycle explains how the three rock types are related to each other, and how processes change from one type to another over time. This cyclical aspect makes rock change a geologic cycle and, on planets containing life, a biogeochemical cycle. Transition to igneous rock When rocks are pushed deep under the Earth's surface, they may melt into magma. If the conditions no longer exist for the magma to stay in its liquid state, it cools and solidifies into an igneous rock. A rock that cools within the Earth is called intrusive or plutonic and cools very slowly, producing a coarse-grained texture such as the rock granite. As a result of volcanic activity, magma (which is called lava when it reaches Earth's surface) may cool very rapidly on the Earth's surface, exposed to the atmosphere; such rocks are called extrusive or volcanic rocks. These rocks are fine-grained and sometimes cool so rapidly that no crystals can form, resulting in a natural glass such as obsidian; however, the most common fine-grained volcanic rock is basalt. Any of the three main types of rocks (igneous, sedimentary, and metamorphic rocks) can melt into magma and cool into igneous rocks. Secondary changes Epigenetic change (secondary processes occurring at low temperatures and low pressures) may be arranged under a number of headings, each of which is typical of a group of rocks or rock-forming minerals.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In the geologic past, abiotic factors such as volcanic eruptions have had an impact on the availability of resources. How can volcanic eruptions impact the availability of resources? A. by decreasing the thickness of soil B. by causing more heavy rains to erode topsoil C. by disrupting the sunlight from reaching producers D. by causing the surface of Earth to be warmer than usual Answer:
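Document 3 above identifies the climate anomaly from a run of consecutive, abnormally narrow tree rings. A minimal sketch of that kind of screening is shown below; the threshold, run length, and ring widths are invented for illustration and are not the Irish oak data.

```python
def narrow_ring_runs(widths_mm, threshold_mm, min_run=3):
    """Return (start_index, length) for every run of at least min_run
    consecutive ring widths below threshold_mm -- candidate growth downturns."""
    runs, start = [], None
    for i, w in enumerate(list(widths_mm) + [float("inf")]):  # sentinel flushes the last run
        if w < threshold_mm:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_run:
                runs.append((start, i - start))
            start = None
    return runs

# Ten hypothetical annual widths (mm) with a four-year growth collapse:
widths = [1.2, 1.1, 1.3, 0.4, 0.3, 0.35, 0.4, 1.0, 1.2, 1.1]
print(narrow_ring_runs(widths, threshold_mm=0.6))  # [(3, 4)]
```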
sciq-10867
multiple_choice
Goldfish, tuna, salmon, perch, and cod are examples of which group of fish?
[ "ray-finned fish", "bony fish", "cartilaginous fish", "spiny-lobed fish" ]
A
Relevant Documents: Document 0::: Fisheries science is the academic discipline of managing and understanding fisheries. It is a multidisciplinary science, which draws on the disciplines of limnology, oceanography, freshwater biology, marine biology, meteorology, conservation, ecology, population dynamics, economics, statistics, decision analysis, management, and many others in an attempt to provide an integrated picture of fisheries. In some cases new disciplines have emerged, as in the case of bioeconomics and fisheries law. Because fisheries science is such an all-encompassing field, fisheries scientists often use methods from a broad array of academic disciplines. Over the most recent several decades, there have been declines in fish stocks (populations) in many regions, along with increasing concern about the impact of intensive fishing on marine and freshwater biodiversity. Fisheries science is typically taught in a university setting, and can be the focus of an undergraduate, master's or Ph.D. program. Some universities offer fully integrated programs in fisheries science. Graduates of university fisheries programs typically find employment as scientists, fisheries managers of both recreational and commercial fisheries, researchers, aquaculturists, educators, environmental consultants and planners, conservation officers, and many others. Fisheries research Because fisheries take place in a diverse set of aquatic environments (i.e., high seas, coastal areas, large and small rivers, and lakes of all sizes), research requires different sampling equipment, tools, and techniques. For example, studying trout populations inhabiting mountain lakes requires a very different set of sampling tools than, say, studying salmon in the high seas. Ocean fisheries research vessels (FRVs) often require platforms which are capable of towing different types of fishing nets, collecting plankton or water samples from a range of depths, and carrying acoustic fish-finding equipment. Document 1::: A fish (plural: fish or fishes) is an aquatic, craniate, gill-bearing animal that lacks limbs with digits. Included in this definition are the living hagfish, lampreys, and cartilaginous and bony fish as well as various extinct related groups. Approximately 95% of living fish species are ray-finned fish, belonging to the class Actinopterygii, with around 99% of those being teleosts. The earliest organisms that can be classified as fish were soft-bodied chordates that first appeared during the Cambrian period. Although they lacked a true spine, they possessed notochords which allowed them to be more agile than their invertebrate counterparts. Fish would continue to evolve through the Paleozoic era, diversifying into a wide variety of forms. Many fish of the Paleozoic developed external armor that protected them from predators. The first fish with jaws appeared in the Silurian period, after which many (such as sharks) became formidable marine predators rather than just the prey of arthropods. Most fish are ectothermic ("cold-blooded"), allowing their body temperatures to vary as ambient temperatures change, though some of the large active swimmers like the white shark and tuna can hold a higher core temperature. Fish can acoustically communicate with each other, most often in the context of feeding, aggression or courtship. Fish are abundant in most bodies of water.
They can be found in nearly all aquatic environments, from high mountain streams (e.g., char and gudgeon) to the abyssal and even hadal depths of the deepest oceans (e.g., cusk-eels and snailfish), although no species has yet been documented in the deepest 25% of the ocean. With 34,300 described species, fish exhibit greater species diversity than any other group of vertebrates. Fish are an important resource for humans worldwide, especially as food. Commercial and subsistence fishers hunt fish in wild fisheries or farm them in ponds or in cages in the ocean (in aquaculture). They are also caught by recreational fishers. Document 2::: Fish anatomy is the study of the form or morphology of fish. It can be contrasted with fish physiology, which is the study of how the component parts of fish function together in the living fish. In practice, fish anatomy and fish physiology complement each other, the former dealing with the structure of a fish, its organs or component parts and how they are put together, such as might be observed on the dissecting table or under the microscope, and the latter dealing with how those components function together in living fish. The anatomy of fish is often shaped by the physical characteristics of water, the medium in which fish live. Water is much denser than air, holds a relatively small amount of dissolved oxygen, and absorbs more light than air does. The body of a fish is divided into a head, trunk and tail, although the divisions between the three are not always externally visible. The skeleton, which forms the support structure inside the fish, is either made of cartilage (cartilaginous fish) or bone (bony fish). The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays which, with the exception of the caudal fins, have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk. The heart has two chambers and pumps the blood through the respiratory surfaces of the gills and then around the body in a single circulatory loop. The eyes are adapted for seeing underwater and have only local vision. There is an inner ear but no external or middle ear. Low-frequency vibrations are detected by the lateral line system of sense organs that run along the length of the sides of fish, which responds to nearby movements and to changes in water pressure. Document 3::: Fish intelligence is "the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills" as it applies to fish. According to Culum Brown from Macquarie University, "Fish are more intelligent than they appear. In many areas, such as memory, their cognitive powers match or exceed those of ‘higher’ vertebrates including non-human primates." Fish hold records for the relative brain weights of vertebrates. Most vertebrate species have similar brain-to-body mass ratios. The deep sea bathypelagic bony-eared assfish has the smallest ratio of all known vertebrates.
At the other extreme, the electrogenic elephantnose fish, an African freshwater fish, has one of the largest brain-to-body weight ratios of all known vertebrates (slightly higher than humans) and the highest brain-to-body oxygen consumption ratio of all known vertebrates (three times that for humans). Brain Fish typically have quite small brains relative to body size compared with other vertebrates, typically one-fifteenth the brain mass of a similarly sized bird or mammal. However, some fish have relatively large brains, most notably mormyrids and sharks, which have brains about as massive relative to body weight as birds and marsupials. The cerebellum of cartilaginous and bony fishes is large and complex. In at least one important respect, it differs in internal structure from the mammalian cerebellum: The fish cerebellum does not contain discrete deep cerebellar nuclei. Instead, the primary targets of Purkinje cells are a distinct type of cell distributed across the cerebellar cortex, a type not seen in mammals. The circuits in the cerebellum are similar across all classes of vertebrates, including fish, reptiles, birds, and mammals. There is also an analogous brain structure in cephalopods with well-developed brains, such as octopuses. This has been taken as evidence that the cerebellum performs functions important to Document 4::: Age class structure in fisheries and wildlife management is a part of population assessment. Age class structures can be used to model many populations including trees and fish. This method can be used to predict the occurrence of forest fires within a forest population. Age can be determined by counting growth rings in fish scales, otoliths, cross-sections of fin spines for species with thick spines such as triggerfish, or teeth for a few species. Each method has its merits and drawbacks. Fish scales are easiest to obtain, but may be unreliable if scales have fallen off the fish and new ones grown in their places. Fin spines may be unreliable for the same reason, and most fish do not have spines of sufficient thickness for clear rings to be visible. Otoliths will have stayed with the fish throughout its life history, but obtaining them requires killing the fish. Also, otoliths often require more preparation before ageing can occur. Analyzing fisheries age class structure An example of using age class structure to learn about a population is a regular bell curve for the population of 1-5 year-old fish with a very low population for the 3-year-olds. An age class structure with gaps in population size like the one described earlier implies a bad spawning year 3 years ago in that species. Often fish in younger age class structures have very low numbers because they were small enough to slip through the sampling nets, and may in fact have a very healthy population. See also Identification of aging in fish Population pyramid Population dynamics of fisheries The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Goldfish, tuna, salmon, perch, and cod are examples of which group of fish? A. ray-finned fish B. bony fish C. cartilaginous fish D. spiny-lobed fish Answer:
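Document 4 of this record describes reading a fisheries age-class structure: a cohort far smaller than its neighbours (for example, few 3-year-olds among the 1-5 year-old fish) suggests a bad spawning year three years earlier. A minimal Python sketch of that gap check; the survey counts, threshold, and function name are invented for illustration and are not from the source:

    # Illustrative sketch: flag unusually weak cohorts in an age-class structure.
    # The counts below are invented example data, not real survey results.
    counts_by_age = {1: 420, 2: 390, 3: 55, 4: 310, 5: 280}

    def weak_cohorts(counts, threshold=0.5):
        """Return ages whose count falls below `threshold` times the mean
        count of the other ages (a simple heuristic for a population gap)."""
        flagged = []
        for age, n in counts.items():
            others = [v for a, v in counts.items() if a != age]
            if n < threshold * (sum(others) / len(others)):
                flagged.append(age)
        return flagged

    # Age 3 is flagged, consistent with a bad spawning year 3 years ago.
    print(weak_cohorts(counts_by_age))  # -> [3]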
sciq-4888
multiple_choice
What gives the plant cell strength and protection?
[ "the protons", "the cell nucleus", "the genes", "a cell wall" ]
D
Relavent Documents: Document 0::: Plant stem cells Plant stem cells are innately undifferentiated cells located in the meristems of plants. Plant stem cells serve as the origin of plant vitality, as they maintain themselves while providing a steady supply of precursor cells to form differentiated tissues and organs in plants. Two distinct areas of stem cells are recognised: the apical meristem and the lateral meristem. Plant stem cells are characterized by two distinctive properties, which are: the ability to create all differentiated cell types and the ability to self-renew such that the number of stem cells is maintained. Plant stem cells never undergo aging process but immortally give rise to new specialized and unspecialized cells, and they have the potential to grow into any organ, tissue, or cell in the body. Thus they are totipotent cells equipped with regenerative powers that facilitate plant growth and production of new organs throughout lifetime. Unlike animals, plants are immobile. As plants cannot escape from danger by taking motion, they need a special mechanism to withstand various and sometimes unforeseen environmental stress. Here, what empowers them to withstand harsh external influence and preserve life is stem cells. In fact, plants comprise the oldest and the largest living organisms on earth, including Bristlecone Pines in California, U.S. (4,842 years old), and the Giant Sequoia in mountainous regions of California, U.S. (87 meters in height and 2,000 tons in weight). This is possible because they have a modular body plan that enables them to survive substantial damage by initiating continuous and repetitive formation of new structures and organs such as leaves and flowers. Plant stem cells are also characterized by their location in specialized structures called meristematic tissues, which are located in root apical meristem (RAM), shoot apical meristem (SAM), and vascular system ((pro)cambium or vascular meristem.) Research and development Traditionally, plant stem ce Document 1::: A stem is one of two main structural axes of a vascular plant, the other being the root. It supports leaves, flowers and fruits, transports water and dissolved substances between the roots and the shoots in the xylem and phloem, photosynthesis takes place here, stores nutrients, and produces new living tissue. The stem can also be called halm or haulm or culms. The stem is normally divided into nodes and internodes: The nodes are the points of attachment for leaves and can hold one or more leaves. There are sometimes axillary buds between the stem and leaf which can grow into branches (with leaves, conifer cones, or flowers). Adventitious roots may also be produced from the nodes. Vines may produce tendrils from nodes. The internodes distance one node from another. The term "shoots" is often confused with "stems"; "shoots" generally refers to new fresh plant growth, including both stems and other structures like leaves or flowers. In most plants, stems are located above the soil surface, but some plants have underground stems. Stems have several main functions: Support for and the elevation of leaves, flowers, and fruits. The stems keep the leaves in the light and provide a place for the plant to keep its flowers and fruits. Transport of fluids between the roots and the shoots in the xylem and phloem. Storage of nutrients. Production of new living tissue. The normal lifespan of plant cells is one to three years. Stems have cells called meristems that annually generate new living tissue. 
Photosynthesis. Stems have two pipe-like tissues called xylem and phloem. The xylem tissue arises from the cell facing inside and transports water by the action of transpiration pull, capillary action, and root pressure. The phloem tissue arises from the cell facing outside and consists of sieve tubes and their companion cells. The function of phloem tissue is to distribute food from photosynthetic tissue to other tissues. The two tissues are separated by cambium, a tis Document 2::: The secondary cell wall is a structure found in many plant cells, located between the primary cell wall and the plasma membrane. The cell starts producing the secondary cell wall after the primary cell wall is complete and the cell has stopped expanding. Secondary cell walls provide additional protection to cells and rigidity and strength to the larger plant. These walls are constructed of layered sheaths of cellulose microfibrils, wherein the fibers are in parallel within each layer. The inclusion of lignin makes the secondary cell wall less flexible and less permeable to water than the primary cell wall. In addition to making the walls more resistant to degradation, the hydrophobic nature of lignin within these tissues is essential for containing water within the vascular tissues that carry it throughout the plant. The secondary cell wall consists primarily of cellulose, along with other polysaccharides, lignin, and glycoprotein. It sometimes consists of three distinct layers - S1, S2 and S3 - where the direction of the cellulose microfibrils differs between the layers. The direction of the microfibrils is called microfibril angle (MFA). In the secondary cell wall of fibres of trees a low microfibril angle is found in the S2-layer, while S1 and S3-layers show a higher MFA . However, the MFA can also change depending on the loads on the tissue. It has been shown that in reaction wood the MFA in S2-layer can vary. Tension wood has a low MFA, meaning that the microfibril is oriented parallel to the axis of the fibre. In compression wood the MFA is high and reaches up to 45°. These variations influence the mechanical properties of the cell wall. The secondary cell wall has different ratios of constituents compared to the primary wall. An example of this is that secondary wall in wood contains polysaccharides called xylan, whereas the primary wall contains the polysaccharide xyloglucan. The cells fraction in secondary walls is also higher. Pectins may also be a Document 3::: Cell mechanics is a sub-field of biophysics that focuses on the mechanical properties and behavior of living cells and how it relates to cell function. It encompasses aspects of cell biophysics, biomechanics, soft matter physics and rheology, mechanobiology and cell biology. Eukaryotic Eukaryotic cells are cells that consist of membrane-bound organelles, a membrane-bound nucleus, and more than one linear chromosome. Being much more complex than prokaryotic cells, cells without a true nucleus, eukaryotes must protect its organelles from outside forces. Plant Plant cell mechanics combines principles of biomechanics and mechanobiology to investigate the growth and shaping of the plant cells. Plant cells, similar to animal cells, respond to externally applied forces, such as by reorganization of their cytoskeletal network. The presence of a considerably rigid extracellular matrix, the cell wall, however, bestows the plant cells with a set of particular properties. 
Mainly, the growth of plant cells is controlled by the mechanics and chemical composition of the cell wall. A major part of research in plant cell mechanics is put toward the measurement and modeling of the cell wall mechanics to understand how modification of its composition and mechanical properties affects the cell function, growth and morphogenesis. Animal Because animal cells do not have cell walls to protect them like plant cells, they require other specialized structures to sustain external mechanical forces. All animal cells are encased within a cell membrane made of a thin lipid bilayer that protects the cell from exposure to the outside environment. Using receptors composed of protein structures, the cell membrane is able to let selected molecules within the cell. Inside the cell membrane includes the cytoplasm, which contains the cytoskeleton. A network of filamentous proteins including microtubules, intermediate filaments, and actin filaments makes up the cytoskeleton and helps maintain th Document 4::: The MSU-DOE Plant Research Laboratory (PRL), commonly referred to as Plant Research Lab, is a research institute funded to a large extent by the U.S. Department of Energy Office of Science and located at Michigan State University (MSU) in East Lansing, Michigan. The Plant Research Lab was founded in 1965, and it currently includes twelve laboratories that conduct collaborative basic research into the biology of diverse photosynthetic organisms, including plants, bacteria, and algae, in addition to developing new technologies towards addressing energy and food challenges. History 1964-1978 The contract for the establishment of the MSU-DOE Plant Research Laboratory was signed on March 6, 1964, between the U.S. Atomic Energy Commission (AEC) and Michigan State University. The institute was initially funded by the AEC's Division of Biology and Medicine, which saw a need for improving the state of plant sciences in the United States. The Division aimed to create a new program at one or more universities where student interest in plant research could be fostered. The contract signed between AEC and Michigan State University provided for a comprehensive research program in plant biology and related education and training at the graduate and postgraduate levels. The program was to draw strongly on related disciplines such as biochemistry, biophysics, genetics, microbiology, and others. In 1966, personnel of the new program - called MSU-AEC Plant Research Laboratory at that time - moved into their new quarters in the Plant Biology Laboratories building at Michigan State University. The first research projects generally focused on problems specific to plants, such as cell growth and its regulation by plant hormones, cell wall structure and composition, and the physiology of flower formation; other research projects addressed general biological problems, such as the regulation of enzyme formation during development and cellular and genetic aspects of hormone action. I The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What gives the plant cell strength and protection? A. the protons B. the cell nucleus C. the genes D. a cell wall Answer:
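Document 3 of this record mentions measuring and modeling plant cell wall mechanics. One common first-order model, which is an assumption here and not taken from the source documents, treats a turgid cell as a thin-walled pressurized sphere, so wall stress scales as sigma = P * r / (2 * t). A minimal Python sketch under that assumption:

    # Thin-walled-sphere sketch (an assumed model, not from the source):
    # wall stress rises with turgor pressure P and cell radius r and falls
    # with wall thickness t: sigma = P * r / (2 * t).
    def wall_stress_pa(turgor_pa, radius_m, thickness_m):
        return turgor_pa * radius_m / (2.0 * thickness_m)

    # Example values are illustrative assumptions: 0.5 MPa turgor,
    # 25 micrometre cell radius, 0.5 micrometre wall thickness.
    print(wall_stress_pa(0.5e6, 25e-6, 0.5e-6))  # -> 12500000.0 Pa (12.5 MPa)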
sciq-10368
multiple_choice
What tissue consists of cells that form the body’s structure?
[ "fibrous", "reproductive", "connective", "congenital" ]
C
Relavent Documents: Document 0::: In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from same type cells to act together in a function. Tissues of different types combine to form an organ which has a specific function. The intestinal wall for example is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system. An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs. The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body. Animals Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. 
The sam Document 1::: Outline h1.00: Cytology h2.00: General histology H2.00.01.0.00001: Stem cells H2.00.02.0.00001: Epithelial tissue H2.00.02.0.01001: Epithelial cell H2.00.02.0.02001: Surface epithelium H2.00.02.0.03001: Glandular epithelium H2.00.03.0.00001: Connective and supportive tissues H2.00.03.0.01001: Connective tissue cells H2.00.03.0.02001: Extracellular matrix H2.00.03.0.03001: Fibres of connective tissues H2.00.03.1.00001: Connective tissue proper H2.00.03.1.01001: Ligaments H2.00.03.2.00001: Mucoid connective tissue; Gelatinous connective tissue H2.00.03.3.00001: Reticular tissue H2.00.03.4.00001: Adipose tissue H2.00.03.5.00001: Cartilage tissue H2.00.03.6.00001: Chondroid tissue H2.00.03.7.00001: Bone tissue; Osseous tissue H2.00.04.0.00001: Haemotolymphoid complex H2.00.04.1.00001: Blood cells H2.00.04.1.01001: Erythrocyte; Red blood cell H2.00.04.1.02001: Leucocyte; White blood cell H2.00.04.1.03001: Platelet; Thrombocyte H2.00.04.2.00001: Plasma H2.00.04.3.00001: Blood cell production H2.00.04.4.00001: Postnatal sites of haematopoiesis H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue Document 2::: H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue H2.00.05.2.00001: Striated muscle tissue H2.00.06.0.00001: Nerve tissue H2.00.06.1.00001: Neuron H2.00.06.2.00001: Synapse H2.00.06.2.00001: Neuroglia h3.01: Bones h3.02: Joints h3.03: Muscles h3.04: Alimentary system h3.05: Respiratory system h3.06: Urinary system h3.07: Genital system h3.08: Document 3::: Stroma () is the part of a tissue or organ with a structural or connective role. It is made up of all the parts without specific functions of the organ - for example, connective tissue, blood vessels, ducts, etc. The other part, the parenchyma, consists of the cells that perform the function of the tissue or organ. There are multiple ways of classifying tissues: one classification scheme is based on tissue functions and another analyzes their cellular components. Stromal tissue falls into the "functional" class that contributes to the body's support and movement. The cells which make up stroma tissues serve as a matrix in which the other cells are embedded. Stroma is made of various types of stromal cells. Examples of stroma include: stroma of iris stroma of cornea stroma of ovary stroma of thyroid gland stroma of thymus stroma of bone marrow lymph node stromal cell multipotent stromal cell (mesenchymal stem cell) Structure Stromal connective tissues are found in the stroma; this tissue belongs to the group connective tissue proper. The function of connective tissue proper is to secure the parenchymal tissue, including blood vessels and nerves of the stroma, and to construct organs and spread mechanical tension to reduce localised stress. Stromal tissue is primarily made of extracellular matrix containing connective tissue cells. Extracellular matrix is primarily composed of ground substance - a porous, hydrated gel, made mainly from proteoglycan aggregates - and connective tissue fibers. There are three types of fibers commonly found within the stroma: collagen type I, elastic, and reticular (collagen type III) fibres. Cells Wandering cells - cells that migrate into the tissue from blood stream in response to a variety of stimuli; for example, immune system blood cells causing inflammatory response. Fixed cells - cells that are permanent inhabitants of the tissue. 
Fibroblast - produce and secrete the organic parts of the ground substance and extrace Document 4::: Vertebrates Tendon cells, or tenocytes, are elongated fibroblast type cells. The cytoplasm is stretched between the collagen fibres of the tendon. They have a central cell nucleus with a prominent nucleolus. Tendon cells have a well-developed rough endoplasmic reticulum and they are responsible for synthesis and turnover of tendon fibres and ground substance. Invertebrates Tendon cells form a connecting epithelial layer between the muscle and shell in molluscs. In gastropods, for example, the retractor muscles connect to the shell via tendon cells. Muscle cells are attached to the collagenous myo-tendon space via hemidesmosomes. The myo-tendon space is then attached to the base of the tendon cells via basal hemidesmosomes, while apical hemidesmosomes, which sit atop microvilli, attach the tendon cells to a thin layer of collagen. This is in turn attached to the shell via organic fibres which insert into the shell. Molluscan tendon cells appear columnar and contain a large basal cell nucleus. The cytoplasm is filled with granular endoplasmic reticulum and sparse golgi. Dense bundles of microfilaments run the length of the cell connecting the basal to the apical hemidesmosomes. See also List of human cell types derived from the germ layers List of distinct cell types in the adult human body The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What tissue consists of cells that form the body’s structure? A. fibrous B. reproductive C. connective D. congenital Answer:
ai2_arc-93
multiple_choice
Students studying membranes conducted an experiment using labeled paper cups filled with varying concentrations of red food coloring. After the experiment, the cups were empty and stained. What should be done with used cups?
[ "reuse the cups", "dispose of the cups", "recycle the cups", "relabel the cups" ]
C
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases; (b) decreases; (c) stays the same; (d) impossible to tell / need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology, and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular was 630, while the average for Ecological was 591. On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. 
The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 2::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 3::: Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria. Introduction Traditional exam script marking began in Cambridge 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. 
Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.) Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental. The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel Document 4::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate "what the student can do" and "what the student is ready to learn". Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Students studying membranes conducted an experiment using labeled paper cups filled with varying concentrations of red food coloring. After the experiment, the cups were empty and stained. What should be done with used cups? A. reuse the cups B. dispose of the cups C. recycle the cups D. relabel the cups Answer:
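Document 4 of this record models a domain of knowledge as a finite set Q whose feasible knowledge states are subsets of Q; in the standard Doignon-Falmagne formulation, the family of states contains the empty set and Q and is closed under union. A minimal Python sketch of that membership-and-closure check; the example state family is an invented illustration:

    from itertools import combinations

    def is_knowledge_space(states, domain):
        """Check that `states` (frozensets over `domain`) contains the empty
        set and the full domain, and is closed under pairwise union."""
        if frozenset() not in states or frozenset(domain) not in states:
            return False
        return all((a | b) in states for a, b in combinations(states, 2))

    Q = {"add", "multiply", "factor"}
    K = {frozenset(), frozenset({"add"}), frozenset({"add", "multiply"}),
         frozenset({"add", "multiply", "factor"})}
    print(is_knowledge_space(K, Q))  # -> True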
sciq-2617
multiple_choice
What is formed when humid air near the ground cools below its dew point?
[ "steam", "smoke", "weather", "fog" ]
D
Relavent Documents: Document 0::: The dew point of a given body of air is the temperature to which it must be cooled to become saturated with water vapor. This temperature depends on the pressure and water content of the air. When the air is cooled below the dew point, its moisture capacity is reduced and airborne water vapor will condense to form liquid water known as dew. When this occurs through the air's contact with a colder surface, dew will form on that surface. The dew point is affected by the air's humidity. The more moisture the air contains, the higher its dew point. When the temperature is below the freezing point of water, the dew point is called the frost point, as frost is formed via deposition rather than condensation. In liquids, the analog to the dew point is the cloud point. Humidity If all the other factors influencing humidity remain constant, at ground level the relative humidity rises as the temperature falls; this is because less vapor is needed to saturate the air. In normal conditions, the dew point temperature will not be greater than the air temperature, since relative humidity typically does not exceed 100%. In technical terms, the dew point is the temperature at which the water vapor in a sample of air at constant barometric pressure condenses into liquid water at the same rate at which it evaporates. At temperatures below the dew point, the rate of condensation will be greater than that of evaporation, forming more liquid water. The condensed water is called dew when it forms on a solid surface, or frost if it freezes. In the air, the condensed water is called either fog or a cloud, depending on its altitude when it forms. If the temperature is below the dew point, and no dew or fog forms, the vapor is called supersaturated. This can happen if there are not enough particles in the air to act as condensation nuclei. The dew point depends on how much water vapor the air contains. If the air is very dry and has few water molecules, the dew point is low and surface Document 1::: In atmospheric science, equivalent temperature is the temperature of air in a parcel from which all the water vapor has been extracted by an adiabatic process. Air contains water vapor that has been evaporated into it from liquid sources (lakes, sea, etc.). The energy needed to do that has been taken from the air. Taking a volume of air at temperature T and mixing ratio r, drying it by condensation will restore energy to the airmass. This will depend on the latent heat release as T_e = T + (L_v / c_pd) r, where L_v is the latent heat of evaporation (2400 kJ/kg at 25°C to 2600 kJ/kg at −40°C) and c_pd is the specific heat at constant pressure for air (≈ 1004 J/(kg·K)). Tables exist for exact values of the last two coefficients. See also Wet-bulb temperature Potential temperature Atmospheric thermodynamics Equivalent potential temperature Document 2::: Humidity is the concentration of water vapor present in the air. Water vapor, the gaseous state of water, is generally invisible to the human eye. 
Humidity indicates the likelihood for precipitation, dew, or fog to be present. Humidity depends on the temperature and pressure of the system of interest. The same amount of water vapor results in higher relative humidity in cool air than warm air. A related parameter is the dew point. The amount of water vapor needed to achieve saturation increases as the temperature increases. As the temperature of a parcel of air decreases it will eventually reach the saturation point without adding or losing water mass. The amount of water vapor contained within a parcel of air can vary significantly. For example, a parcel of air near saturation may contain 28 g of water per cubic metre of air at 30 °C, but only 8 g of water per cubic metre of air at 8 °C. Three primary measurements of humidity are widely employed: absolute, relative, and specific. Absolute humidity is expressed as either mass of water vapor per volume of moist air (in grams per cubic meter) or as mass of water vapor per mass of dry air (usually in grams per kilogram). Relative humidity, often expressed as a percentage, indicates a present state of absolute humidity relative to a maximum humidity given the same temperature. Specific humidity is the ratio of water vapor mass to total moist air parcel mass. Humidity plays an important role for surface life. For animal life dependent on perspiration (sweating) to regulate internal body temperature, high humidity impairs heat exchange efficiency by reducing the rate of moisture evaporation from skin surfaces. This effect can be calculated using a heat index table, also known as a humidex. The notion of air "holding" water vapor or being "saturated" by it is often mentioned in connection with the concept of relative humidity. This, however, is misleading: the amount of water vapor that enters (or can enter) a given space at a g Document 3::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases; (b) decreases; (c) stays the same; (d) impossible to tell / need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. 
In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 4::: Haze is traditionally an atmospheric phenomenon in which dust, smoke, and other dry particulates suspended in air obscure visibility and the clarity of the sky. The World Meteorological Organization manual of codes includes a classification of particulates causing horizontal obscuration into categories of fog, ice fog, steam fog, mist, haze, smoke, volcanic ash, dust, sand, and snow. Sources for particles that cause haze include farming (ploughing in dry weather), traffic, industry, windy weather, volcanic activity and wildfires. Seen from afar (e.g. an approaching airplane) and depending on the direction of view with respect to the Sun, haze may appear brownish or bluish, while mist tends to be bluish grey instead. Whereas haze often is considered a phenomenon occurring in dry air, mist formation is a phenomenon in saturated, humid air. However, haze particles may act as condensation nuclei that leads to the subsequent vapor condensation and formation of mist droplets; such forms of haze are known as "wet haze". In meteorological literature, the word haze is generally used to denote visibility-reducing aerosols of the wet type suspended in the atmosphere. Such aerosols commonly arise from complex chemical reactions that occur as sulfur dioxide gases emitted during combustion are converted into small droplets of sulfuric acid when exposed. The reactions are enhanced in the presence of sunlight, high relative humidity, and an absence of air flow (wind). A small component of wet-haze aerosols appear to be derived from compounds released by trees when burning, such as terpenes. For all these reasons, wet haze tends to be primarily a warm-season phenomenon. Large areas of haze covering many thousands of kilometers may be produced under extensive favorable conditions each summer. Air pollution Haze often occurs when suspended dust and smoke particles accumulate in relatively dry air. When weather conditions block the dispersal of smoke and other pollutants they concen The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is formed when humid air near the ground cools below its dew point? A. steam B. smoke C. weather D. fog Answer:
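Documents 0 and 2 of this record tie fog formation to the dew point and relative humidity. A common closed-form estimate, not given in the source documents, is the Magnus approximation; the coefficients below are one conventional choice, valid roughly between 0 and 60 degrees Celsius. A minimal Python sketch:

    import math

    def dew_point_c(temp_c, rel_humidity_pct):
        """Magnus approximation for the dew point in degrees Celsius."""
        a, b = 17.27, 237.7  # one conventional coefficient set
        gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
        return b * gamma / (a - gamma)

    # Humid evening air at 20 C and 95% relative humidity: the dew point is
    # just below air temperature, so slight near-ground cooling produces fog.
    print(round(dew_point_c(20.0, 95.0), 1))  # -> about 19.2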
sciq-2673
multiple_choice
What kind of structure do purines have?
[ "helical stucture", "double ring structure", "single ring structure", "triple ring structure" ]
B
Relavent Documents: Document 0::: Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helixes and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids. The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University. Primary structure The primary structure of a biopolymer is the exact specification of its atomic composition and the chemical bonds connecting those atoms (including stereochemistry). For a typical unbranched, un-crosslinked biopolymer (such as a molecule of a typical intracellular protein, or of DNA or RNA), the primary structure is equivalent to specifying the sequence of its monomeric subunits, such as amino acids or nucleotides. The primary structure of a protein is reported starting from the amino N-terminus to the carboxyl C-terminus, while the primary structure of DNA or RNA molecule is known as the nucleic acid sequence reported from the 5' end to the 3' end. The nucleic acid sequence refers to the exact sequence of nucleotides that comprise the whole molecule. Often, the primary structure encodes sequence motifs that are of functional importance. Some examples of such motif Document 1::: In mathematics, a fibrifold is (roughly) a fiber space whose fibers and base spaces are orbifolds. They were introduced by , who introduced a system of notation for 3-dimensional fibrifolds and used this to assign names to the 219 affine space group types. 184 of these are considered reducible, and 35 irreducible. Irreducible cubic space groups The 35 irreducible space groups correspond to the cubic space group. Irreducible group symbols (indexed 195−230) in Hermann–Mauguin notation, Fibrifold notation, geometric notation, and Coxeter notation: Document 2::: The UCL Faculty of Mathematical and Physical Sciences is one of the 11 constituent faculties of University College London (UCL). The Faculty, the UCL Faculty of Engineering Sciences and the UCL Faculty of the Built Envirornment (The Bartlett) together form the UCL School of the Built Environment, Engineering and Mathematical and Physical Sciences. 
Departments The Faculty currently comprises the following departments: UCL Department of Chemistry UCL Department of Earth Sciences UCL Department of Mathematics Chalkdust is an online mathematics interest magazine published by Department of Mathematics students starting in 2015 UCL Department of Natural Sciences UCL Department of Physics & Astronomy UCL Department of Science and Technology Studies UCL Department of Space & Climate Physics (Mullard Space Science Laboratory) UCL Department of Statistical Science London Centre for Nanotechnology - a joint venture between UCL and Imperial College London established in 2003 following the award of a £13.65m higher education grant under the Science Research Infrastructure Fund. Research centres and institutes The Faculty is closely involved with the following research centres and institutes: UCL Centre for Materials Research UCL Centre for Mathematics and Physics in the Life Sciences and Experimental Biology (CoMPLEX) - an inter-disciplinary virtual centre that seeks to bring together mathematicians, physical scientists, computer scientists and engineers upon the problems posed by complexity in biology and biomedicine. The centre works with 29 departments and Institutes across UCL. It has a MRes/PhD program that requires that its students also belong to at least one of these Departments/Institutes. The centre is based in the Physics Building on the UCL main campus. Centre for Planetary Science at UCL/Birkbeck UCL Clinical Operational Research Unit (CORU) - CORU sits within the Department of Mathematics and is a team of researchers dedicated to applying operational research, Document 3::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. 
Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 4::: In mathematics, the Coxeter complex, named after H. S. M. Coxeter, is a geometrical structure (a simplicial complex) associated to a Coxeter group. Coxeter complexes are the basic objects that allow the construction of buildings; they form the apartments of a building. Construction The canonical linear representation The first ingredient in the construction of the Coxeter complex associated to a Coxeter system (W, S) is a certain representation of W, called the canonical representation of W. Let (W, S) be a Coxeter system with Coxeter matrix M = (m(s, t)). The canonical representation is given by a vector space V with basis of formal symbols (e_s) for s in S, which is equipped with the symmetric bilinear form B(e_s, e_t) = −cos(π / m(s, t)). In particular, B(e_s, e_s) = 1. The action of W on V is then given by σ_s(v) = v − 2B(e_s, v) e_s. This representation has several foundational properties in the theory of Coxeter groups; for instance, B is positive definite if and only if W is finite. It is a faithful representation of W. Chambers and the Tits cone This representation describes W as a reflection group, with the caveat that B might not be positive definite. It becomes important then to distinguish the representation V from its dual V*. The vectors e_s lie in V and have corresponding dual vectors e_s^∨ in V*, given by ⟨e_s^∨, v⟩ = 2B(e_s, v), where the angled brackets indicate the natural pairing between V* and V. Now W acts on V*, and the action is given by (w · f)(v) = f(w⁻¹ · v) for w in W and any f in V*. Then σ_s is a reflection in the hyperplane H_s = {f in V* : f(e_s) = 0}. One has the fundamental chamber C = {f in V* : f(e_s) > 0 for all s in S}; this has faces the so-called walls, the intersections of the closure of C with the hyperplanes H_s. The other chambers can be obtained from C by translation: they are the wC for w in W. The Tits cone is X = ∪_{w in W} w·cl(C), the union over W of the translates of the closure of C. This need not be the whole of V*. Of major importance is the fact that X is convex. The closure of C is a fundamental domain for the action of W on X. The Coxeter complex The Coxeter complex Σ(W, S) of W with respect to V is Σ(W, S) = (X \ {0}) / R_{>0}, where R_{>0} is the multiplicative group of positive reals. Examples Finite dihedral groups The dihedral groups (of order 2n) are Coxeter groups, of corresponding type I_2(n). These have the presentation ⟨s, t | s^2, t^2, (st)^n⟩. The canonical linear representa The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What kind of structure do purines have? A. helical structure B. double ring structure C. single ring structure D. triple ring structure Answer:
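As a small numeric check of the canonical bilinear form in Document 4 of this record: for the dihedral group of type I_2(n), with generators s, t and m(s, t) = n, the Gram matrix of B is 2x2 with determinant sin^2(π/n) > 0, matching the statement that B is positive definite exactly when the group is finite. An illustrative Python sketch:

    import math

    def gram_matrix(n):
        """Gram matrix of the canonical form for type I_2(n):
        B(e_s, e_s) = B(e_t, e_t) = 1 and B(e_s, e_t) = -cos(pi / n)."""
        c = -math.cos(math.pi / n)
        return [[1.0, c], [c, 1.0]]

    for n in (3, 4, 6):
        m = gram_matrix(n)
        det = m[0][0] * m[1][1] - m[0][1] * m[1][0]  # equals sin^2(pi/n)
        print(n, round(det, 3))  # -> 3 0.75, 4 0.5, 6 0.25 (all positive)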
sciq-7276
multiple_choice
As a battery is depleted what happens to its internal resistance?
[ "increases", "stagnates", "changes", "reduces" ]
A
Relavent Documents: Document 0::: A battery is a source of electric power consisting of one or more electrochemical cells with external connections for powering electrical devices. When a battery is supplying power, its positive terminal is the cathode and its negative terminal is the anode. The terminal marked negative is the source of electrons that will flow through an external electric circuit to the positive terminal. When a battery is connected to an external electric load, a redox reaction converts high-energy reactants to lower-energy products, and the free-energy difference is delivered to the external circuit as electrical energy. Historically the term "battery" specifically referred to a device composed of multiple cells; however, the usage has evolved to include devices composed of a single cell. Primary (single-use or "disposable") batteries are used once and discarded, as the electrode materials are irreversibly changed during discharge; a common example is the alkaline battery used for flashlights and a multitude of portable electronic devices. Secondary (rechargeable) batteries can be discharged and recharged multiple times using an applied electric current; the original composition of the electrodes can be restored by reverse current. Examples include the lead–acid batteries used in vehicles and lithium-ion batteries used for portable electronics such as laptops and mobile phones. Batteries come in many shapes and sizes, from miniature cells used to power hearing aids and wristwatches to, at the largest extreme, huge battery banks the size of rooms that provide standby or emergency power for telephone exchanges and computer data centers. Batteries have much lower specific energy (energy per unit mass) than common fuels such as gasoline. In automobiles, this is somewhat offset by the higher efficiency of electric motors in converting electrical energy to mechanical work, compared to combustion engines. History Invention Benjamin Franklin first used the term "battery" in 1749 wh Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases; (b) decreases; (c) stays the same; (d) impossible to tell / need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: Memory effect, also known as battery effect, lazy battery effect, or battery memory, is an effect observed in nickel-cadmium rechargeable batteries that causes them to hold less charge. It describes the situation in which nickel-cadmium batteries gradually lose their maximum energy capacity if they are repeatedly recharged after being only partially discharged. The battery appears to "remember" the smaller capacity. True memory effect The term "memory" came from an aerospace nickel-cadmium application in which the cells were repeatedly discharged to 25% of available capacity (give or take 1%) by exacting computer control, then recharged to 100% capacity without overcharge. This long-term, repetitive cycle régime, with no provision for overcharge, resulted in a loss of capacity beyond the 25% discharge point. True memory cannot exist if any one (or more) of the following conditions holds: (1) batteries achieve full overcharge; (2) discharge is not exactly the same each cycle, within plus or minus 3%; (3) discharge is to less than 1.0 volt per cell. True memory-effect is specific to sintered-plate nickel-cadmium cells, and is exceedingly difficult to reproduce, especially in lower ampere-hour cells. In one particular test program designed to induce the effect, none was found after more than 700 precisely-controlled charge/discharge cycles. In the program, spirally-wound one-ampere-hour cells were used. In a follow-up program, 20-ampere-hour aerospace-type cells were used on a similar test régime; memory effects were observed after a few hundred cycles. Other problems perceived as memory effect Phenomena which are not true memory effects may also occur in battery types other than sintered-plate nickel-cadmium cells. In particular, lithium-based cells, not normally subject to the memory effect, may change their voltage levels so that a virtual decrease of capacity may be perceived by the battery control system. Temporary effects Voltage depression due to long-term over-chargi Document 3::: Self-discharge is a phenomenon in batteries in which internal chemical reactions reduce the stored charge of the battery without any connection between the electrodes or any external circuit. Self-discharge decreases the shelf life of batteries and causes them to have less than a full charge when actually put to use. How fast self-discharge in a battery occurs is dependent on the type of battery, state of charge, charging current, ambient temperature and other factors. 
Primary batteries are not designed for recharging between manufacture and use, so their chemistry must have a much lower self-discharge rate than that of older types of secondary cells; they have lost that advantage, however, with the development of rechargeable secondary cells with very low self-discharge rates, such as NiMH cells. Self-discharge is a chemical reaction, just as closed-circuit discharge is, and tends to occur more quickly at higher temperatures. Storing batteries at lower temperatures thus reduces the rate of self-discharge and preserves the initial energy stored in the battery. Self-discharge is also thought to be reduced as a passivation layer develops on the electrodes over time. Typical self-discharge by battery type Document 4::: A rechargeable battery, storage battery, or secondary cell (formally a type of energy accumulator) is a type of electrical battery which can be charged, discharged into a load, and recharged many times, as opposed to a disposable or primary battery, which is supplied fully charged and discarded after use. It is composed of one or more electrochemical cells. The term "accumulator" is used as it accumulates and stores energy through a reversible electrochemical reaction. Rechargeable batteries are produced in many different shapes and sizes, ranging from button cells to megawatt systems connected to stabilize an electrical distribution network. Several different combinations of electrode materials and electrolytes are used, including lead–acid, zinc–air, nickel–cadmium (NiCd), nickel–metal hydride (NiMH), lithium-ion (Li-ion), lithium iron phosphate (LiFePO4), and lithium-ion polymer (Li-ion polymer). Rechargeable batteries typically initially cost more than disposable batteries but have a much lower total cost of ownership and environmental impact, as they can be recharged inexpensively many times before they need replacing. Some rechargeable battery types are available in the same sizes and voltages as disposable types, and can be used interchangeably with them. Billions of dollars are being invested around the world in research to improve batteries, and industry also focuses on building better batteries. Some characteristics of rechargeable batteries are given below: In rechargeable batteries, energy is stored by applying an external source of power to the chemical substances. The chemical reaction that occurs in them is reversible. Internal resistance is comparatively low. They have a comparatively high self-discharge rate. They have a bulky and complex design. They have high resale value. Applications Devices which use rechargeable batteries include automobile starters, portable consumer devices, light vehicles (such as motorized wheelchairs, golf carts, e The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. As a battery is depleted what happens to its internal resistance? A. increases B. stagnates C. changes D. reduces Answer:
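The adiabatic-expansion conceptual question quoted in Document 1 above can be settled in one line; a minimal derivation, assuming the reversible (quasi-static) expansion of an ideal gas that the question implies:

```latex
\[
T V^{\gamma - 1} = \text{const}, \qquad \gamma = \frac{c_p}{c_v} > 1
\]
\[
V_2 > V_1 \;\Rightarrow\; T_2 = T_1 \left(\frac{V_1}{V_2}\right)^{\gamma - 1} < T_1
\]
% The gas cools: the correct choice is "decreases".
```

The work done against the surroundings comes out of internal energy, which is why the quasi-static assumption matters: in a free (Joule) expansion, where no work is done, the temperature of an ideal gas stays the same.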
sciq-5230
multiple_choice
The muscles in arteries and veins that are largely responsible for regulation of blood pressure are known as what type?
[ "smooth muscles", "elongated muscles", "vascular muscles", "opposing muscles" ]
A
Relavent Documents: Document 0::: Vascular recruitment is the increase in the number of perfused capillaries in response to a stimulus. I.e., the more you exercise regularly, the more oxygen can reach your muscles. Vascular recruitment may also be called capillary recruitment. Vascular recruitment in skeletal muscle The term «vascular recruitment» or «capillary recruitment» usually refers to the increase in the number perfused capillaries in skeletal muscle in response to a stimulus. The most important stimulus in humans is regular exercise. Vascular recruitment in skeletal muscle is thought to enhance the capillary surface area for oxygen exchange and decrease the oxygen diffusion distance. Other stimuli are possible. Insulin can act as a stimulus for vascular recruitment in skeletal muscle. This process may also improve glucose delivery to skeletal muscle by increasing the surface area for diffusion. That insulin can act in this way has been proposed based on increases in limb blood flow and skeletal muscle blood volume which occurred after hyperinsulinemia. The exact extent of capillary recruitment in intact skeletal muscle in response to regular exercise or insulin is unknown, because non-invasive measurement techniques are not yet extremely precise. Being overweight or obese may negatively interfere with vascular recruitment in skeletal muscle. Vascular recruitment in the lung Vascular recruitment in the lung (i.e., in the pulmonary microcirculation) may be noteworthy to healthcare professionals in emergency medicine, because it may increase evidence of lung injury, and increase pulmonary capillary protein leak. Vascular recruitment in the brain Vascular recruitment in the brain is thought to lead to new capillaries and increase the cerebral blood flow. Controversy The existence of vascular recruitment in response to a stimulus has been disputed by some researchers. However, most researchers accept that vascular recruitment exists. Document 1::: Pathophysiology is a study which explains the function of the body as it relates to diseases and conditions. The pathophysiology of hypertension is an area which attempts to explain mechanistically the causes of hypertension, which is a chronic disease characterized by elevation of blood pressure. Hypertension can be classified by cause as either essential (also known as primary or idiopathic) or secondary. About 90–95% of hypertension is essential hypertension. Some authorities define essential hypertension as that which has no known explanation, while others define its cause as being due to overconsumption of sodium and underconsumption of potassium. Secondary hypertension indicates that the hypertension is a result of a specific underlying condition with a well-known mechanism, such as chronic kidney disease, narrowing of the aorta or kidney arteries, or endocrine disorders such as excess aldosterone, cortisol, or catecholamines. Persistent hypertension is a major risk factor for hypertensive heart disease, coronary artery disease, stroke, aortic aneurysm, peripheral artery disease, and chronic kidney disease. Cardiac output and peripheral resistance are the two determinants of arterial pressure. Cardiac output is determined by stroke volume and heart rate; stroke volume is related to myocardial contractility and to the size of the vascular compartment. Peripheral resistance is determined by functional and anatomic changes in small arteries and arterioles. 
Genetics Single gene mutations can cause Mendelian forms of high blood pressure; ten genes have been identified which cause these monogenic forms of hypertension. These mutations affect blood pressure by altering kidney salt handling. There is greater similarity in blood pressure within families than between families, which indicates a form of inheritance, and this is not due to shared environmental factors. With the aid of genetic analysis techniques, a statistically significant linkage of blood pressure to Document 2::: Vascular remodelling is a process which occurs when an immature heart begins contracting, pushing fluid through the early vasculature. The process typically begins at day 22, and continues to the tenth week of human embryogenesis. This first passage of fluid initiates a signal cascade and cell movement based on physical cues including shear stress and circumferential stress, which is necessary for the remodelling of the vascular network, arterial-venous identity, angiogenesis, and the regulation of genes through mechanotransduction. This embryonic process is necessary for the future stability of the mature vascular network. Vasculogenesis is the initial establishment of the components of the blood vessel network, or vascular tree. This is dictated by genetic factors and has no inherent function other than to lay down the preliminary outline of the circulatory system. Once fluid flow begins, biomechanical and hemodynamic inputs are applied to the system set up by vasculogenesis, and the active remodelling process can begin. Physical cues such as pressure, velocity, flow patterns, and shear stress are known to act on the vascular network in a number of ways, including branching morphogenesis, enlargement of vessels in high-flow areas, angiogenesis, and the development of vein valves. The mechanotransduction of these physical cues to endothelial and smooth muscle cells in the vascular wall can also trigger the promotion or repression of certain genes which are responsible for vasodilation, cell alignment, and other shear stress-mitigating factors. This relationship between genetics and environment is not clearly understood, but researchers are attempting to clarify it by combining reliable genetic techniques, such as genetically-ablated model organisms and tissues, with new technologies developed to measure and track flow patterns, velocity profiles, and pressure fluctuations in vivo. Both in vivo study and modelling are necessary tools to understand this complex pr Document 3::: Compliance is the ability of a hollow organ (vessel) to distend and increase volume with increasing transmural pressure or the tendency of a hollow organ to resist recoil toward its original dimensions on application of a distending or compressing force. It is the reciprocal of "elastance", hence elastance is a measure of the tendency of a hollow organ to recoil toward its original dimensions upon removal of a distending or compressing force. Blood vessels The terms elastance and compliance are of particular significance in cardiovascular physiology and respiratory physiology. In compliance, an increase in volume occurs in a vessel when the pressure in that vessel is increased. The tendency of the arteries and veins to stretch in response to pressure has a large effect on perfusion and blood pressure. This physically means that blood vessels with a higher compliance deform more easily than lower compliance blood vessels under the same pressure and volume conditions.
Venous compliance is approximately 30 times larger than arterial compliance. Compliance is calculated using the following equation, where ΔV is the change in volume (mL), and ΔP is the change in pressure (mmHg): C = ΔV/ΔP. Physiologic compliance is generally in agreement with the above and adds dP/dt as a common academic physiologic measurement of both pulmonary and cardiac tissues. Adaptation of equations initially applied to rubber and latex allows modeling of the dynamics of pulmonary and cardiac tissue compliance. Veins have a much higher compliance than arteries (largely due to their thinner walls). Veins which are abnormally compliant can be associated with edema. Pressure stockings are sometimes used to externally reduce compliance, and thus keep blood from pooling in the legs. Vasodilation and vasoconstriction are complex phenomena; they are functions not merely of the fluid mechanics of pressure and tissue elasticity but also of active homeostatic regulation with hormones and cell signaling, in which Document 4::: Low pressure baroreceptors are baroreceptors that relay information derived from blood pressure within the autonomic nervous system. They are stimulated by stretching of the vessel wall. They are located in large systemic veins and in the walls of the atria of the heart, and pulmonary vasculature. Low pressure baroreceptors are also referred to as volume receptors and cardiopulmonary baroreceptors. Structure There are two types of cardiopulmonary baroreceptors. Type A receptors and Type B receptors are both within the atria of the heart. Type A receptors are activated by wall tension, which develops by atrial contraction during ventricular diastole. Type B receptors are activated by wall stretch, which develops by atrial filling during ventricular systole. In the right atrium, the stretch receptors occur at the junction of the venae cavae. In the left atrium, the junction is at the pulmonary veins. Function Low pressure baroreceptors are involved in regulation of the blood volume. The blood volume determines the mean pressure throughout the system, especially the venous side where most of the blood is held. Low pressure baroreceptors have both circulatory and renal effects, which produce changes in hormone secretion. These secretions can affect the retention of salt and water as well as influencing the intake of salt and water within the kidneys. These renal effects allow the receptors to change the longer-term mean pressure. Through the vagal nerve, impulses are transmitted from the atria to the vagal center of the medulla. This causes a reduction in the sympathetic outflow to the kidney, which results in decreased renal blood flow and decreased urine output. This same sympathetic outflow is increased to the sinus node in the atria, which causes increased heart rate/cardiac output. These cardiopulmonary receptors also inhibit vagal stimulation in the vasoconstrictor center of the medulla resulting in decreased release of angiotensin, aldosterone, and vasopressin.[1] The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The muscles in arteries and veins that are largely responsible for regulation of blood pressure are known as what type? A. smooth muscles B. elongated muscles C. vascular muscles D. opposing muscles Answer:
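The compliance relation restored above (C = ΔV/ΔP) is simple enough to compute directly; a minimal Python sketch, with illustrative numbers chosen only to echo the roughly 30-fold vein-to-artery difference described in Document 3, not physiological reference values:

```python
# Sketch of the compliance relation from Document 3 above: C = dV / dP.
# The vessel values below are illustrative placeholders, not measured data.

def compliance(delta_volume_ml: float, delta_pressure_mmhg: float) -> float:
    """Compliance C = change in volume / change in pressure (mL/mmHg)."""
    return delta_volume_ml / delta_pressure_mmhg

# Hypothetical example: the same 5 mmHg pressure rise distends a vein far
# more than an artery, consistent with venous compliance being roughly
# 30 times the arterial value.
artery_c = compliance(delta_volume_ml=1.0, delta_pressure_mmhg=5.0)   # 0.2 mL/mmHg
vein_c = compliance(delta_volume_ml=30.0, delta_pressure_mmhg=5.0)    # 6.0 mL/mmHg

print(f"arterial compliance: {artery_c:.1f} mL/mmHg")
print(f"venous compliance:   {vein_c:.1f} mL/mmHg")
print(f"ratio (vein/artery): {vein_c / artery_c:.0f}x")
```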
sciq-7792
multiple_choice
When the sun is below the horizon and thus not visible on a direct line, the light path will bend slightly, making the sun visible by what?
[ "infrared", "refraction", "thermal", "reflection" ]
B
Relavent Documents: Document 0::: Limb darkening is an optical effect seen in stars (including the Sun) and planets, where the central part of the disk appears brighter than the edge, or limb. Its understanding offered early solar astronomers an opportunity to construct models with such gradients. This encouraged the development of the theory of radiative transfer. Basic theory Optical depth, a measure of the opacity of an object or part of an object, combines with effective temperature gradients inside the star to produce limb darkening. The light seen is approximately the integral of all emission along the line of sight modulated by the optical depth to the viewer (i.e. 1/e times the emission at 1 optical depth, 1/e² times the emission at 2 optical depths, etc.). Near the center of the star, optical depth is effectively infinite, causing approximately constant brightness. However, the effective optical depth decreases with increasing radius due to lower gas density and a shorter line of sight distance through the star, producing a gradual dimming, until it becomes zero at the apparent edge of the star. The effective temperature of the photosphere also decreases with increasing distance from the center of the star. The radiation emitted from a gas is approximately black-body radiation, the intensity of which is proportional to the fourth power of the temperature. Therefore, even in line of sight directions where the optical depth is effectively infinite, the emitted energy comes from cooler parts of the photosphere, resulting in less total energy reaching the viewer. The temperature in the atmosphere of a star does not always decrease with increasing height. For certain spectral lines, the optical depth is greatest in regions of increasing temperature. In this scenario, the phenomenon of "limb brightening" is seen instead. In the Sun, the existence of a temperature minimum region means that limb brightening should start to dominate at far-infrared or radio wavelengths. Above the lower atmosphe Document 1::: The circumzenithal arc, also called the circumzenith arc (CZA), upside-down rainbow, and the Bravais arc, is an optical phenomenon similar in appearance to a rainbow, but belonging to the family of halos arising from refraction of sunlight through ice crystals, generally in cirrus or cirrostratus clouds, rather than from raindrops. The arc is located at a considerable distance (approximately 46°) above the observed Sun and at most forms a quarter of a circle centered on the zenith. It has been called "a smile in the sky", its first impression being that of an upside-down rainbow. The CZA is one of the brightest and most colorful members of the halo family. Its colors, ranging from violet on top to red at the bottom, are purer than those of a rainbow because there is much less overlap in their formation. The intensity distribution along the circumzenithal arc requires consideration of several effects: Fresnel's reflection and transmission amplitudes, atmospheric attenuation, chromatic dispersion (i.e. the width of the arc), azimuthal angular dispersion (ray bundling), and geometrical constraints. In effect, the CZA is brightest when the Sun is observed at about 20°. Contrary to popular belief, the CZA is not a rare phenomenon, but it tends to be overlooked since it occurs so far overhead.
It is worthwhile to look out for it when sun dogs are visible, since the same type of ice crystals that cause them (plate-shaped hexagonal prisms in horizontal orientation) is responsible for the CZA. Formation The light that forms the CZA enters an ice crystal through its flat top face, and exits through a side prism face. The refraction of almost parallel sunlight through what is essentially a 90-degree prism accounts for the wide color separation and the purity of color. The CZA can only form when the sun is at an altitude lower than 32.2°. The CZA is brightest when the sun is at 22° above the horizon, which causes sunlight to enter and exit the crystals at the minimum d Document 2::: Sunspot drawing or sunspot sketching is the act of drawing sunspots. Sunspots are darker spots on the Sun's photosphere. Their prediction is very important for radio communication because they are strongly associated with solar activity, which can seriously damage radio equipment. History Sunspots were probably first drawn by the English monk John of Worcester on 8 December 1128. There are records of observing sunspots from 28 BC, but that is the first known drawing of sunspots, almost 500 years before the telescope. His drawing seems to date from around solar maximum. Five days later, the Korean astronomer saw the northern lights above his country, so this is also the first prediction of a coronal mass ejection. In 1612, Galileo Galilei was writing letters on sunspots to Mark Welser. They were published in 1613. In his telescope, he saw some darker spots on the Sun's surface. It seems like he was observing the Sun and drawing sunspots without any filter, which is very hard. He said, "The spots seen at sunset are observed to change the place from one evening to the next, descending from the part of the sun then uppermost, and the morning spots ascend from the part then below ...". From there it seems that he observed the Sun at sunset, but not at sunrise because of the high horizon of the Apennines. It is also possible that he was referring to Scheiner's observation, where he first saw that the Sun is rotating. He complained that he couldn't observe the Sun every morning and evening because of low clouds and so he couldn't see their motion with confidence. He probably never observed them in the middle of the day. In the same year, his student Benedetto Castelli invented a new method for observing and drawing sunspots, the projection method. Probably, he was never looking at the Sun directly through the telescope. The Mount Wilson observatory started drawing sunspots by hand in 1917. This tradition still continues today. The early drawers did not draw their shapes and positions Document 3::: A sunbreak is a natural phenomenon in which sunlight obscured over a relatively large area penetrates the obscuring material in a localized space. The typical example is of sunlight shining through a hole in cloud cover. A sunbreak piercing clouds normally produces a visible shaft of light reflected by atmospheric dust and/or moisture, called a sunbeam. Another form of sunbreak occurs when sunlight passes into an area otherwise shadowed by surrounding large buildings through a gap temporarily aligned with the position of the sun. The word is considered by some to have origins in Pacific Northwest English. In art Artists such as cartoonists and filmmakers often use sunbreak to show protection or relief being brought upon an area of land by God or a receding storm.
Document 4::: The umbra, penumbra and antumbra are three distinct parts of a shadow, created by any light source after impinging on an opaque object. Assuming no diffraction, for a collimated beam (such as a point source) of light, only the umbra is cast. These names are most often used for the shadows cast by celestial bodies, though they are sometimes used to describe levels, such as in sunspots. Umbra The umbra (Latin for "shadow") is the innermost and darkest part of a shadow, where the light source is completely blocked by the occluding body. An observer within the umbra experiences a total occultation. The umbra of a round body occluding a round light source forms a right circular cone. When viewed from the cone's apex, the two bodies appear the same size. The distance from the Moon to the apex of its umbra is roughly equal to that between the Moon and Earth: about 384,000 km. Since Earth's diameter is 3.7 times the Moon's, its umbra extends correspondingly farther: roughly 1.4 million km. Penumbra The penumbra (from the Latin paene "almost, nearly") is the region in which only a portion of the light source is obscured by the occluding body. An observer in the penumbra experiences a partial eclipse. An alternative definition is that the penumbra is the region where some or all of the light source is obscured (i.e., the umbra is a subset of the penumbra). For example, NASA's Navigation and Ancillary Information Facility defines that a body in the umbra is also within the penumbra. Antumbra The antumbra (from Latin ante, "before") is the region from which the occluding body appears entirely within the disc of the light source. An observer in this region experiences an annular eclipse, in which a bright ring is visible around the eclipsing body. If the observer moves closer to the light source, the apparent size of the occluding body increases until it causes a full umbra. See also Antisolar point Earth's shadow The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. When the sun is below the horizon and thus not visible on a direct line, the light path will bend slightly, making the sun visible by what? A. infrared B. refraction C. thermal D. reflection Answer:
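A back-of-the-envelope sketch of the refraction effect the question above refers to; the 34-arcminute horizontal-refraction figure is the standard tabulated value for average atmospheric conditions, and the constant-offset model below is a deliberate simplification rather than a full refraction model:

```python
# Sketch of why refraction (answer B) keeps the Sun visible after geometric
# sunset. Near the horizon, atmospheric refraction lifts the apparent
# position of the Sun by roughly 34 arcminutes (the standard value used in
# sunrise/sunset tables), while the Sun's angular diameter is about
# 32 arcminutes -- so the whole disc can appear above the horizon even when
# it is geometrically below it.

REFRACTION_AT_HORIZON_ARCMIN = 34.0  # standard horizontal refraction
SUN_ANGULAR_DIAMETER_ARCMIN = 32.0

def apparent_altitude_arcmin(true_altitude_arcmin: float) -> float:
    """Crude model: add the full horizontal refraction near the horizon."""
    return true_altitude_arcmin + REFRACTION_AT_HORIZON_ARCMIN

# Sun's centre 20 arcminutes below the geometric horizon (hypothetical case):
true_alt = -20.0
app_alt = apparent_altitude_arcmin(true_alt)
upper_limb = app_alt + SUN_ANGULAR_DIAMETER_ARCMIN / 2
print(f"apparent centre altitude: {app_alt:+.0f}'   upper limb: {upper_limb:+.0f}'")
# Both values come out positive: the Sun still appears above the horizon.
```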
sciq-10822
multiple_choice
What promotes cell division and is necessary for growing plants in tissue culture?
[ "auxins", "polyamines", "cytokinins", "mitochondria" ]
C
Relavent Documents: Document 0::: Primary growth in plants is growth that takes place from the tips of roots or shoots. It leads to lengthening of roots and stems and sets the stage for organ formation. It is distinguished from secondary growth that leads to widening. Plant growth takes place in well defined plant locations. Specifically, the cell division and differentiation needed for growth occurs in specialized structures called meristems. These consist of undifferentiated cells (meristematic cells) capable of cell division. Cells in the meristem can develop into all the other tissues and organs that occur in plants. These cells continue to divide until they differentiate and then lose the ability to divide. Thus, the meristems produce all the cells used for plant growth and function. At the tip of each stem and root, an apical meristem adds cells to their length, resulting in the elongation of both. Examples of primary growth are the rapid lengthening growth of seedlings after they emerge from the soil and the penetration of roots deep into the soil. Furthermore, all plant organs arise ultimately from cell divisions in the apical meristems, followed by cell expansion and differentiation. In contrast, a growth process that involves thickening of stems takes place within lateral meristems that are located throughout the length of the stems. The lateral meristems of larger plants also extend into the roots. This thickening is secondary growth and is needed to give mechanical support and stability to the plant. The functions of a plant's growing tips – its apical (or primary) meristems – include: lengthening through cell division and elongation; organising the development of leaves along the stem; creating platforms for the eventual development of branches along the stem; laying the groundwork for organ formation by providing a stock of undifferentiated or incompletely differentiated cells that later develop into fully differentiated cells, thereby ultimately allowing the "spatial deployment Document 1::: Micropropagation or tissue culture is the practice of rapidly multiplying plant stock material to produce many progeny plants, using modern plant tissue culture methods. Micropropagation is used to multiply a wide variety of plants, such as those that have been genetically modified or bred through conventional plant breeding methods. It is also used to provide a sufficient number of plantlets for planting from seedless plants, plants that do not respond well to vegetative reproduction or where micropropagation is the cheaper means of propagating (e.g. Orchids). Cornell University botanist Frederick Campion Steward discovered and pioneered micropropagation and plant tissue culture in the late 1950s and early 1960s. Steps In short, steps of micropropagation can be divided into four stages: Selection of mother plant Multiplication Rooting and acclimatizing Transfer new plant to soil Selection of mother plant Micropropagation begins with the selection of plant material to be propagated. The plant tissues are removed from an intact plant in a sterile condition. Clean stock materials that are free of viruses and fungi are important in the production of the healthiest plants. Once the plant material is chosen for culture, the collection of explant(s) begins and is dependent on the type of tissue to be used; including stem tips, anthers, petals, pollen and other plant tissues. 
The explant material is then surface sterilized, usually in multiple courses of bleach and alcohol washes, and finally rinsed in sterilized water. This small portion of plant tissue, sometimes only a single cell, is placed on a growth medium, typically containing macro- and micronutrients, water, sucrose as an energy source and one or more plant growth regulators (plant hormones). Usually the medium is thickened with a gelling agent, such as agar, to create a gel which supports the explant during growth. Some plants are easily grown on simple media, but others require more complicated media f Document 2::: The quiescent centre is a group of cells, up to 1,000 in number, in the form of a hemisphere, with the flat face toward the root tip of vascular plants. It is a region in the apical meristem of a root where cell division proceeds very slowly or not at all, but the cells are capable of resuming meristematic activity when the tissue surrounding them is damaged. Cells of root apical meristems do not all divide at the same rate. Determinations of relative rates of DNA synthesis show that primary roots of Zea, Vicia and Allium have quiescent centres to the meristems, in which the cells divide rarely or never in the course of normal root growth (Clowes, 1958). Such a quiescent centre includes the cells at the apices of the histogens of both stele and cortex. Its presence can be deduced from the anatomy of the apex in Zea (Clowes, 1958), but not in the other species which lack discrete histogens. History In 1953, during the course of analysing the organization and function of the root apices, Frederick Albert Lionel Clowes (born 10 September 1921), at the School of Botany (now Department of Plant Sciences), University of Oxford, proposed the term ‘cytogenerative centre’ to denote ‘the region of an apical meristem from which all future cells are derived’. This term had been suggested to him by Mr Harold K. Pusey, a lecturer in embryology at the Department of Zoology and Comparative Anatomy at the same university. The 1953 paper of Clowes reported results of his experiments on Fagus sylvatica and Vicia faba, in which small oblique and wedge-shaped excisions were made at the tip of the primary root, at the most distal level of the root body, near the boundary with the root cap. The results of these experiments were striking and showed that: the root which grew on following the excision was normal at the undamaged meristem side; the nonexcised meristem portion contributed to the regeneration of the excised portion; the regenerated part of the root had abnormal patterning and Document 3::: Plant embryonic development, also called plant embryogenesis, is a process that occurs after the fertilization of an ovule to produce a fully developed plant embryo. This is a pertinent stage in the plant life cycle that is followed by dormancy and germination. The zygote produced after fertilization must undergo various cellular divisions and differentiations to become a mature embryo. An end-stage embryo has five major components including the shoot apical meristem, hypocotyl, root meristem, root cap, and cotyledons. Unlike the embryonic development in animals, and specifically in humans, plant embryonic development results in an immature form of the plant, lacking most structures like leaves, stems, and reproductive structures. However, both plants and animals, including humans, pass through a phylotypic stage that evolved independently and that causes a developmental constraint limiting morphological diversification.
Morphogenic events Embryogenesis occurs naturally as a result of single or double fertilization of the ovule, giving rise to two distinct structures: the plant embryo and the endosperm, which go on to develop into a seed. The zygote goes through various cellular differentiations and divisions in order to produce a mature embryo. These morphogenic events form the basic cellular pattern for the development of the shoot-root body and the primary tissue layers; they also program the regions of meristematic tissue formation. The following morphogenic events are particular to eudicots only, not monocots. Plant Following fertilization, the zygote and endosperm are present within the ovule, as seen in stage I of the illustration on this page. Then the zygote undergoes an asymmetric transverse cell division that gives rise to two cells - a small apical cell resting above a large basal cell. These two cells are very different, and give rise to different structures, establishing polarity in the embryo. Apical cell: The small apical cell is on the top and contains Document 4::: In botany, secondary growth is the growth that results from cell division in the cambia or lateral meristems and that causes the stems and roots to thicken, while primary growth is growth that occurs as a result of cell division at the tips of stems and roots, causing them to elongate, and gives rise to primary tissue. Secondary growth occurs in most seed plants, but monocots usually lack secondary growth. If they do have secondary growth, it differs from the typical pattern of other seed plants. The formation of secondary vascular tissues from the cambium is a characteristic feature of dicotyledons and gymnosperms. In certain monocots, the vascular tissues are also increased after the primary growth is completed but the cambium of these plants is of a different nature. In the living pteridophytes this feature is extremely rare, only occurring in Isoetes. Lateral meristems In many vascular plants, secondary growth is the result of the activity of the two lateral meristems, the cork cambium and vascular cambium. Arising from lateral meristems, secondary growth increases the width of the plant root or stem, rather than its length. As long as the lateral meristems continue to produce new cells, the stem or root will continue to grow in diameter. In woody plants, this process produces wood, and shapes the plant into a tree with a thickened trunk. Because this growth usually ruptures the epidermis of the stem or roots, plants with secondary growth usually also develop a cork cambium. The cork cambium gives rise to thickened cork cells to protect the surface of the plant and reduce water loss. If this is kept up over many years, this process may produce a layer of cork. In the case of the cork oak it will yield harvestable cork. In nonwoody plants Secondary growth also occurs in many nonwoody plants, e.g. tomato, potato tuber, carrot taproot and sweet potato tuberous root. A few long-lived leaves also have secondary growth. Abnormal secondary growth Abnormal seco The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What promotes cell division and is necessary for growing plants in tissue culture? A. auxins B. polyamines C. cytokinins D. mitochondria Answer:
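The cytokinin question above rests on a classic result worth making concrete: in the Skoog-Miller experiments, it is the ratio of cytokinin to auxin in the culture medium that steers what the cultured tissue forms. A rough Python sketch of that rule of thumb; the threshold and concentration values are illustrative only, since real protocols are species- and tissue-specific:

```python
# Sketch of the classic Skoog-Miller rule of thumb for plant tissue culture:
# cytokinins drive cell division, and the *ratio* of cytokinin to auxin in
# the medium steers organogenesis. Thresholds below are hypothetical
# placeholders, not values from any published protocol.

def expected_outcome(cytokinin_mg_l: float, auxin_mg_l: float) -> str:
    """Predict the qualitative outcome from hormone concentrations (mg/L)."""
    if auxin_mg_l == 0:
        return "shoot formation (cytokinin only)"
    ratio = cytokinin_mg_l / auxin_mg_l
    if ratio > 10:
        return "shoot formation (high cytokinin:auxin)"
    if ratio < 0.1:
        return "root formation (low cytokinin:auxin)"
    return "undifferentiated callus growth (balanced ratio)"

print(expected_outcome(2.0, 0.1))   # high ratio  -> shoots
print(expected_outcome(0.05, 1.0))  # low ratio   -> roots
print(expected_outcome(1.0, 1.0))   # balanced    -> callus
```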
sciq-9145
multiple_choice
Where does electric power originate?
[ "coal mines", "magnets", "the Sun", "power plants" ]
D
Relavent Documents: Document 0::: Power engineering, also called power systems engineering, is a subfield of electrical engineering that deals with the generation, transmission, distribution, and utilization of electric power, and the electrical apparatus connected to such systems. Although much of the field is concerned with the problems of three-phase AC power – the standard for large-scale power transmission and distribution across the modern world – a significant fraction of the field is concerned with the conversion between AC and DC power and the development of specialized power systems such as those used in aircraft or for electric railway networks. Power engineering draws the majority of its theoretical base from electrical engineering and mechanical engineering. History Pioneering years Electricity became a subject of scientific interest in the late 17th century. Over the next two centuries a number of important discoveries were made including the incandescent light bulb and the voltaic pile. Probably the greatest discovery with respect to power engineering came from Michael Faraday who in 1831 discovered that a change in magnetic flux induces an electromotive force in a loop of wire—a principle known as electromagnetic induction that helps explain how generators and transformers work. In 1881 two electricians built the world's first power station at Godalming in England. The station employed two waterwheels to produce an alternating current that was used to supply seven Siemens arc lamps at 250 volts and thirty-four incandescent lamps at 40 volts. However supply was intermittent and in 1882 Thomas Edison and his company, The Edison Electric Light Company, developed the first steam-powered electric power station on Pearl Street in New York City. The Pearl Street Station consisted of several generators and initially powered around 3,000 lamps for 59 customers. The power station used direct current and operated at a single voltage. Since the direct current power could not be easily transf Document 1::: This timeline outlines the key developments in the United Kingdom electricity industry from the start of electricity supplies in the 1870s to the present day. It identifies significant developments in technology for the generation, transmission and use of electricity; outlines developments in the structure of the industry including key organisations and facilities; and records the legislation and regulations that have governed the UK electricity industry.   The first part is a chronological table of significant events; the second part is a list of local acts of Parliament (1879–1948) illustrating the growth of electricity supplies. Significant events The following is a list of significant events in the history of the electricity sector in the United Kingdom. Local legislation timeline In addition to the Public General Acts on electricity supply given in the above table, there were also Local Acts. The Electric Lighting Acts 1882 to 1909 permitted local authorities and companies to apply to the Board of Trade for provisional orders and licences to supply electricity. The orders were confirmed by local Electric Lighting Orders Confirmation Acts. Local authorities and companies could also obtain Local Acts for electricity supply. A sample of Local Acts is given in the table below. Note that Local Acts have a chapter number for the relevant year in lower-case Roman numerals. 
See also Energy policy of the United Kingdom Energy use and conservation in the United Kingdom Energy switching services in the UK Document 2::: The following is a chronology of discoveries concerning the magnetosphere. 1600 - William Gilbert in London suggests the Earth is a giant magnet. 1741 - Hiorter and Anders Celsius note that the polar aurora is accompanied by a disturbance of the magnetic needle. 1820 - Hans Christian Ørsted discovers electric currents create magnetic effects. André-Marie Ampère deduces that magnetism is basically the force between electric currents. 1833 - Carl Friedrich Gauss and Wilhelm Weber work out the mathematical theory for separating the inner and outer magnetospheric sources of Earth's magnetic field. 1843 - Samuel Schwabe, a German amateur astronomer, shows the existence of an 11-year sunspot cycle. 1859 - Richard Carrington in England observes a solar flare; 17 hours later a large magnetic storm begins. 1892 - George Ellery Hale introduces the spectroheliograph, observing the Sun in hydrogen light from the chromosphere, a sensitive way of detecting flares. He confirms the connection between flares and magnetic storms. 1900-3 - Kristian Birkeland experiments with beams of electrons aimed at a magnetized sphere ("terrella") in a vacuum chamber. The electrons hit near the magnetic poles, leading him to propose that the polar aurora is created by electron beams from the Sun. Birkeland also observes magnetic disturbances associated with the aurora, suggesting to him that localized "polar magnetic storms" exist in the auroral zone. 1902 - Marconi successfully sends radio signals across the Atlantic Ocean. Oliver Heaviside suggests that the radio waves found their way around the curving Earth because they were reflected from an electrically conducting layer at the top of the atmosphere. 1926 - Gregory Breit and Merle Tuve measure the distance to the conducting layer—which R. Watson-Watt proposes naming "ionosphere"—by measuring the time needed for a radio signal to bounce back. 1930-1 - After Birkeland's "electron beam" theory is disproved, Sydney Chapman and Vincent Ferrar Document 3::: The Manga Guide is a series of educational Japanese manga books. Each volume explains a particular subject in science or mathematics. The series is published in Japan by Ohmsha, in America by No Starch Press, in France by H&K, in Italy by L'Espresso, in Malaysia by Pelangi, and in Taiwan by 世茂出版社. Different volumes are written by different authors. Volume list As of February 18, 2023, the series consists of 50 volumes in Japan. Fourteen of them have been published in English and six in French so far, with more planned, including one on sociology. In contrast, 49 of them have been published and translated in Chinese. One of the books has been translated into Swedish. The Manga Guide to Electricity This 207-page guide consists of five chapters, excluding the preface, prologue, and epilogue. It explains fundamental concepts in the study of electricity, including Ohm's law and Fleming's rules. There are written explanations after each manga chapter. An index and two pages to write notes on are provided. The story begins with Rereko, an average high-school student who lives in Electopia (the land of electricity), failing her final electricity exam. She was forced to skip her summer vacation and go to Earth for summer school.
The high school teacher Teteka sensei gave her a “transdimensional walkie-talkie and observation robot” named Yonosuke, which she would use later for going back and forth to Earth. Rereko then met her mentor Hikaru sensei, who did electrical engineering research at a university in Tokyo, Japan. Hikaru sensei explained to Rereko the basic components of electricity with occasional humorous moments. In the fifth chapter, Hikaru sensei told Rereko that her studies were over. Yonosuke soon received Electopia’s call to pick Rereko up. Hikaru sensei told her that he learned a lot from teaching her, and she should keep at it, even back on Electopia. Rereko told Hikaru sensei to keep working on his research and clean his room often. Her sentence was interrupted, and she wa Document 4::: The Bernard Price Memorial Lecture is the premier annual lecture of the South African Institute of Electrical Engineers. It is of general scientific or engineering interest and is given by an invited guest, often from overseas, at several of the major centres in South Africa. The main lecture and accompanying dinner are usually held at the University of Witwatersrand and it is also presented in the space of one week at other centres, typically Cape Town, Durban, East London and Port Elizabeth. The Lecture is named in memory of the eminent electrical engineer Bernard Price. The first Lecture was held in 1951 and it has occurred as an annual event ever since. Lecturers 1951 Basil Schonland 1952 A M Jacobs 1953 H J Van Eck 1954 J M Meek 1955 Frank Nabarro 1956 A L Hales 1957 P G Game 1958 Colin Cherry 1959 Thomas Allibone 1960 M G Say 1961 Willis Jackson 1963 W R Stevens 1964 William Pickering 1965 G H Rawcliffe 1966 Harold Bishop 1967 Eric Eastwood 1968 F J Lane 1969 A H Reeves 1970 Andrew R Cooper 1971 Herbert Haslegrave 1972 W J Bray 1973 R Noser 1974 D Kind 1975 L Kirchmayer 1976 S Jones 1977 J Johnson 1978 T G E Cockbain 1979 A R Hileman 1980 James Redmond 1981 L M Muntzing 1982 K F Raby 1983 R Isermann 1984 M N John 1985 J W L de Villiers 1986 Derek Roberts 1987 Wolfram Boeck 1988 Karl Gehring 1989 Leonard Sagan 1990 GKF Heyner 1991 P S Blythin 1992 P M Neches 1993 P Radley 1994 P R Rosen 1995 F P Sioshansi 1996 J Taylor 1997 M Chamia 1998 C Gellings 1999 M W Kennedy 2000 John Midwinter 2001 Pragasen Pillay 2002 Polina Bayvel 2003 Case Rijsdijk 2004 Frank Larkins 2005 Igor Aleksander 2006 Kevin Warwick 2007 Skip Hatfield 2008 Sami Solanki 2009 William Gruver 2010 Glenn Ricart 2011 Philippe Paelinck 2012 Nick Frydas 2013 Vint Cerf 2014 Ian Jandrell 2015 Saurabh Sinha 2016 Tshilidzi Marwala 2017 Fulufhelo Nelwamondo 2018 Ian Craig 2019 Robert Metcalfe 2020 Roger Price The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Where does electric power originate? A. coal mines B. magnets C. the Sun D. power plants Answer:
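Document 0 above traces power generation to Faraday's 1831 discovery of electromagnetic induction, the principle behind the generators in power plants. A minimal Python sketch of the underlying relation for a coil rotating in a uniform magnetic field; all parameter values are hypothetical, chosen only to give round numbers:

```python
# Sketch of Faraday's law: a changing magnetic flux through a coil induces
# an EMF, emf = -N * dPhi/dt. For a coil rotating at angular frequency
# omega in a uniform field, Phi(t) = B * A * cos(omega * t).
# All numbers below are illustrative, not taken from any real machine.

import math

N_TURNS = 100    # turns in the coil (hypothetical)
B_FIELD = 0.5    # peak magnetic field, tesla (hypothetical)
AREA = 0.01      # coil area, m^2 (hypothetical)
FREQ_HZ = 50.0   # rotation frequency, matching a 50 Hz grid

def emf(t: float) -> float:
    """Instantaneous EMF of the rotating coil at time t (seconds)."""
    omega = 2 * math.pi * FREQ_HZ
    # emf = -N dPhi/dt = N * B * A * omega * sin(omega t)
    return N_TURNS * B_FIELD * AREA * omega * math.sin(omega * t)

peak = N_TURNS * B_FIELD * AREA * 2 * math.pi * FREQ_HZ
print(f"peak EMF: {peak:.0f} V")  # ~157 V for these illustrative values
```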
scienceQA-10664
multiple_choice
Select the mammal.
[ "California toad", "gray crowned crane", "giraffe", "western rattlesnake" ]
C
A gray crowned crane is a bird. It has feathers, two wings, and a beak. Cranes wade in shallow water to look for food. Cranes eat insects, worms, and plants. A California toad is an amphibian. It has moist skin and begins its life in water. Toads do not have teeth! They swallow their food whole. A giraffe is a mammal. It has hair and feeds its young milk. Giraffes eat mostly leaves that are too high up for other animals to reach. A western rattlesnake is a reptile. It has scaly, waterproof skin. Rattlesnakes have fangs they can use to inject venom into their prey.
Relavent Documents: Document 0::: In zoology, mammalogy is the study of mammals – a class of vertebrates with characteristics such as homeothermic metabolism, fur, four-chambered hearts, and complex nervous systems. Mammalogy has also been known as "mastology," "theriology," and "therology." The recorded number of mammal species on Earth is constantly growing and currently stands at 6,495, including recently extinct species. There are 5,416 living mammals identified on Earth and roughly 1,251 have been newly discovered since 2006. The major branches of mammalogy include natural history, taxonomy and systematics, anatomy and physiology, ethology, ecology, and management and control. The approximate salary of a mammalogist varies from $20,000 to $60,000 a year, depending on their experience. Mammalogists are typically involved in activities such as conducting research, managing personnel, and writing proposals. Mammalogy branches off into other taxonomically-oriented disciplines such as primatology (the study of primates) and cetology (the study of cetaceans). Like other specialisms, mammalogy is a part of zoology, which is in turn a part of biology, the study of all living things. Research purposes Mammalogists have stated that there are multiple reasons for the study and observation of mammals. Knowing how mammals contribute to and thrive in their ecosystems gives insight into the ecology behind it. Mammals are often used in industry and agriculture, and kept as pets. Studying mammals' habitats and sources of energy has aided in their survival. The domestication of some small mammals has also helped discover several different diseases, viruses, and cures. Mammalogist A mammalogist studies and observes mammals. In studying mammals, they can observe their habitats, contributions to the ecosystem, interactions, and anatomy and physiology. A mammalogist can do a broad variety of things within the realm of mammals. A mammalogist on average can make roughly $58,000 a year. This dep Document 1::: Vertebrate zoology is the biological discipline that consists of the study of vertebrate animals, i.e., animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. Many natural history museums have departments named Vertebrate Zoology. In some cases whole museums bear this name, e.g. the Museum of Vertebrate Zoology at the University of California, Berkeley. Subdivisions This subdivision of zoology has many further subdivisions, including: Ichthyology - the study of fishes. Mammalogy - the study of mammals. Chiropterology - the study of bats. Primatology - the study of primates. Ornithology - the study of birds. Herpetology - the study of reptiles. Batrachology - the study of amphibians. These divisions are sometimes further divided into more specific specialties.
Document 2::: Mammals Alces alces (Linnaeus, 1758) — Eurasian elk, moose Axis axis (Erxleben, 1777) — chital, axis deer Bison bison (Linnaeus, 1758) — American bison, buffalo Capreolus capreolus (Linnaeus, 1758) — European roe deer, roe deer Caracal caracal (Schreber, 1776) — caracal Chinchilla chinchilla (Lichtenstein, 1829) — short-tailed chinchilla Chiropotes chiropotes (Humboldt, 1811) — red-backed bearded saki Cricetus cricetus (Linnaeus, 1758) — common hamster, European hamster Crocuta crocuta (Erxleben, 1777) — spotted hyena Dama dama (Linnaeus, 1758) — European fallow deer Feroculus feroculus (Kelaart, 1850) — Kelaart's long-clawed shrew Gazella gazella (Pallas, 1766) — mountain gazelle Genetta genetta (Linnaeus, 1758) — common genet Gerbillus gerbillus (Olivier, 1801) — lesser Egyptian gerbil Giraffa giraffa (von Schreber, 1784) — southern giraffe Glis glis (Linnaeus, 1766) — European edible dormouse, European fat dormouse Gorilla gorilla (Savage, 1847) — western gorilla Gulo gulo (Linnaeus, 1758) — wolverine Hoolock hoolock (Harlan, 1834) — western hoolock gibbon Hyaena hyaena (Linnaeus, 1758) — striped hyena Indri indri (Gmelin, 1788) — indri Jaculus jaculus (Linnaeus, 1758) — lesser Egyptian jerboa Lagurus lagurus (Pallas, 1773) — steppe vole, steppe lemming Lemmus lemmus (Linnaeus, 1758) — Norway lemming Lutra lutra (Linnaeus, 1758) — European otter Lynx lynx (Linnaeus, 1758) — Eurasian lynx Macrophyllum macrophyllum (Schinz, 1821) — long-legged bat Marmota marmota (Linnaeus, 1758) — Alpine marmot Martes martes (Linnaeus, 1758) — European pine marten, pine marten Meles meles (Linnaeus, 1758) — European badg Document 3::: Wildlife endocrinology is a branch of endocrinology which deals with the study of the endocrine system in vertebrates as well as invertebrates. It deals with hormone analysis, which helps in understanding basic physiological functions such as metabolic activity, reproduction, and the health and well-being of the organism. Hormones can be measured via multiple biological matrices such as blood, urine, faeces, hair and saliva, the choice of which depends upon the type of information required, ease of sample collection, assays available to analyse the sample and species difference in hormone metabolism and excretion. Non-invasive samples are preferred for free-ranging animals, whereas both invasive and non-invasive samples are used to study captive animals. Background Wildlife endocrinology can help understand the mechanisms by which organisms cope with a changing environment and therefore plays an important role in wildlife conservation. Field endocrine techniques have advanced rapidly in recent years and can provide considerable information on the growth, stress, and reproductive status of individual animals, thereby giving insight into the current and future responses of populations to environmental change. Environmental stressors and reproductive status can be detected non-lethally by measuring various endocrine-related endpoints, such as steroids in plasma, living and nonliving tissue, urine, and feces. Information on the environmental or endocrine requirements of individual species for normal growth, development, and reproduction will provide critical information for species and habitat conservation. For some taxa, basic endocrinological data are missing, and progress in conservation endocrinology will require approaches that are both "basic" and "applied" and integrate laboratory and field approaches.
Sampling methods in wildlife endocrinology Sampling always depends upon the feasibility of the sampling protocol. If one is assessing the health of humans or Document 4::: In zoology, megafauna (from Greek μέγας megas "large" and Neo-Latin fauna "animal life") are large animals. The most common thresholds to be a megafauna are weighing over about 45 kg (i.e., having a mass comparable to or larger than a human) or weighing over a tonne (i.e., having a mass comparable to or larger than an ox). The first of these includes many species not popularly thought of as overly large, often the only large animals left in a given range or area, such as white-tailed deer, Thomson's gazelle, and red kangaroo. In practice, the most common usage encountered in academic and popular writing describes land mammals roughly larger than a human that are not (solely) domesticated. The term is especially associated with the Pleistocene megafauna – the land animals that are considered archetypical of the last ice age, such as mammoths, the majority of which in northern Eurasia, Australia-New Guinea and the Americas became extinct within the last forty thousand years. Among living animals, the term megafauna is most commonly used for the largest extant terrestrial mammals, which includes (but is not limited to) elephants, giraffes, hippopotamuses, rhinoceroses, and large bovines. Of these five categories of large herbivores, only bovines are presently found outside of Africa and southern Asia, but all the others were formerly more wide-ranging, with their ranges and populations continually shrinking and decreasing over time. Wild equines are another example of megafauna, but their current ranges are largely restricted to the Old World, specifically Africa and Asia. Megafaunal species may be categorized according to their dietary type: megaherbivores (e.g., elephants), megacarnivores (e.g., lions), and, more rarely, megaomnivores (e.g., bears). The megafauna is also categorized by the class of animals that it belongs to, which are mammals, birds, reptiles, amphibians, fish, and invertebrates. Other common uses are for giant aquatic species, especially whales, as The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Select the mammal. A. California toad B. gray crowned crane C. giraffe D. western rattlesnake Answer:
scienceQA-5792
multiple_choice
Select the plant.
[ "Penguins walk and swim.", "Humans eat plants and animals.", "Chili peppers have green leaves.", "Manta rays swim underwater." ]
C
A chili pepper is a plant. It has many green leaves. Chili peppers give food a spicy flavor. A penguin is an animal. It walks and swims. A penguin is a bird that lives near water. Penguins cannot fly! They use their wings to swim. A manta ray is an animal. It swims underwater. Manta rays are fish. They have triangle-shaped fins. A human is an animal! Humans eat plants and animals. Humans are primates. Monkeys and apes are also primates.
Relavent Documents: Document 0::: What a Plant Knows is a popular science book by Daniel Chamovitz, originally published in 2012, discussing the sensory system of plants. A revised edition was published in 2017. Release details / Editions / Publication Hardcover edition, 2012 Paperback version, 2013 Revised edition, 2017 What a Plant Knows has been translated and published in a number of languages. Document 1::: Plant ontology (PO) is a collection of ontologies developed by the Plant Ontology Consortium. These ontologies describe anatomical structures and growth and developmental stages across Viridiplantae. The PO is intended for multiple applications, including genetics, genomics, phenomics, and development, taxonomy and systematics, semantic applications and education. Project Members Oregon State University New York Botanical Garden L. H. Bailey Hortorium at Cornell University Ensembl SoyBase SSWAP SGN Gramene The Arabidopsis Information Resource (TAIR) MaizeGDB University of Missouri at St. Louis Missouri Botanical Garden See also Generic Model Organism Database Open Biomedical Ontologies OBO Foundry Document 2::: The Desert Garden Conservatory is a large botanical greenhouse and part of the Huntington Library, Art Collections and Botanical Gardens, in San Marino, California. It was constructed in 1985. The Desert Garden Conservatory is adjacent to the Huntington Desert Garden itself. The garden houses one of the most important collections of cacti and other succulent plants in the world, including a large number of rare and endangered species. The Desert Garden Conservatory serves The Huntington and public communities as a conservation facility, research resource and genetic diversity preserve. John N. Trager is the Desert Collection curator. There are an estimated 10,000 succulents worldwide, about 1,500 of them classified as cacti. The Huntington Desert Garden Conservatory now contains more than 2,200 accessions, representing more than 43 plant families, 1,261 different species and subspecies, and 246 genera. The plant collection contains examples from the world's major desert regions, including the southern United States, Argentina, Bolivia, Chile, Brazil, Canary Islands, Madagascar, Malawi, Mexico and South Africa. The Desert Collection plays a critical role as a repository of biodiversity, in addition to serving as an outreach and education center. Propagation program to save rare and endangered plants Some studies estimate that as many as two-thirds of the world's flora and fauna may become extinct during the course of the 21st century, the result of global warming and encroaching development. Scientists alarmed by these prospects are working diligently to propagate plants outside their natural habitats, in protected areas. Ex-situ cultivation, as this practice is known, can serve as a stopgap for plants that will otherwise be lost to the world as their habitats disappear. To this end, The Huntington has a program to protect and plant propagate endangered plant species, designated International Succulent Introductions (ISI). The aim of the ISI program is to pr Document 3::: The Department of Plant Sciences is a department of the University of Cambridge that conducts research and teaching in plant sciences. It was established in 1904, although the university has had a professor of botany since 1724. 
Research The department pursues three strategic targets of research: Global food security Synthetic biology and biotechnology Climate science and ecosystem conservation See also the Sainsbury Laboratory Cambridge University Notable academic staff Sir David Baulcombe, FRS, Regius Professor of Botany Beverley Glover, Professor of Plant systematics and evolution, director of the Cambridge University Botanic Garden Howard Griffiths, Professor of Plant Ecology Julian Hibberd, Professor of Photosynthesis Alison Smith, Professor of Plant Biochemistry and Head of Department The department also has 66 members of faculty and postdoctoral researchers, 100 graduate students, 19 Biotechnology and Biological Sciences Research Council (BBSRC) Doctoral Training Program (DTP) PhD students, 20 part II Tripos undergraduate students and 44 support staff. History The University of Cambridge has a long and distinguished history in Botany including work by John Ray and Stephen Hales in the 17th century and 18th century, Charles Darwin’s mentor John Stevens Henslow in the 19th century, and Frederick Blackman, Arthur Tansley and Harry Godwin in the 20th century. Emeritus and alumni More recently, the department has been home to: John C. Gray, Emeritus Professor of Plant Molecular Biology since 2011 Thomas ap Rees, Professor of Botany F. Ian Woodward, Lecturer and Fellow of Trinity Hall, Cambridge before being appointed Professor of Plant Ecology at the University of Sheffield
This process cuts down on the shipping, and thus the wear and The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Select the plant. A. Penguins walk and swim. B. Humans eat plants and animals. C. Chili peppers have green leaves. D. Manta rays swim underwater. Answer:
sciq-3245
multiple_choice
Types of compounds include covalent and which other compounds?
[ "solvent", "reactant", "soluble", "ionic" ]
D
Relavent Documents: Document 0::: This is an index of lists of molecules (i.e. by year, number of atoms, etc.). Millions of molecules have existed in the universe since before the formation of Earth. Three of them, carbon dioxide, water and oxygen were necessary for the growth of life. Although humanity had always been surrounded by these substances, it has not always known what they were composed of. By century The following is an index of list of molecules organized by time of discovery of their molecular formula or their specific molecule in case of isomers: List of compounds By number of carbon atoms in the molecule List of compounds with carbon number 1 List of compounds with carbon number 2 List of compounds with carbon number 3 List of compounds with carbon number 4 List of compounds with carbon number 5 List of compounds with carbon number 6 List of compounds with carbon number 7 List of compounds with carbon number 8 List of compounds with carbon number 9 List of compounds with carbon number 10 List of compounds with carbon number 11 List of compounds with carbon number 12 List of compounds with carbon number 13 List of compounds with carbon number 14 List of compounds with carbon number 15 List of compounds with carbon number 16 List of compounds with carbon number 17 List of compounds with carbon number 18 List of compounds with carbon number 19 List of compounds with carbon number 20 List of compounds with carbon number 21 List of compounds with carbon number 22 List of compounds with carbon number 23 List of compounds with carbon number 24 List of compounds with carbon numbers 25-29 List of compounds with carbon numbers 30-39 List of compounds with carbon numbers 40-49 List of compounds with carbon numbers 50+ Other lists List of interstellar and circumstellar molecules List of gases List of molecules with unusual names See also Molecule Empirical formula Chemical formula Chemical structure Chemical compound Chemical bond Coordination complex L Document 1::: Steudel R 2020, Chemistry of the Non-metals: Syntheses - Structures - Bonding - Applications, in collaboration with D Scheschkewitz, Berlin, Walter de Gruyter, . ▲ An updated translation of the 5th German edition of 2013, incorporating the literature up to Spring 2019. Twenty-three nonmetals, including B, Si, Ge, As, Se, Te, and At but not Sb (nor Po). The nonmetals are identified on the basis of their electrical conductivity at absolute zero putatively being close to zero, rather than finite as in the case of metals. That does not work for As however, which has the electronic structure of a semimetal (like Sb). Halka M & Nordstrom B 2010, "Nonmetals", Facts on File, New York, A reading level 9+ book covering H, C, N, O, P, S, Se. Complementary books by the same authors examine (a) the post-transition metals (Al, Ga, In, Tl, Sn, Pb and Bi) and metalloids (B, Si, Ge, As, Sb, Te and Po); and (b) the halogens and noble gases. Woolins JD 1988, Non-Metal Rings, Cages and Clusters, John Wiley & Sons, Chichester, . A more advanced text that covers H; B; C, Si, Ge; N, P, As, Sb; O, S, Se and Te. Steudel R 1977, Chemistry of the Non-metals: With an Introduction to Atomic Structure and Chemical Bonding, English edition by FC Nachod & JJ Zuckerman, Berlin, Walter de Gruyter, . ▲ Twenty-four nonmetals, including B, Si, Ge, As, Se, Te, Po and At. Powell P & Timms PL 1974, The Chemistry of the Non-metals, Chapman & Hall, London, . ▲ Twenty-two nonmetals including B, Si, Ge, As and Te. 
Tin and antimony are shown as being intermediate between metals and nonmetals; they are later shown as either metals or nonmetals. Astatine is counted as a metal. Document 2::: Molecular binding is an attractive interaction between two molecules that results in a stable association in which the molecules are in close proximity to each other. It is formed when atoms or molecules bind together by sharing of electrons. It often, but not always, involves some chemical bonding. In some cases, the associations can be quite strong—for example, the protein streptavidin and the vitamin biotin have a dissociation constant (reflecting the ratio between bound and free biotin) on the order of 10−14—and so the reactions are effectively irreversible. The result of molecular binding is sometimes the formation of a molecular complex in which the attractive forces holding the components together are generally non-covalent, and thus are normally energetically weaker than covalent bonds. Molecular binding occurs in biological complexes (e.g., between pairs or sets of proteins, or between a protein and a small molecule ligand it binds) and also in abiologic chemical systems, e.g. as in cases of coordination polymers and coordination networks such as metal-organic frameworks. Types Molecular binding can be classified into the following types: Non-covalent – no chemical bonds are formed between the two interacting molecules hence the association is fully reversible Reversible covalent – a chemical bond is formed, however the free energy difference separating the noncovalently-bonded reactants from bonded product is near equilibrium and the activation barrier is relatively low such that the reverse reaction which cleaves the chemical bond easily occurs Irreversible covalent – a chemical bond is formed in which the product is thermodynamically much more stable than the reactants such that the reverse reaction does not take place. Bound molecules are sometimes called a "molecular complex"—the term generally refers to non-covalent associations. Non-covalent interactions can effectively become irreversible; for example, tight binding inhibitors of enzymes Document 3::: In chemical nomenclature, the IUPAC nomenclature of organic chemistry is a method of naming organic chemical compounds as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in the Nomenclature of Organic Chemistry (informally called the Blue Book). Ideally, every possible organic compound should have a name from which an unambiguous structural formula can be created. There is also an IUPAC nomenclature of inorganic chemistry. To avoid long and tedious names in normal communication, the official IUPAC naming recommendations are not always followed in practice, except when it is necessary to give an unambiguous and absolute definition to a compound. IUPAC names can sometimes be simpler than older names, as with ethanol, instead of ethyl alcohol. For relatively simple molecules they can be more easily understood than non-systematic names, which must be learnt or looked over. However, the common or trivial name is often substantially shorter and clearer, and so preferred. These non-systematic names are often derived from an original source of the compound. Also, very long names may be less clear than structural formulas. Basic principles In chemistry, a number of prefixes, suffixes and infixes are used to describe the type and position of the functional groups in the compound. 
The steps for naming an organic compound are: Identification of the parent hydride parent hydrocarbon chain. This chain must obey the following rules, in order of precedence: It should have the maximum number of substituents of the suffix functional group. By suffix, it is meant that the parent functional group should have a suffix, unlike halogen substituents. If more than one functional group is present, the one with highest group precedence should be used. It should have the maximum number of multiple bonds. It should have the maximum length. It should have the maximum number of substituents or branches cited as prefixes It should have the ma Document 4::: This is a list of homological algebra topics, by Wikipedia page. Basic techniques Cokernel Exact sequence Chain complex Differential module Five lemma Short five lemma Snake lemma Nine lemma Extension (algebra) Central extension Splitting lemma Projective module Injective module Projective resolution Injective resolution Koszul complex Exact functor Derived functor Ext functor Tor functor Filtration (abstract algebra) Spectral sequence Abelian category Triangulated category Derived category Applications Group cohomology Galois cohomology Lie algebra cohomology Sheaf cohomology Whitehead problem Homological conjectures in commutative algebra Homological algebra The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Types of compounds include covalent and which other compounds? A. solvent B. reactant C. soluble D. ionic Answer:
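The parent-chain selection rules listed in the IUPAC nomenclature excerpt above are applied in strict order of precedence, which maps naturally onto lexicographic comparison. A toy Python sketch of that ordering follows; CandidateChain and its field values are hypothetical simplifications invented for illustration, not part of any real cheminformatics library:

from dataclasses import dataclass

@dataclass
class CandidateChain:
    name: str
    suffix_group_count: int   # substituents of the principal (suffix) characteristic group
    multiple_bond_count: int  # double + triple bonds in the chain
    length: int               # number of skeletal atoms
    prefix_substituents: int  # branches cited as prefixes

def pick_parent_chain(candidates):
    # Python compares tuples element by element, so this key encodes
    # "apply rule 1; on a tie, apply rule 2; ..." exactly in the order
    # the excerpt lists the rules.
    return max(
        candidates,
        key=lambda c: (
            c.suffix_group_count,
            c.multiple_bond_count,
            c.length,
            c.prefix_substituents,
        ),
    )

# Example: a shorter chain carrying the suffix group (-OH) and a C=C bond
# outranks a longer all-carbon chain.
chains = [
    CandidateChain("C6 chain, no OH", 0, 0, 6, 1),
    CandidateChain("C4 chain with OH and C=C", 1, 1, 4, 0),
]
print(pick_parent_chain(chains).name)  # -> "C4 chain with OH and C=C"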
sciq-11672
multiple_choice
What do you call a growing mass of cancerous cells that pushes into nearby tissues?
[ "bacteria", "calcium", "pallet", "tumor" ]
D
Relavent Documents: Document 0::: Metastasis is a pathogenic agent's spread from an initial or primary site to a different or secondary site within the host's body; the term is typically used when referring to metastasis by a cancerous tumor. The newly pathological sites, then, are metastases (mets). It is generally distinguished from cancer invasion, which is the direct extension and penetration by cancer cells into neighboring tissues. Cancer occurs after cells are genetically altered to proliferate rapidly and indefinitely. This uncontrolled proliferation by mitosis produces a primary heterogeneic tumour. The cells which constitute the tumor eventually undergo metaplasia, followed by dysplasia then anaplasia, resulting in a malignant phenotype. This malignancy allows for invasion into the circulation, followed by invasion to a second site for tumorigenesis. Some cancer cells known as circulating tumor cells acquire the ability to penetrate the walls of lymphatic or blood vessels, after which they are able to circulate through the bloodstream to other sites and tissues in the body. This process is known (respectively) as lymphatic or hematogenous spread. After the tumor cells come to rest at another site, they re-penetrate the vessel or walls and continue to multiply, eventually forming another clinically detectable tumor. This new tumor is known as a metastatic (or secondary) tumor. Metastasis is one of the hallmarks of cancer, distinguishing it from benign tumors. Most cancers can metastasize, although in varying degrees. Basal cell carcinoma for example rarely metastasizes. When tumor cells metastasize, the new tumor is called a secondary or metastatic tumor, and its cells are similar to those in the original or primary tumor. This means that if breast cancer metastasizes to the lungs, the secondary tumor is made up of abnormal breast cells, not of abnormal lung cells. The tumor in the lung is then called metastatic breast cancer, not lung cancer. Metastasis is a key element in cancer sta Document 1::: Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence. Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism. Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry. 
See also Cell (biology) Cell biology Biomolecule Organelle Tissue (biology) External links https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm Document 2::: Invasion is the process by which cancer cells directly extend and penetrate into neighboring tissues in cancer. It is generally distinguished from metastasis, which is the spread of cancer cells through the circulatory system or the lymphatic system to more distant locations. Yet, lymphovascular invasion is generally the first step of metastasis. There exist two main patterns of cancer cell invasion by cell migration: collective cell migration and individual cell migration, by which tumor cells overcome barriers of the extracellular matrix and spread into surrounding tissues. Each pattern of cell migration exhibits distinct morphological features and is governed by specific biochemical and molecular genetic mechanisms. Two types of migrating tumor cells, mesenchymal (fibroblast-like) and amoeboid, can be observed in various patterns of cancer cell invasion. This article describes the key differences between the variants of cancer cell migration, the role of epithelial-mesenchymal and related transitions, as well as the significance of different tumor factors and stromal molecules in tumor invasion. Morphological manifestations of the invasion patterns are characterized by a variety of tissue (tumor) structures. Invasive growth and metastasis The results of numerous experimental and clinical studies of malignant neoplasms have indicated that invasive growth and metastasis are the main manifestations of tumor progression, which constitute two closely related processes. A malignant tumor is defined by its capacity to initiate a biological phenomenon known as the metastatic cascade, a complex multi-stage process in which cell invasion precedes further cancer progression and the formation of metastases in distant organs and tissues. Massive metastatic lesions lead to the development of organ failure. The range between the “end” points of a complex invasive metastatic process–an invasion of the primary tumor into surrounding tissues and the formation of metastatic fo Document 3::: Hyperplasia (from ancient Greek ὑπέρ huper 'over' + πλάσις plasis 'formation'), or hypergenesis, is an enlargement of an organ or tissue caused by an increase in the amount of organic tissue that results from cell proliferation. It may lead to the gross enlargement of an organ, and the term is sometimes confused with benign neoplasia or benign tumor. Hyperplasia is a common preneoplastic response to stimulus. Microscopically, cells resemble normal cells but are increased in numbers. Sometimes cells may also be increased in size (hypertrophy). Hyperplasia is different from hypertrophy in that the adaptive cell change in hypertrophy is an increase in the size of cells, whereas hyperplasia involves an increase in the number of cells. Causes Hyperplasia may be due to any number of causes, including proliferation of basal layer of epidermis to compensate skin loss, chronic inflammatory response, hormonal dysfunctions, or compensation for damage or disease elsewhere. Hyperplasia may be harmless and occur on a particular tissue. An example of a normal hyperplastic response would be the growth and multiplication of milk-secreting glandular cells in the breast as a response to pregnancy, thus preparing for future breast feeding. 
Perhaps the most interesting and potent effect insulin-like growth factor 1 (IGF) has on the human body is its ability to cause hyperplasia, which is an actual splitting of cells. By contrast, hypertrophy is what occurs, for example, to skeletal muscle cells during weight training and is simply an increase in the size of the cells. With IGF use, one is able to cause hyperplasia which actually increases the number of muscle cells present in the tissue. Weight training enables these new cells to mature in size and strength. It is theorized that hyperplasia may also be induced through specific power output training for athletic performance, thus increasing the number of muscle fibers instead of increasing the size of a single fiber. Mechanism Hype Document 4::: A neoplasm () is a type of abnormal and excessive growth of tissue. The process that occurs to form or produce a neoplasm is called neoplasia. The growth of a neoplasm is uncoordinated with that of the normal surrounding tissue, and persists in growing abnormally, even if the original trigger is removed. This abnormal growth usually forms a mass, when it may be called a tumour or tumor.ICD-10 classifies neoplasms into four main groups: benign neoplasms, in situ neoplasms, malignant neoplasms, and neoplasms of uncertain or unknown behavior. Malignant neoplasms are also simply known as cancers and are the focus of oncology. Prior to the abnormal growth of tissue, as neoplasia, cells often undergo an abnormal pattern of growth, such as metaplasia or dysplasia. However, metaplasia or dysplasia does not always progress to neoplasia and can occur in other conditions as well. The word neoplasm is from Ancient Greek 'new' and 'formation, creation'. Types A neoplasm can be benign, potentially malignant, or malignant (cancer). Benign tumors include uterine fibroids, osteophytes and melanocytic nevi (skin moles). They are circumscribed and localized and do not transform into cancer. Potentially-malignant neoplasms include carcinoma in situ. They are localised, do not invade and destroy but in time, may transform into a cancer. Malignant neoplasms are commonly called cancer. They invade and destroy the surrounding tissue, may form metastases and, if untreated or unresponsive to treatment, will generally prove fatal. Secondary neoplasm refers to any of a class of cancerous tumor that is either a metastatic offshoot of a primary tumor, or an apparently unrelated tumor that increases in frequency following certain cancer treatments such as chemotherapy or radiotherapy. Rarely there can be a metastatic neoplasm with no known site of the primary cancer and this is classed as a cancer of unknown primary origin. Clonality Neoplastic tumors are often heterogeneous and con The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do you call a growing mass of cancerous cells that pushes into nearby tissues? A. bacteria B. calcium C. pallet D. tumor Answer:
sciq-8035
multiple_choice
What two forces tend to keep an animal stationary and thus oppose locomotion?
[ "thickness and gravity", "stength and gravity", "friction and gravity", "workload and gravity" ]
C
Relavent Documents: Document 0::: Terrestrial locomotion has evolved as animals adapted from aquatic to terrestrial environments. Locomotion on land raises different problems than that in water, with reduced friction being replaced by the increased effects of gravity. As viewed from evolutionary taxonomy, there are three basic forms of animal locomotion in the terrestrial environment: legged – moving by using appendages limbless locomotion – moving without legs, primarily using the body itself as a propulsive structure. rolling – rotating the body over the substrate Some terrains and terrestrial surfaces permit or demand alternative locomotive styles. A sliding component to locomotion becomes possible on slippery surfaces (such as ice and snow), where location is aided by potential energy, or on loose surfaces (such as sand or scree), where friction is low but purchase (traction) is difficult. Humans, especially, have adapted to sliding over terrestrial snowpack and terrestrial ice by means of ice skates, snow skis, and toboggans. Aquatic animals adapted to polar climates, such as ice seals and penguins also take advantage of the slipperiness of ice and snow as part of their locomotion repertoire. Beavers are known to take advantage of a mud slick known as a "beaver slide" over a short distance when passing from land into a lake or pond. Human locomotion in mud is improved through the use of cleats. Some snakes use an unusual method of movement known as sidewinding on sand or loose soil. Animals caught in terrestrial mudflows are subject to involuntary locomotion; this may be beneficial to the distribution of species with limited locomotive range under their own power. There is less opportunity for passive locomotion on land than by sea or air, though parasitism (hitchhiking) is available toward this end, as in all other habitats. Many species of monkeys and apes use a form of arboreal locomotion known as brachiation, with forelimbs as the prime mover. Some elements of the gymnastic sport of une Document 1::: Animal locomotion, in ethology, is any of a variety of methods that animals use to move from one place to another. Some modes of locomotion are (initially) self-propelled, e.g., running, swimming, jumping, flying, hopping, soaring and gliding. There are also many animal species that depend on their environment for transportation, a type of mobility called passive locomotion, e.g., sailing (some jellyfish), kiting (spiders), rolling (some beetles and spiders) or riding other animals (phoresis). Animals move for a variety of reasons, such as to find food, a mate, a suitable microhabitat, or to escape predators. For many animals, the ability to move is essential for survival and, as a result, natural selection has shaped the locomotion methods and mechanisms used by moving organisms. For example, migratory animals that travel vast distances (such as the Arctic tern) typically have a locomotion mechanism that costs very little energy per unit distance, whereas non-migratory animals that must frequently move quickly to escape predators are likely to have energetically costly, but very fast, locomotion. The anatomical structures that animals use for movement, including cilia, legs, wings, arms, fins, or tails are sometimes referred to as locomotory organs or locomotory structures. Etymology The term "locomotion" is formed in English from Latin loco "from a place" (ablative of locus "place") + motio "motion, a moving". 
Locomotion in different media Animals move through, or on, five types of environment: aquatic (in or on water), terrestrial (on ground or other surface, including arboreal, or tree-dwelling), fossorial (underground), and aerial (in the air). Many animals—for example semi-aquatic animals, and diving birds—regularly move through more than one type of medium. In some cases, the surface they move on facilitates their method of locomotion. Aquatic Swimming In water, staying afloat is possible using buoyancy. If an animal's body is less dense than water, i Document 2::: The study of animal locomotion is a branch of biology that investigates and quantifies how animals move. Kinematics Kinematics is the study of how objects move, whether they are mechanical or living. In animal locomotion, kinematics is used to describe the motion of the body and limbs of an animal. The goal is ultimately to understand how the movement of individual limbs relates to the overall movement of an animal within its environment. Below highlights the key kinematic parameters used to quantify body and limb movement for different modes of animal locomotion. Quantifying locomotion Walking Legged locomotion is a dominant form of terrestrial locomotion, the movement on land. The motion of limbs is quantified by intralimb and interlimb kinematic parameters. Intralimb kinematic parameters capture movement aspects of an individual limb, whereas, interlimb kinematic parameters characterize the coordination across limbs. Interlimb kinematic parameters are also referred to as gait parameters. The following are key intralimb and interlimb kinematic parameters of walking: Characterizing swing and stance transitions The calculation of the above intra- and interlimb kinematics relies on the classification of when the legs of an animal touches and leaves the ground. Stance onset is defined as when a leg first contacts the ground, whereas, swing onset occurs at the time when the leg leaves the ground. Typically, the transition between swing and stance, and vice versa, of a leg is determined by first recording the leg's motion with high-speed videography (see the description of high-speed videography below for more details). From the video recordings of the leg, a marker on the leg (usually placed at the distal tip of the leg) is then tracked manually or in an automated fashion to obtain the position signal of the leg's movement. The position signal associated with each leg is then normalized to that associated with a marker on the body; transforming the leg position Document 3::: Belt friction is a term describing the friction forces between a belt and a surface, such as a belt wrapped around a bollard. When a force applies a tension to one end of a belt or rope wrapped around a curved surface, the frictional force between the two surfaces increases with the amount of wrap about the curved surface, and only part of that force (or resultant belt tension) is transmitted to the other end of the belt or rope. Belt friction can be modeled by the Belt friction equation. In practice, the theoretical tension acting on the belt or rope calculated by the belt friction equation can be compared to the maximum tension the belt can support. This helps a designer of such a system determine how many times the belt or rope must be wrapped around a curved surface to prevent it from slipping. Mountain climbers and sailing crews demonstrate a working knowledge of belt friction when accomplishing tasks with ropes, pulleys, bollards and capstans. 
Equation The equation used to model belt friction is, assuming the belt has no mass and its material is a fixed composition: T₁ = T₂ e^(μs θ), where T₁ is the tension of the pulling side, T₂ is the tension of the resisting side, μs is the static friction coefficient, which has no units, and θ is the angle, in radians, formed by the first and last spots the belt touches the pulley, with the vertex at the center of the pulley. The tension on the pulling side of the belt and pulley can increase exponentially as the magnitude of the belt angle increases (e.g. when the belt is wrapped around the pulley segment numerous times). Generalization for a rope lying on an arbitrary orthotropic surface If a rope is lying in equilibrium under tangential forces on a rough orthotropic surface, then the following three conditions (all of them) are satisfied: 1. No separation – the normal reaction N is positive for all points of the rope curve: N = T k > 0, where k is a normal curvature of the rope curve. 2. The dragging coefficient of friction μ and the angle α satisfy Document 4::: Surface force denoted fs is the force that acts across an internal or external surface element in a material body. Normal forces and shear forces between objects are types of surface force. All cohesive forces and contact forces between objects are considered as surface forces. Surface force can be decomposed into two perpendicular components: normal forces and shear forces. A normal force acts normally over an area and a shear force acts tangentially over an area. Equations for surface force Surface force due to pressure: fs = p·A, where fs = force, p = pressure, and A = area on which a uniform pressure acts. Examples Pressure related surface force Since pressure is force per unit area (p = fs/A), and area is a length squared, a uniform pressure p acting over an area A will produce a surface force fs = p·A. See also Body force Contact force The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What two forces tend to keep an animal stationary and thus oppose locomotion? A. thickness and gravity B. strength and gravity C. friction and gravity D. workload and gravity Answer:
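A minimal Python sketch of the belt friction (capstan) equation reconstructed in the excerpt above; the 100 N holding force, μs = 0.3, and the wrap counts are made-up illustrative values:

import math

def max_pull_tension(hold_n: float, mu_s: float, theta_rad: float) -> float:
    # Belt friction / capstan equation at the point of slipping: T1 = T2 * exp(mu_s * theta)
    return hold_n * math.exp(mu_s * theta_rad)

# Holding 100 N on a bollard with mu_s = 0.3:
for turns in (1, 2, 3):
    theta = 2.0 * math.pi * turns
    print(f"{turns} turn(s): {max_pull_tension(100.0, 0.3, theta):.0f} N")
# Prints roughly 660 N, 4.3 kN, and 29 kN: the holdable load grows
# exponentially with wrap angle, which is why a few turns around a
# bollard let one person hold a heavy mooring line.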
sciq-1300
multiple_choice
Systems are in thermal equilibrium when they have the same of what measurement?
[ "oxygen", "mass", "temperature", "density" ]
C
Relavent Documents: Document 0::: Scale of temperature is a methodology of calibrating the physical quantity temperature in metrology. Empirical scales measure temperature in relation to convenient and stable parameters or reference points, such as the freezing and boiling point of water. Absolute temperature is based on thermodynamic principles: using the lowest possible temperature as the zero point, and selecting a convenient incremental unit. Celsius, Kelvin, and Fahrenheit are common temperature scales. Other scales used throughout history include Rankine, Rømer, Newton, Delisle, Réaumur, Gas mark, Leiden and Wedgwood. Definition The zeroth law of thermodynamics describes thermal equilibrium between thermodynamic systems in the form of an equivalence relation. Accordingly, all thermal systems may be divided into a quotient set, denoted as M. If the set M has the cardinality of c, then one can construct an injective function f : M → ℝ, by which every thermal system has a parameter associated with it such that when two thermal systems have the same value of that parameter, they are in thermal equilibrium. This parameter is the property of temperature. The specific way of assigning numerical values for temperature is establishing a scale of temperature. In practical terms, a temperature scale is usually based on a single physical property of a simple thermodynamic system, called a thermometer, that defines a scaling function for mapping the temperature to the measurable thermometric parameter. Such temperature scales that are purely based on measurement are called empirical temperature scales. The second law of thermodynamics provides a fundamental, natural definition of thermodynamic temperature starting with a null point of absolute zero. A scale for thermodynamic temperature is established similarly to the empirical temperature scales, however, needing only one additional fixing point. Empirical scales Empirical scales are based on the measurement of physical parameters that express the prope Document 1::: Thermodynamic equilibrium is an axiomatic concept of thermodynamics. It is an internal state of a single thermodynamic system, or a relation between several thermodynamic systems connected by more or less permeable or impermeable walls. In thermodynamic equilibrium, there are no net macroscopic flows of matter nor of energy within a system or between systems. In a system that is in its own state of internal thermodynamic equilibrium, no macroscopic change occurs. Systems in mutual thermodynamic equilibrium are simultaneously in mutual thermal, mechanical, chemical, and radiative equilibria. Systems can be in one kind of mutual equilibrium, while not in others. In thermodynamic equilibrium, all kinds of equilibrium hold at once and indefinitely, until disturbed by a thermodynamic operation. In a macroscopic equilibrium, perfectly or almost perfectly balanced microscopic exchanges occur; this is the physical explanation of the notion of macroscopic equilibrium. A thermodynamic system in a state of internal thermodynamic equilibrium has a spatially uniform temperature. Its intensive properties, other than temperature, may be driven to spatial inhomogeneity by an unchanging long-range force field imposed on it by its surroundings. In systems that are at a state of non-equilibrium there are, by contrast, net flows of matter or energy. If such changes can be triggered to occur in a system in which they are not already occurring, the system is said to be in a meta-stable equilibrium.
Though not a widely named "law," it is an axiom of thermodynamics that there exist states of thermodynamic equilibrium. The second law of thermodynamics states that when an isolated body of material starts from an equilibrium state, in which portions of it are held at different states by more or less permeable or impermeable partitions, and a thermodynamic operation removes or makes the partitions more permeable, then it spontaneously reaches its own new state of internal thermodynamic equ Document 2::: Thermodynamic temperature is a quantity defined in thermodynamics as distinct from kinetic theory or statistical mechanics. Historically, thermodynamic temperature was defined by Lord Kelvin in terms of a macroscopic relation between thermodynamic work and heat transfer as defined in thermodynamics, but the kelvin was redefined by international agreement in 2019 in terms of phenomena that are now understood as manifestations of the kinetic energy of free motion of microscopic particles such as atoms, molecules, and electrons. From the thermodynamic viewpoint, for historical reasons, because of how it is defined and measured, this microscopic kinetic definition is regarded as an "empirical" temperature. It was adopted because in practice it can generally be measured more precisely than can Kelvin's thermodynamic temperature. A thermodynamic temperature reading of zero is of particular importance for the third law of thermodynamics. By convention, it is reported on the Kelvin scale of temperature in which the unit of measurement is the kelvin (unit symbol: K). For comparison, a temperature of 295 K is equal to 21.85 °C and 71.33 °F. Overview Thermodynamic temperature, as distinct from SI temperature, is defined in terms of a macroscopic Carnot cycle. Thermodynamic temperature is of importance in thermodynamics because it is defined in purely thermodynamic terms. SI temperature is conceptually far different from thermodynamic temperature. Thermodynamic temperature was rigorously defined historically long before there was a fair knowledge of microscopic particles such as atoms, molecules, and electrons. The International System of Units (SI) specifies the international absolute scale for measuring temperature, and the unit of measure kelvin (unit symbol: K) for specific values along the scale. The kelvin is also used for denoting temperature intervals (a span or difference between two temperatures) as per the following example usage: "A 60/40 tin/lead solder is no Document 3::: In thermodynamics, heat is the thermal energy transferred between systems due to a temperature difference. In colloquial use, heat sometimes refers to thermal energy itself. Thermal energy is the kinetic energy of vibrating and colliding atoms in a substance. An example of formal vs. informal usage may be obtained from the right-hand photo, in which the metal bar is "conducting heat" from its hot end to its cold end, but if the metal bar is considered a thermodynamic system, then the energy flowing within the metal bar is called internal energy, not heat. The hot metal bar is also transferring heat to its surroundings, a correct statement for both the strict and loose meanings of heat. Another example of informal usage is the term heat content, used despite the fact that physics defines heat as energy transfer. More accurately, it is thermal energy that is contained in the system or body, as it is stored in the microscopic degrees of freedom of the modes of vibration. 
Heat is energy in transfer to or from a thermodynamic system, by a mechanism that involves the microscopic atomic modes of motion or the corresponding macroscopic properties. This descriptive characterization excludes the transfers of energy by thermodynamic work or mass transfer. Defined quantitatively, the heat involved in a process is the difference in internal energy between the final and initial states of a system, and subtracting the work done in the process. This is the formulation of the first law of thermodynamics. The measurement of energy transferred as heat is called calorimetry, performed by measuring its effect on the states of interacting bodies. For example, heat can be measured by the amount of ice melted, or by change in temperature of a body in the surroundings of the system. In the International System of Units (SI) the unit of measurement for heat, as a form of energy, is the joule (J). Notation and units As a form of energy, heat has the unit joule (J) in the International Sy Document 4::: Temperature is a physical quantity that expresses quantitatively the attribute of hotness or coldness. Temperature is measured with a thermometer. It reflects the kinetic energy of the vibrating and colliding atoms making up a substance. Thermometers are calibrated in various temperature scales that historically have relied on various reference points and thermometric substances for definition. The most common scales are the Celsius scale with the unit symbol °C (formerly called centigrade), the Fahrenheit scale (°F), and the Kelvin scale (K), the latter being used predominantly for scientific purposes. The kelvin is one of the seven base units in the International System of Units (SI). Absolute zero, i.e., zero kelvin or −273.15 °C, is the lowest point in the thermodynamic temperature scale. Experimentally, it can be approached very closely but not actually reached, as recognized in the third law of thermodynamics. It would be impossible to extract energy as heat from a body at that temperature. Temperature is important in all fields of natural science, including physics, chemistry, Earth science, astronomy, medicine, biology, ecology, material science, metallurgy, mechanical engineering and geography as well as most aspects of daily life. Effects Many physical processes are related to temperature; some of them are given below: the physical properties of materials including the phase (solid, liquid, gaseous or plasma), density, solubility, vapor pressure, electrical conductivity, hardness, wear resistance, thermal conductivity, corrosion resistance, strength the rate and extent to which chemical reactions occur the amount and properties of thermal radiation emitted from the surface of an object air temperature affects all living organisms the speed of sound, which in a gas is proportional to the square root of the absolute temperature Scales Temperature scales need two values for definition: the point chosen as zero degrees and the magnitudes of the incr The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Systems are in thermal equilibrium when they have the same of what measurement? A. oxygen B. mass C. temperature D. density Answer:
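The temperature excerpts above quote one concrete cross-scale data point — 295 K equals 21.85 °C and 71.33 °F. A short Python check of the standard Kelvin/Celsius/Fahrenheit conversions (the function names are my own):

def kelvin_to_celsius(k: float) -> float:
    return k - 273.15

def celsius_to_fahrenheit(c: float) -> float:
    return c * 9.0 / 5.0 + 32.0

t_k = 295.0
t_c = kelvin_to_celsius(t_k)      # 21.85 degC, matching the excerpt
t_f = celsius_to_fahrenheit(t_c)  # 71.33 degF, matching the excerpt
print(f"{t_k:.0f} K = {t_c:.2f} °C = {t_f:.2f} °F")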
sciq-9733
multiple_choice
How does uric acid react to water?
[ "explodes", "does not dissolve", "mixes", "does not form" ]
B
Relavent Documents: Document 0::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 1::: The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas. The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014. The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools. See also Marine Science Ministry of Fisheries and Aquatic Resources Development Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. 
Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. 
Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 4::: Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education. Structure A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior. Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false multiple choice questions. But during the 2000s, educators found that SBAs would be superior. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How does uric acid react to water? A. explodes B. does not dissolve C. mixes D. does not form Answer:
sciq-7968
multiple_choice
The hydroxyl radical is highly reactive because it has what?
[ "unpaired electron", "paired proton", "paired electron", "unpaired neutron" ]
A
Relavent Documents: Document 0::: In chemistry, a radical, also known as a free radical, is an atom, molecule, or ion that has at least one unpaired valence electron. With some exceptions, these unpaired electrons make radicals highly chemically reactive. Many radicals spontaneously dimerize. Most organic radicals have short lifetimes. A notable example of a radical is the hydroxyl radical (HO·), a molecule that has one unpaired electron on the oxygen atom. Two other examples are triplet oxygen and triplet carbene (꞉) which have two unpaired electrons. Radicals may be generated in a number of ways, but typical methods involve redox reactions, Ionizing radiation, heat, electrical discharges, and electrolysis are known to produce radicals. Radicals are intermediates in many chemical reactions, more so than is apparent from the balanced equations. Radicals are important in combustion, atmospheric chemistry, polymerization, plasma chemistry, biochemistry, and many other chemical processes. A majority of natural products are generated by radical-generating enzymes. In living organisms, the radicals superoxide and nitric oxide and their reaction products regulate many processes, such as control of vascular tone and thus blood pressure. They also play a key role in the intermediary metabolism of various biological compounds. Such radicals can even be messengers in a process dubbed redox signaling. A radical may be trapped within a solvent cage or be otherwise bound. Formation Radicals are either (1) formed from spin-paired molecules or (2) from other radicals. Radicals are formed from spin-paired molecules through homolysis of weak bonds or electron transfer, also known as reduction. Radicals are formed from other radicals through substitution, addition, and elimination reactions. Radical formation from spin-paired molecules Homolysis Homolysis makes two new radicals from a spin-paired molecule by breaking a covalent bond, leaving each of the fragments with one of the electrons in the bond. Bec Document 1::: O2•– + H+ + H2O2 → O2 + HO• + H2O    (step 3: propagation) Finally, the chain is terminated when the hydroxyl radical is scavenged by a ferrous ion: Fe2+ + HO• + H+ → Fe3+ + H2O        (step 4: termination) George showed in 1947 that, in water, step 3 cannot compete with the spontaneous disproportionation of superoxide Document 2::: In chemistry, an unpaired electron is an electron that occupies an orbital of an atom singly, rather than as part of an electron pair. Each atomic orbital of an atom (specified by the three quantum numbers n, l and m) has a capacity to contain two electrons (electron pair) with opposite spins. As the formation of electron pairs is often energetically favourable, either in the form of a chemical bond or as a lone pair, unpaired electrons are relatively uncommon in chemistry, because an entity that carries an unpaired electron is usually rather reactive. In organic chemistry they typically only occur briefly during a reaction on an entity called a radical; however, they play an important role in explaining reaction pathways. Radicals are uncommon in s- and p-block chemistry, since the unpaired electron occupies a valence p orbital or an sp, sp2 or sp3 hybrid orbital. These orbitals are strongly directional and therefore overlap to form strong covalent bonds, favouring dimerisation of radicals. Radicals can be stable if dimerisation would result in a weak bond or the unpaired electrons are stabilised by delocalisation. 
In contrast, radicals in d- and f-block chemistry are very common. The less directional, more diffuse d and f orbitals, in which unpaired electrons reside, overlap less effectively, form weaker bonds and thus dimerisation is generally disfavoured. These d and f orbitals also have comparatively smaller radial extension, disfavouring overlap to form dimers. Relatively more stable entities with unpaired electrons do exist, e.g. the nitric oxide molecule has one. According to Hund's rule, the spins of unpaired electrons are aligned parallel and this gives these molecules paramagnetic properties. The most stable examples of unpaired electrons are found on the atoms and ions of lanthanides and actinides. The incomplete f-shell of these entities does not interact very strongly with the environment they are in and this prevents them from being paired. The i Document 3::: Distonic ions are chemical species that contain two ionic charges on the same molecule, separated by two or more carbon or heteroatoms. A feature of distonic radical ions is that their charges and radical sites are in different locations (on separate atoms), unlike regular radicals where the formal charge and unpaired electron are in the same location. These molecular species are created by ionization of either zwitterions or diradicals; ultimately, a neutral molecule loses an electron. Through experimental research distonic radicals have been found to be extremely stable gas phase ions and can be separated into different classes depending on the inherent features of the charged portion of the ion. History In 1984 scientists Bouma, Radom and Yates originated the term through extensive experimental research but they were not the first to deal with distonic ions. Experiments date back to the 1970s with Gross and McLafferty who were the first to propose the idea of such a species. Ion structure Several efficient techniques are available to detect the presence of distonic ions; the most appropriate method will depend on the ion's internal energy and lifespan. Collisions between ions and uncharged molecules allow one to detect the location of the radical and charge site in order to confirm that the ion is not just a regular radical ion. When a molecule is ionized and can structurally be classified as a distonic ion, the molecule's kinetics and thermodynamic properties have been greatly altered. However, additional chemical properties are based on the reactions of the central excited ions. Mass spectrometry techniques are used to study their chemistry. Experimental data Distonic ions have been extensively examined due to their unique behavior and how commonly they can occur. It has been shown that in most cases distonic ions have a bonding arrangement corresponding to that of the original molecule before ionization occurred; but that distonic ions are less stable t Document 4::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. 
Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The hydroxyl radical is highly reactive because it has what? A. unpaired electron B. paired proton C. paired electron D. unpaired neutron Answer:
sciq-9239
multiple_choice
Attaching strips of neutral metals that are higher in the activity series can protect a structure from what?
[ "diffusion", "corrosion", "weathering", "deoxidation" ]
B
Relavent Documents: Document 0::: Detectable tape or Underground warning tape is a conductive tape typically applied over buried utilities made of non-conductive materials such as plastic, fiberglass, or cement. It is used because most utility location methods work best on conductive objects, and hence may easily miss structures made of non-conductive materials. The tape also serves as a physical warning. If uncovered during digging, it alerts the user to an underground object that might be damaged by further excavation. To aid in this, it is typically colored to reflect the nature of the buried object that it is protecting. It is common for construction specifications to mandate the use of such tape. The conductive material in detectable tapes is typically aluminium, but there have been studies investigating replacing this with a material which is both magnetic and conductive, to make it detectable to a wider variety of utility location techniques. See also Underground Service Alert, an organization that specializes in marking underground utilities. Document 1::: The Zinagizado is an electrochemical process to provide a ferrous metal material with anti-corrosive properties. It involves the application of a constant electric current through a circuit to break the bonds and these are attached to the metal to be coated by forming a surface coating. The alloy used is called Zinag (Zn-Al-Ag); this alloy has excellent mechanical and corrosive properties, so the piece will have increased by 60% of life. The deposition of Zinag provides environmental protection against corrosion and can be used in covering all kinds of steel metallic materials in contact with a corrosive medium. The anti-corrosive property has been obtained by the corrosion resistance of zinc achieved by the aluminium and silver addition, which is cathodically respect to the iron and steel. Cathodic protection This process is an innovation by Said Robles Casolco and Adrianni Zanatta. Patent called: Zinagizado as corrosion process for metals by electrolytic method. No. MX/a/2010/009200, IMPI-Mexico. Document 2::: NiTiNOL 60, or 60 NiTiNOL, is a Nickel Titanium alloy (nominally Ni-40wt% Ti) discovered in the late 1950s by the U. S. Naval Ordnance Laboratory (hence the "NOL" portion of the name NiTiNOL). Depending upon the heat treat history, 60 NiTiNOL has the ability to exhibit either superelastic properties in the hardened state or shape memory characteristics in the softened state. Producing the material in any meaningful quantities, however, proved quite difficult by conventional methods and the material was largely forgotten. The composition and processing parameters have recently been revived by Summit Materials, LLC under the trademarked name SM-100. SM-100 maintains 60 NiTiNOL's combination of superb corrosion resistance [NASA terms it "Corrosion Proof"] and equally impressive wear and erosion properties. In bearing lifting tests conducted by NASA, SM-100 has been shown to have over twice the life of 440C stainless steel and over ten times the life of conventional titanium alloys with a significantly lower coefficient of friction. The superelastic nature of the material gives it the ability to withstand compression loading of well over with no permanent yielding. 
Applications Common applications for Nitinol 60 include: Bearings High-end knives High-end ice hockey skate blades Implantable medical devices, including collapsible braided structures and stents Properties The following table compares 60 NiTiNOL against commonly used bearing materials. Document 3::: The Handle-o-Meter is a testing machine developed by Johnson & Johnson and now manufactured by Thwing-Albert that measures the "handle" of sheeted materials: a combination of its surface friction and flexibility. Originally, it was used to test the durability and flexibility of toilet paper and paper towels. The test sample is placed over an adjustable slot. The resistance encountered by the penetrator blade as it is moved into the slot by a pivoting arm is measured by the machine. Details The data collected when such nonwovens, tissues, toweling, film and textiles are tested has been shown to correlate well with the actual performance of these specific materials as a finished product. Materials are simply placed over the slot that extends across the instrument platform, and then the tester presses "test". There are three different test modes which can be applied to the material: single, double, and quadruple. The average is automatically calculated for double or quadruple tests. Features Adjustable slot openings Interchangeable beams Auto-ranging 2 x 40 LCD display Statistical Analysis RS-232 Output and Serial Port Industry Standards: ASTM D2923, D6828-02 TAPPI T498 INDA IST 90.3 Document 4::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in their understanding over the course. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Attaching strips of neutral metals that are higher in the activity series can protect a structure from what? A. diffusion B. corrosion C. weathering D. deoxidation Answer:
sciq-6584
multiple_choice
Hydrogen chloride contains one atom of hydrogen and one atom of what?
[ "chlorine", "nitrogen", "magnesium", "calcium" ]
A
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in their understanding over the course. Document 1::: Sodium hydride is the chemical compound with the empirical formula NaH. This alkali metal hydride is primarily used as a strong yet combustible base in organic synthesis. NaH is a saline (salt-like) hydride, composed of Na+ and H− ions, in contrast to molecular hydrides such as borane, methane, ammonia, and water. It is an ionic material that is insoluble in all solvents (other than molten Na), consistent with the fact that H− ions do not exist in solution. Because of the insolubility of NaH, all reactions involving NaH occur at the surface of the solid. Basic properties and structure NaH is produced by the direct reaction of hydrogen and liquid sodium. Pure NaH is colorless, although samples generally appear grey. NaH is around 40% denser than Na (0.968 g/cm3). NaH, like LiH, KH, RbH, and CsH, adopts the NaCl crystal structure. In this motif, each Na+ ion is surrounded by six H− centers in an octahedral geometry. The ionic radii of H− (146 pm in NaH) and F− (133 pm) are comparable, as judged by the Na−H and Na−F distances. "Inverse sodium hydride" A very unusual situation occurs in a compound dubbed "inverse sodium hydride", which contains H+ and Na− ions. Na− is an alkalide, and this compound differs from ordinary sodium hydride in having a much higher energy content due to the net displacement of two electrons from hydrogen to sodium. A derivative of this "inverse sodium hydride" arises in the presence of the base [36]adamanzane.
This molecule irreversibly encapsulates the H+ and shields it from interaction with the alkalide Na−. Theoretical work has suggested that even an unprotected protonated tertiary amine complexed with the sodium alkalide might be metastable under certain solvent conditions, though the barrier to reaction would be small and finding a suitable solvent might be difficult. Applications in organic synthesis As a strong base NaH is a base of wide scope and utility in organic chemistry. As a superbase, it is capable of deprotonating a ra Document 2::: Atomicity is the total number of atoms present in a molecule. For example, each molecule of oxygen (O2) is composed of two oxygen atoms. Therefore, the atomicity of oxygen is 2. In older contexts, atomicity is sometimes equivalent to valency. Some authors also use the term to refer to the maximum number of valencies observed for an element. Classifications Based on atomicity, molecules can be classified as: Monoatomic (composed of one atom). Examples include He (helium), Ne (neon), Ar (argon), and Kr (krypton). All noble gases are monoatomic. Diatomic (composed of two atoms). Examples include H2 (hydrogen), N2 (nitrogen), O2 (oxygen), F2 (fluorine), and Cl2 (chlorine). Halogens are usually diatomic. Triatomic (composed of three atoms). Examples include O3 (ozone). Polyatomic (composed of three or more atoms). Examples include S8. Atomicity may vary in different allotropes of the same element. The exact atomicity of metals, as well as some other elements such as carbon, cannot be determined because they consist of a large and indefinite number of atoms bonded together. They are typically designated as having an atomicity of 1. The atomicity of homonuclear molecule can be derived by dividing the molecular weight by the atomic weight. For example, the molecular weight of oxygen is 31.999, while its atomic weight is 15.879; therefore, its atomicity is approximately 2 (31.999/15.879 ≈ 2). Examples The most common values of atomicity for the first 30 elements in the periodic table are as follows: Document 3::: Steudel R 2020, Chemistry of the Non-metals: Syntheses - Structures - Bonding - Applications, in collaboration with D Scheschkewitz, Berlin, Walter de Gruyter, . ▲ An updated translation of the 5th German edition of 2013, incorporating the literature up to Spring 2019. Twenty-three nonmetals, including B, Si, Ge, As, Se, Te, and At but not Sb (nor Po). The nonmetals are identified on the basis of their electrical conductivity at absolute zero putatively being close to zero, rather than finite as in the case of metals. That does not work for As however, which has the electronic structure of a semimetal (like Sb). Halka M & Nordstrom B 2010, "Nonmetals", Facts on File, New York, A reading level 9+ book covering H, C, N, O, P, S, Se. Complementary books by the same authors examine (a) the post-transition metals (Al, Ga, In, Tl, Sn, Pb and Bi) and metalloids (B, Si, Ge, As, Sb, Te and Po); and (b) the halogens and noble gases. Woolins JD 1988, Non-Metal Rings, Cages and Clusters, John Wiley & Sons, Chichester, . A more advanced text that covers H; B; C, Si, Ge; N, P, As, Sb; O, S, Se and Te. Steudel R 1977, Chemistry of the Non-metals: With an Introduction to Atomic Structure and Chemical Bonding, English edition by FC Nachod & JJ Zuckerman, Berlin, Walter de Gruyter, . ▲ Twenty-four nonmetals, including B, Si, Ge, As, Se, Te, Po and At. Powell P & Timms PL 1974, The Chemistry of the Non-metals, Chapman & Hall, London, . 
▲ Twenty-two nonmetals including B, Si, Ge, As and Te. Tin and antimony are shown as being intermediate between metals and nonmetals; they are later shown as either metals or nonmetals. Astatine is counted as a metal. Document 4::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate what the student can do and what the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge is then a subset of Q; the set of all feasible states forms the knowledge space. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Hydrogen chloride contains one atom of hydrogen and one atom of what? A. chlorine B. nitrogen C. magnesium D. calcium Answer:
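A note on the model structure described in Document 4 above: the defining property of a knowledge space is that the family of feasible knowledge states contains the empty state and the full domain Q and is closed under union. The short Python sketch below is illustrative only; the function name and the toy domain are assumptions, not drawn from any cited source.

from itertools import combinations

def is_knowledge_space(domain, states):
    """Check the defining closure properties of a knowledge space:
    the empty state and the full domain are feasible, and the union
    of any two feasible states is again feasible."""
    if frozenset() not in states or domain not in states:
        return False
    return all((a | b) in states for a, b in combinations(states, 2))

Q = frozenset({"counting", "addition", "multiplication"})
feasible = {frozenset(), frozenset({"counting"}),
            frozenset({"counting", "addition"}), Q}
print(is_knowledge_space(Q, feasible))  # True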
sciq-786
multiple_choice
What is the volume of the molecules of an ideal gas?
[ "two", "one", "zero", "three" ]
C
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in their understanding over the course. Document 1::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions.
Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate what the student can do and what the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge is then a subset of Q; the set of all feasible states forms the knowledge space. Document 2::: The gas composition of any gas can be characterised by listing the pure substances it contains, and stating for each substance its proportion of the gas mixture's molecule count. Gas composition of air To give a familiar example, air has a composition of: Nitrogen (N2) 78.084%, Oxygen (O2) 20.9476%, Argon (Ar) 0.934%, Carbon dioxide (CO2) 0.0314%. Standard Dry Air is the agreed-upon gas composition for air from which all water vapour has been removed. There are various standards bodies which publish documents that define a dry air gas composition. Each standard provides a list of constituent concentrations, a gas density at standard conditions and a molar mass. It is extremely unlikely that the actual composition of any specific sample of air will completely agree with any definition for standard dry air. While the various definitions for standard dry air all attempt to provide realistic information about the constituents of air, the definitions are important in and of themselves because they establish a standard which can be cited in legal contracts and publications documenting measurement calculation methodologies or equations of state. The standards below are two examples of commonly used and cited publications that provide a composition for standard dry air: ISO TR 29922-2017 provides a definition for standard dry air which specifies an air molar mass of 28.965 46 ± 0.000 17 kg·kmol-1. GPA 2145:2009 is published by the Gas Processors Association. It provides a molar mass for air of 28.9625 g/mol, and provides a composition for standard dry air as a footnote. Document 3::: In chemistry and related fields, the molar volume, symbol Vm, of a substance is the ratio of the volume occupied by a substance to the amount of substance, usually at a given temperature and pressure. It is equal to the molar mass (M) divided by the mass density (ρ): Vm = M/ρ. The molar volume has the SI unit of cubic metres per mole (m3/mol), although it is more typical to use the units cubic decimetres per mole (dm3/mol) for gases, and cubic centimetres per mole (cm3/mol) for liquids and solids. Definition The molar volume of a substance i is defined as its molar mass divided by its density ρi0: Vm,i = Mi/ρi0. For an ideal mixture containing N components, the molar volume of the mixture is the weighted sum of the molar volumes of its individual components. For a real mixture, the molar volume cannot be calculated without knowing the density. There are many liquid–liquid mixtures, for instance mixing pure ethanol and pure water, which may experience contraction or expansion upon mixing. This effect is represented by the quantity excess volume of the mixture, an example of excess property. Relation to specific volume Molar volume is related to specific volume through multiplication by the molar mass.
This follows from above, since the specific volume is the reciprocal of the density of a substance: Vm = M·v, where v = 1/ρ. Ideal gases For ideal gases, the molar volume is given by the ideal gas equation; this is a good approximation for many common gases at standard temperature and pressure. The ideal gas equation can be rearranged to give an expression for the molar volume of an ideal gas: Vm = V/n = RT/p. Hence, for a given temperature and pressure, the molar volume is the same for all ideal gases and is based on the gas constant: R = 8.314 J·K−1·mol−1, or about 0.08206 L·atm·K−1·mol−1. The molar volume of an ideal gas at 100 kPa (1 bar) is 22.711 dm3/mol at 0 °C and 24.790 dm3/mol at 25 °C. The molar volume of an ideal gas at 1 atmosphere of pressure is 22.414 dm3/mol at 0 °C and 24.466 dm3/mol at 25 °C. Crystalline solids For crystalline solids, the molar volume can be measured by X-ray crystallography. The unit cell Document 4::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics, developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the volume of the molecules of an ideal gas? A. two B. one C. zero D. three Answer:
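As a worked illustration of the ideal-gas molar-volume relation Vm = RT/p quoted in Document 3 above, the following minimal Python sketch reproduces the four reference values; the function name and the unit conversion are illustrative assumptions.

# Ideal-gas molar volume, Vm = R*T/p.
R = 8.314462618  # gas constant, J/(mol*K)

def molar_volume_dm3(temp_k, pressure_pa):
    """Return the ideal-gas molar volume in dm^3/mol (litres per mole)."""
    return R * temp_k / pressure_pa * 1000.0  # m^3/mol -> dm^3/mol

print(molar_volume_dm3(273.15, 100_000))  # ~22.711 dm^3/mol at 0 degC, 1 bar
print(molar_volume_dm3(298.15, 100_000))  # ~24.790 dm^3/mol at 25 degC, 1 bar
print(molar_volume_dm3(273.15, 101_325))  # ~22.414 dm^3/mol at 0 degC, 1 atm
print(molar_volume_dm3(298.15, 101_325))  # ~24.466 dm^3/mol at 25 degC, 1 atm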
sciq-1972
multiple_choice
The law of conservation of mass states that matter cannot be created or what?
[ "moved", "observed", "changed", "destroyed" ]
D
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in their understanding over the course. Document 1::: Scientific laws or laws of science are statements, based on repeated experiments or observations, that describe or predict a range of natural phenomena. The term law has diverse usage in many cases (approximate, accurate, broad, or narrow) across all fields of natural science (physics, chemistry, astronomy, geoscience, biology). Laws are developed from data and can be further developed through mathematics; in all cases they are directly or indirectly based on empirical evidence. It is generally understood that they implicitly reflect, though they do not explicitly assert, causal relationships fundamental to reality, and are discovered rather than invented. Scientific laws summarize the results of experiments or observations, usually within a certain range of application. In general, the accuracy of a law does not change when a new theory of the relevant phenomenon is worked out, but rather the scope of the law's application, since the mathematics or statement representing the law does not change. As with other kinds of scientific knowledge, scientific laws do not express absolute certainty, as mathematical theorems or identities do. A scientific law may be contradicted, restricted, or extended by future observations. A law can often be formulated as one or several statements or equations, so that it can predict the outcome of an experiment.
Laws differ from hypotheses and postulates, which are proposed during the scientific process before and during validation by experiment and observation. Hypotheses and postulates are not laws, since they have not been verified to the same degree, although they may lead to the formulation of laws. Laws are narrower in scope than scientific theories, which may entail one or several laws. Science distinguishes a law or theory from facts. Calling a law a fact is ambiguous, an overstatement, or an equivocation. The nature of scientific laws has been much discussed in philosophy, but in essence scientific laws are simply empirical Document 2::: The principle of mutability is the notion that any physical property which appears to follow a conservation law may undergo some physical process that violates its conservation. John Archibald Wheeler offered this speculative principle after Stephen Hawking predicted the evaporation of black holes which violates baryon number conservation. See also Philosophy of physics Document 3::: "Unified Science" can refer to any of three related strands in contemporary thought. Belief in the unity of science was a central tenet of logical positivism. Different logical positivists construed this doctrine in several different ways, e.g. as a reductionist thesis, that the objects investigated by the special sciences reduce to the objects of a common, putatively more basic domain of science, usually thought to be physics; as the thesis that all of the theories and results of the various sciences can or ought to be expressed in a common language or "universal slang"; or as the thesis that all the special sciences share a common method. The writings of Edward Haskell and a few associates, seeking to rework science into a single discipline employing a common artificial language. This work culminated in the 1972 publication of Full Circle: The Moral Force of Unified Science. The vast part of the work of Haskell and his contemporaries remains unpublished, however. Timothy Wilken and Anthony Judge have recently revived and extended the insights of Haskell and his coworkers. Unified Science has been a consistent thread since the 1940s in Howard T. Odum's systems ecology and the associated Emergy Synthesis, modeling the "ecosystem": the geochemical, biochemical, and thermodynamic processes of the lithosphere and biosphere. Modeling such earthly processes in this manner requires a science uniting geology, physics, biology, and chemistry (H.T.Odum 1995). With this in mind, Odum developed a common language of science based on electronic schematics, with applications to ecology economic systems in mind (H.T.Odum 1994). See also Consilience — the unification of knowledge, e.g. science and the humanities Tree of knowledge system Document 4::: Physics education or physics teaching refers to the education methods currently used to teach physics. The occupation is called physics educator or physics teacher. Physics education research refers to an area of pedagogical research that seeks to improve those methods. Historically, physics has been taught at the high school and college level primarily by the lecture method together with laboratory exercises aimed at verifying concepts taught in the lectures. These concepts are better understood when lectures are accompanied with demonstration, hand-on experiments, and questions that require students to ponder what will happen in an experiment and why. 
Students who participate in active learning, for example with hands-on experiments, learn through self-discovery. By trial and error they learn to change their preconceptions about phenomena in physics and discover the underlying concepts. Physics education is part of the broader area of science education. Ancient Greece Aristotle wrote what is now considered the first textbook of physics. Aristotle's ideas were taught unchanged until the Late Middle Ages, when scientists started making discoveries that didn't fit them. For example, Copernicus' discovery contradicted Aristotle's idea of an Earth-centric universe. Aristotle's ideas about motion weren't displaced until the end of the 17th century, when Newton published his ideas. Today's physics students often think of physics concepts in Aristotelian terms, despite being taught only Newtonian concepts. Hong Kong High schools In Hong Kong, physics is a subject for public examination. Local students in Form 6 take the public exam of Hong Kong Diploma of Secondary Education (HKDSE). Compared with other syllabi such as GCSE and GCE, which cover a wider and broader range of topics, the Hong Kong syllabus goes into greater depth and involves more challenging calculations. Topics are narrowed down to a smaller number compared to the A-level due to insufficient teaching time. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The law of conservation of mass states that matter cannot be created or what? A. moved B. observed C. changed D. destroyed Answer:
sciq-8646
multiple_choice
Contributing to the blood-brain barrier is one of the jobs of glial cells, which support neurons in what system?
[ "digestive system", "peripheral nervous system", "circulatory system", "central nervous system" ]
D
Relevant Documents: Document 0::: The blood–brain barrier (BBB) is a highly selective semipermeable border of endothelial cells that regulates the transfer of solutes and chemicals between the circulatory system and the central nervous system, thus protecting the brain from harmful or unwanted substances in the blood. The blood–brain barrier is formed by endothelial cells of the capillary wall, astrocyte end-feet ensheathing the capillary, and pericytes embedded in the capillary basement membrane. This system allows the passage of some small molecules by passive diffusion, as well as the selective and active transport of various nutrients, ions, organic anions, and macromolecules such as glucose and amino acids that are crucial to neural function. The blood–brain barrier restricts the passage of pathogens, the diffusion of solutes in the blood, and large or hydrophilic molecules into the cerebrospinal fluid, while allowing the diffusion of hydrophobic molecules (O2, CO2, hormones) and small non-polar molecules. Cells of the barrier actively transport metabolic products such as glucose across the barrier using specific transport proteins. The barrier also restricts the passage of peripheral immune factors, like signaling molecules, antibodies, and immune cells, into the CNS, thus insulating the brain from damage due to peripheral immune events. Specialized brain structures participating in sensory and secretory integration within brain neural circuits—the circumventricular organs and choroid plexus—have in contrast highly permeable capillaries. Structure The BBB results from the selectivity of the tight junctions between the endothelial cells of brain capillaries, restricting the passage of solutes. At the interface between blood and the brain, endothelial cells are adjoined continuously by these tight junctions, which are composed of smaller subunits of transmembrane proteins, such as occludin, claudins (such as Claudin-5), junctional adhesion molecule (such as JAM-A). Each of these tight junct Document 1::: Catherina Gwynne Becker (née Krüger) is an Alexander von Humboldt Professor at TU Dresden, and was formerly Professor of Neural Development and Regeneration at the University of Edinburgh. Early life and education Catherina Becker was born in Marburg, Germany in 1964. She was educated at the in Bremen, before going on to study at the University of Bremen where she obtained an MSci in Biology and her PhD (Dr. rer. nat.) in 1993, investigating visual system development and regeneration in frogs and salamanders under the supervision of Gerhard Roth. She then trained as a postdoctoral researcher at the Swiss Federal Institute of Technology in Zürich, the Department Dev Cell Biol funded by an EMBO long-term fellowship, at the University of California, Irvine in the USA, and at the Centre for Molecular Neurobiology Hamburg (ZMNH), Germany, where she took a position of group leader in 2000 and finished her 'Habilitation' in neurobiology in 2012. Career Becker joined the University of Edinburgh in 2005 as a senior lecturer and was appointed personal chair in neural development and regeneration in 2013. She was also the Director of Postgraduate Training at the Centre for Neuroregeneration up to 2015, then centre director up to 2017. In 2021 she received an Alexander von Humboldt Professorship, joining the at the Technical University of Dresden.
Research Becker's research focuses on a better understanding of the factors governing the generation of neurons and axonal pathfinding in the CNS during development and regeneration using the zebrafish model to identify fundamental mechanisms in vertebrates with clear translational implications for CNS injury and neurodegenerative diseases. The Becker group established the zebrafish as a model for spinal cord regeneration. Their research found that functional regeneration is near perfect, but anatomical repair does not fully recreate the previous network, instead, new neurons are generated and extensive rewiring occurs. They have identified neurotra Document 2::: The following diagram is provided as an overview of and topical guide to the human nervous system: Human nervous system – the part of the human body that coordinates a person's voluntary and involuntary actions and transmits signals between different parts of the body. The human nervous system consists of two main parts: the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS contains the brain and spinal cord. The PNS consists mainly of nerves, which are long fibers that connect the CNS to every other part of the body. The PNS includes motor neurons, mediating voluntary movement; the autonomic nervous system, comprising the sympathetic nervous system and the parasympathetic nervous system and regulating involuntary functions; and the enteric nervous system, a semi-independent part of the nervous system whose function is to control the gastrointestinal system. Evolution of the human nervous system Evolution of nervous systems Evolution of human intelligence Evolution of the human brain Paleoneurology Some branches of science that study the human nervous system Neuroscience Neurology Paleoneurology Central nervous system The central nervous system (CNS) is the largest part of the nervous system and includes the brain and spinal cord. Spinal cord Brain Brain – center of the nervous system. Outline of the human brain List of regions of the human brain Principal regions of the vertebrate brain: Peripheral nervous system Peripheral nervous system (PNS) – nervous system structures that do not lie within the CNS. Sensory system A sensory system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory receptors, neural pathways, and parts of the brain involved in sensory perception. List of sensory systems Sensory neuron Perception Visual system Auditory system Somatosensory system Vestibular system Olfactory system Taste Pain Components of the nervous system Neuron I Document 3::: Gliosis is a nonspecific reactive change of glial cells in response to damage to the central nervous system (CNS). In most cases, gliosis involves the proliferation or hypertrophy of several different types of glial cells, including astrocytes, microglia, and oligodendrocytes. In its most extreme form, the proliferation associated with gliosis leads to the formation of a glial scar. The process of gliosis involves a series of cellular and molecular events that occur over several days. Typically, the first response to injury is the migration of macrophages and local microglia to the injury site. This process, which constitutes a form of gliosis known as microgliosis, begins within hours of the initial CNS injury. Later, after 3–5 days, oligodendrocyte precursor cells are also recruited to the site and may contribute to remyelination. 
The final component of gliosis is astrogliosis, the proliferation of surrounding astrocytes, which are the main constituents of the glial scar. Gliosis has historically been given a negative connotation due to its appearance in many CNS diseases and the inhibition of axonal regeneration caused by glial scar formation. However, gliosis has been shown to have both beneficial and detrimental effects, and the balance between these is due to a complex array of factors and molecular signaling mechanisms, which affect the reaction of all glial cell types. Astrogliosis Reactive astrogliosis is the most common form of gliosis and involves the proliferation of astrocytes, a type of glial cell responsible for maintaining extracellular ion and neurotransmitter concentrations, modulating synapse function, and forming the blood–brain barrier. Like other forms of gliosis, astrogliosis accompanies traumatic brain injury as well as many neuropathologies, ranging from amyotrophic lateral sclerosis to fatal familial insomnia. Although the mechanisms which lead to astrogliosis are not fully understood, neuronal injury is well understood to cause astrocy Document 4::: A gemistocyte (/dʒɛˈmɪstəsaɪt/ jem-ISS-tə-syte; from Greek γέμιζω (gemizo) 'to fill up') is a swollen, reactive astrocyte. Gemistocytes are glial cells that are characterized by billowing, eosinophilic cytoplasm and a peripherally positioned, flattened nucleus. Gemistocytes most often appear during acute injury; and eventually, shrink in size. They are usually present in anoxic-ischemic brains, which occurs when there is a complete lack of blood flow to the brain. The human brain contains many cells that can impact gliosis, including endothelial progenitors, fibroblast lineage cells, different types of inflammatory cells, and various types of glia and neural-lineage progenitor cells, which include astrocytes. Gliosis occurs when the body creates more, or larger, glial cells that cause scars in the brain that impact body functions. The human body has many body functions to maintain homeostasis because gliosis can occur immediately upon injury. Anoxic-ischemic brains usually occur in patients who have had cardiac arrest and prolonged attempt at cardiopulmonary resuscitation. Functions in the body When present in anoxic-ischemic brains, gemistocytes are regularly encountered in glial neoplasms, also known as glioma, which is a type of tumor that occurs in the brain and spinal cord. Usually, gliomas begin in the glial cells that surround the nerve cells to help them function. Many gliomas exhibit cells that do not exist in normal brain tissue and are not seen in glial differentiation. Of these gliomas are astrocytomas, which is a type of cancer that occurs in the brain or spinal cord. The main role of astrocytes is to maintain brain homeostasis and neuronal metabolism. When the astrocytes become activated, they begin to respond to damage. Astrocyte activation, known as astrogliosis, responds to neurological trauma, infections, degradations, epilepsy, and tumorigenesis. Each neurological insult plays a major role in astrocyte activation and response to that specific d The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Contributing to the blood-brain barrier is one of the jobs of glial cells, which support neurons in what system? A. digestive system B. peripheral nervous system C. circulatory system D. central nervous system Answer:
sciq-11005
multiple_choice
What is the first stage of cellular respiration?
[ "photosynthesis", "glycolysis", "hydrolysis", "amniocentesis" ]
B
Relevant Documents: Document 0::: Cellular respiration is the process by which biological fuels are oxidized in the presence of an inorganic electron acceptor, such as oxygen, to drive the bulk production of adenosine triphosphate (ATP), which contains energy. Cellular respiration may be described as a set of metabolic reactions and processes that take place in the cells of organisms to convert chemical energy from nutrients into ATP, and then release waste products. Cellular respiration is a vital process that happens in the cells of living organisms, including humans, plants, and animals. It's how cells produce energy to power all the activities necessary for life. The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, producing large amounts of energy (ATP). Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it is an unusual one because of the slow, controlled release of energy from the series of reactions. Nutrients that are commonly used by animal and plant cells in respiration include sugar, amino acids and fatty acids, and the most common oxidizing agent is molecular oxygen (O2). The chemical energy stored in ATP (the bond of its third phosphate group to the rest of the molecule can be broken allowing more stable products to form, thereby releasing energy for use by the cell) can then be used to drive processes requiring energy, including biosynthesis, locomotion or transportation of molecules across cell membranes. Aerobic respiration Aerobic respiration requires oxygen (O2) in order to create ATP. Although carbohydrates, fats and proteins are consumed as reactants, aerobic respiration is the preferred method of pyruvate production in glycolysis, and requires that pyruvate enter the mitochondria in order to be fully oxidized by the citric acid cycle. Document 1::: Cellular waste products are formed as a by-product of cellular respiration, a series of processes and reactions that generate energy for the cell, in the form of ATP. Examples of cellular respiration creating cellular waste products are aerobic respiration and anaerobic respiration. Each pathway generates different waste products. Aerobic respiration When in the presence of oxygen, cells use aerobic respiration to obtain energy from glucose molecules. Simplified Theoretical Reaction: C6H12O6 (aq) + 6O2 (g) → 6CO2 (g) + 6H2O (l) + ~ 30ATP Cells undergoing aerobic respiration produce 6 molecules of carbon dioxide, 6 molecules of water, and up to 30 molecules of ATP (adenosine triphosphate), which is directly used to produce energy, from each molecule of glucose in the presence of surplus oxygen. In aerobic respiration, oxygen serves as the recipient of electrons from the electron transport chain. Aerobic respiration is thus very efficient because oxygen is a strong oxidant. Aerobic respiration proceeds in a series of steps, which also increases efficiency - since glucose is broken down gradually and ATP is produced as needed, less energy is wasted as heat. This strategy results in the waste products H2O and CO2 being formed in different amounts at different phases of respiration. CO2 is formed in pyruvate decarboxylation, H2O is formed in oxidative phosphorylation, and both are formed in the citric acid cycle.
The simple nature of the final products also indicates the efficiency of this method of respiration. All of the energy stored in the carbon-carbon bonds of glucose is released, leaving CO2 and H2O. Although there is energy stored in the bonds of these molecules, this energy is not easily accessible by the cell. All usable energy is efficiently extracted. Anaerobic respiration Anaerobic respiration is done by aerobic organisms when there is not sufficient oxygen in a cell to undergo aerobic respiration, as well as by cells called anaerobes that Document 2::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 3::: MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States. Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students, the remainder require a subscription. As of this writing the service is suspended with the message: "Please check back with us in 2017". External links MicrobeLibrary Microbiology Document 4::: Reactions The reactions of the MEP pathway are as follows, taken primarily from Eisenreich and co-workers, excep The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the first stage of cellular respiration? A. photosynthesis B. glycolysis C. hydrolysis D. amniocentesis Answer:
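The simplified aerobic-respiration stoichiometry quoted in Document 1 above (C6H12O6 + 6 O2 → 6 CO2 + 6 H2O + ~30 ATP) lends itself to a one-line mole calculation; the Python sketch below is illustrative only (the ~30 ATP figure is the approximate yield quoted in the document, and the function name is an assumption).

ATP_PER_GLUCOSE = 30  # approximate yield per the simplified reaction above

def aerobic_products(mol_glucose):
    """Moles consumed/formed per the 1:6:6:6 stoichiometry quoted above."""
    return {
        "O2_consumed": 6 * mol_glucose,
        "CO2_produced": 6 * mol_glucose,
        "H2O_produced": 6 * mol_glucose,
        "ATP_produced": ATP_PER_GLUCOSE * mol_glucose,
    }

print(aerobic_products(0.5))
# {'O2_consumed': 3.0, 'CO2_produced': 3.0, 'H2O_produced': 3.0, 'ATP_produced': 15.0}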
sciq-10942
multiple_choice
Having polar bonds may make a covalent compound what?
[ "ionic", "negatively charged", "polar", "neutral" ]
C
Relevant Documents: Document 0::: In chemistry, polarity is a separation of electric charge leading to a molecule or its chemical groups having an electric dipole moment, with a negatively charged end and a positively charged end. Polar molecules must contain one or more polar bonds due to a difference in electronegativity between the bonded atoms. Molecules containing polar bonds have no molecular polarity if the bond dipoles cancel each other out by symmetry. Polar molecules interact through dipole-dipole intermolecular forces and hydrogen bonds. Polarity underlies a number of physical properties including surface tension, solubility, and melting and boiling points. Polarity of bonds Not all atoms attract electrons with the same force. The amount of "pull" an atom exerts on its electrons is called its electronegativity. Atoms with high electronegativities, such as fluorine, oxygen, and nitrogen, exert a greater pull on electrons than atoms with lower electronegativities such as alkali metals and alkaline earth metals. In a bond, this leads to unequal sharing of electrons between the atoms, as electrons will be drawn closer to the atom with the higher electronegativity. Because electrons have a negative charge, the unequal sharing of electrons within a bond leads to the formation of an electric dipole: a separation of positive and negative electric charge. Because the amount of charge separated in such dipoles is usually smaller than a fundamental charge, they are called partial charges, denoted as δ+ (delta plus) and δ− (delta minus). These symbols were introduced by Sir Christopher Ingold and Dr. Edith Hilda (Usherwood) Ingold in 1926. The bond dipole moment is calculated by multiplying the amount of charge separated and the distance between the charges. These dipoles within molecules can interact with dipoles in other molecules, creating dipole-dipole intermolecular forces. Classification Bonds can fall between one of two extremes: completely nonpolar or completely polar. A completely nonpolar Document 1::: A carbon–nitrogen bond is a covalent bond between carbon and nitrogen and is one of the most abundant bonds in organic chemistry and biochemistry. Nitrogen has five valence electrons and in simple amines it is trivalent, with the two remaining electrons forming a lone pair. Through that pair, nitrogen can form an additional bond to hydrogen, making it tetravalent and with a positive charge in ammonium salts. Many nitrogen compounds can thus be potentially basic, but the degree depends on the configuration: the nitrogen atom in amides is not basic due to delocalization of the lone pair into a double bond, and in pyrrole the lone pair is part of an aromatic sextet. Similar to carbon–carbon bonds, these bonds can form stable double bonds, as in imines, and triple bonds, such as nitriles. Bond lengths range from 147.9 pm for simple amines to 147.5 pm for C-N= compounds such as nitromethane to 135.2 pm for partial double bonds in pyridine to 115.8 pm for triple bonds as in nitriles. A CN bond is strongly polarized towards nitrogen (the electronegativities of C and N are 2.55 and 3.04, respectively) and subsequently molecular dipole moments can be high: cyanamide 4.27 D, diazomethane 1.5 D, methyl azide 2.17 D, pyridine 2.19 D. For this reason many compounds containing CN bonds are water-soluble. N-philes are a group of radical molecules which are specifically attracted to C=N bonds. The carbon–nitrogen bond can be analyzed by X-ray photoelectron spectroscopy (XPS).
Depending on the bonding states, the peak positions differ in N1s XPS spectra. Nitrogen functional groups See also Cyanide Other carbon bonds with group 15 elements: carbon–nitrogen bonds, carbon–phosphorus bonds Other carbon bonds with period 2 elements: carbon–lithium bonds, carbon–beryllium bonds, carbon–boron bonds, carbon–carbon bonds, carbon–nitrogen bonds, carbon–oxygen bonds, carbon–fluorine bonds Carbon–hydrogen bond Document 2::: Molecular binding is an attractive interaction between two molecules that results in a stable association in which the molecules are in close proximity to each other. It is formed when atoms or molecules bind together by sharing of electrons. It often, but not always, involves some chemical bonding. In some cases, the associations can be quite strong—for example, the protein streptavidin and the vitamin biotin have a dissociation constant (reflecting the ratio between bound and free biotin) on the order of 10−14—and so the reactions are effectively irreversible. The result of molecular binding is sometimes the formation of a molecular complex in which the attractive forces holding the components together are generally non-covalent, and thus are normally energetically weaker than covalent bonds. Molecular binding occurs in biological complexes (e.g., between pairs or sets of proteins, or between a protein and a small molecule ligand it binds) and also in abiologic chemical systems, e.g. as in cases of coordination polymers and coordination networks such as metal-organic frameworks. Types Molecular binding can be classified into the following types: Non-covalent – no chemical bonds are formed between the two interacting molecules hence the association is fully reversible Reversible covalent – a chemical bond is formed, however the free energy difference separating the noncovalently-bonded reactants from bonded product is near equilibrium and the activation barrier is relatively low such that the reverse reaction which cleaves the chemical bond easily occurs Irreversible covalent – a chemical bond is formed in which the product is thermodynamically much more stable than the reactants such that the reverse reaction does not take place. Bound molecules are sometimes called a "molecular complex"—the term generally refers to non-covalent associations. Non-covalent interactions can effectively become irreversible; for example, tight binding inhibitors of enzymes Document 3::: An intramolecular force (or primary force) is any force that binds together the atoms making up a molecule or compound, not to be confused with intermolecular forces, which are the forces present between molecules. The subtle difference in the name comes from the Latin roots of English with inter meaning between or among and intra meaning inside. Chemical bonds are considered to be intramolecular forces which are often stronger than intermolecular forces present between non-bonding atoms or molecules. Types The classical model identifies three main types of chemical bonds — ionic, covalent, and metallic — distinguished by the degree of charge separation between participating atoms. The characteristics of the bond formed can be predicted by the properties of constituent atoms, namely electronegativity. They differ in the magnitude of their bond enthalpies, a measure of bond strength, and thus affect the physical and chemical properties of compounds in different ways. The percentage of ionic character is directly proportional to the difference in electronegativity of the bonded atoms.
Ionic bond An ionic bond can be approximated as complete transfer of one or more valence electrons of atoms participating in bond formation, resulting in a positive ion and a negative ion bound together by electrostatic forces. Electrons in an ionic bond tend to be mostly found around one of the two constituent atoms due to the large electronegativity difference between the two atoms, generally more than 1.9, (greater difference in electronegativity results in a stronger bond); this is often described as one atom giving electrons to the other. This type of bond is generally formed between a metal and nonmetal, such as sodium and chlorine in NaCl. Sodium would give an electron to chlorine, forming a positively charged sodium ion and a negatively charged chloride ion. Covalent bond In a true covalent bond, the electrons are shared evenly between the two atoms of the bond; there is little or no charge separa Document 4::: In chemistry, the carbon-hydrogen bond ( bond) is a chemical bond between carbon and hydrogen atoms that can be found in many organic compounds. This bond is a covalent, single bond, meaning that carbon shares its outer valence electrons with up to four hydrogens. This completes both of their outer shells, making them stable. Carbon–hydrogen bonds have a bond length of about 1.09 Å (1.09 × 10−10 m) and a bond energy of about 413 kJ/mol (see table below). Using Pauling's scale—C (2.55) and H (2.2)—the electronegativity difference between these two atoms is 0.35. Because of this small difference in electronegativities, the bond is generally regarded as being non-polar. In structural formulas of molecules, the hydrogen atoms are often omitted. Compound classes consisting solely of bonds and bonds are alkanes, alkenes, alkynes, and aromatic hydrocarbons. Collectively they are known as hydrocarbons. In October 2016, astronomers reported that the very basic chemical ingredients of life—the carbon-hydrogen molecule (CH, or methylidyne radical), the carbon-hydrogen positive ion () and the carbon ion ()—are the result, in large part, of ultraviolet light from stars, rather than in other ways, such as the result of turbulent events related to supernovae and young stars, as thought earlier. Bond length The length of the carbon-hydrogen bond varies slightly with the hybridisation of the carbon atom. A bond between a hydrogen atom and an sp2 hybridised carbon atom is about 0.6% shorter than between hydrogen and sp3 hybridised carbon. A bond between hydrogen and sp hybridised carbon is shorter still, about 3% shorter than sp3 C-H. This trend is illustrated by the molecular geometry of ethane, ethylene and acetylene. Reactions The C−H bond in general is very strong, so it is relatively unreactive. In several compound classes, collectively called carbon acids, the C−H bond can be sufficiently acidic for proton removal. Unactivated C−H bonds are found in alkanes and are no The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Having polar bonds may make a covalent compound what? A. ionic B. negatively charged C. polar D. neutral Answer:
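Documents 0 and 3 above note that a bond dipole moment is the product of the separated charge and the separation distance, and that percent ionic character tracks the electronegativity difference. The short Python sketch below combines the two ideas to estimate the fractional ionic character of the H−Cl bond; the bond length and measured dipole moment used are standard textbook values, assumed here purely for illustration.

E_CHARGE = 1.602176634e-19  # elementary charge, C
DEBYE = 3.33564e-30         # 1 debye in C*m

def dipole_debye(charge_c, distance_m):
    """Dipole moment in debye for a charge q separated by distance d (mu = q*d)."""
    return charge_c * distance_m / DEBYE

bond_length = 127.4e-12  # H-Cl bond length in metres (textbook value)
mu_ionic = dipole_debye(E_CHARGE, bond_length)  # fully ionic limit, ~6.1 D
mu_measured = 1.08       # measured HCl dipole moment, D (textbook value)
print(f"ionic character ~ {100 * mu_measured / mu_ionic:.0f}%")  # ~18%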
sciq-2758
multiple_choice
Some microorganisms can digest cellulose, breaking it down into what?
[ "glucose monomers", "gluclose polymers", "fructose polymers", "fructose monomers" ]
A
Relavent Documents: Document 0::: Fibrolytic bacteria constitute a group of microorganisms that are able to process complex plant polysaccharides thanks to their capacity to synthesize cellulolytic and hemicellulolytic enzymes. Polysaccharides are present in plant cellular cell walls in a compact fiber form where they are mainly composed of cellulose and hemicellulose. Fibrolytic enzymes, which are classified as cellulases, can hydrolyze the β (1 ->4) bonds in plant polysaccharides. Cellulase and hemicellulase (also known as xylanase) are the two main representatives of these enzymes. Biological characteristics Fibrolytic bacteria use glycolysis and the pentose phosphate pathway as the main metabolic routes to catabolize carbohydrates in order to obtain energy and carbon backbones. They use ammonia as the major and practically exclusive source of nitrogen, and they require several B-vitamins for their development. They often depend on other microorganisms to obtain some of their nutrients. Although their growth rate is considered slow, it can be enhanced in the presence of considerable amounts of short-chain fatty acids (isobutyric and isovaleric). These compounds are normally generated as a product of the amino acid fermentative activity of other microorganisms. Because of their habitat conditions, most fibrolytic bacteria are anaerobic. Cellulolytic communities Most fibrolytic bacteria are classified as Bacteroidota or Bacillota and include several bacterial species with diverse morphological and physiological characteristics. They are normally commensal species which have a symbiotic relationship with different insect and mammal species, constituting one of the main components of their gastrointestinal flora. In fact, in herbivores each milliliter of ruminal content can reach about 50 million of bacteria of a great variety of genera and species. . Given the importance of industrial processing of plant fibers in different fields, the genomic analysis of fibrolytic communities in the gastroi Document 1::: Fibrobacter succinogenes is a cellulolytic bacterium species in the genus Fibrobacter. It is present in the rumen of cattle. F. succinogenes is a gram negative, rod-shaped, obligate anaerobe that is a major contributor to cellulose digestion. Since its discovery in the 1950s, it has been studied for its role in herbivore digestion and cellulose fermentation, which can be utilized in biofuel production. History Fibrobacter succinogenes was isolated in 1954 by M.P. Bryant and R.N. Doetsch from bovine rumen at the University of Maryland. They isolated 8 different strains – S23, S61, S85, S111, S121, C2, M13, and M34, all of which belonged to one species – Bacteroides succinogenes. This species would later be renamed Fibrobacter succinogenes. S85 would soon become a model strain for research, and it continues to be representative of wild type species. Genome The genome of F. succinogenes is 3.84 Megabasepairs and is predicted to consist of 3085 open reading frames. Many of these genes encode for carbohydrate binding molecules, glycoside hydrolases, and other enzymes. Thirty-one genes are identified as cellulases. The genome also encodes for a number of proteins capable of breaking down sugars, but it lacks the machinery to transport and use all the products except for those derived from cellulose. 
Relationship to other bacteria Phylogenetic studies based RpoC and Gyrase B protein sequences, indicate that Fibrobacter succinogenes is closely related to the species from the phyla Bacteroidetes and Chlorobi. Fibrobacter succinogenes and the species from these two other phyla also branch in the same position based upon conserved signature indels in a number of important proteins. Lastly and most importantly, comparative genomic studies have identified two conserved signature indels (a 5-7 amino acid insert in the RpoC protein and a 13-16 amino acid insertion in serine hydroxymethyltransferase) and one signature protein (PG00081) that are uniquely shared by Fibrob Document 2::: Cellulose fibers () are fibers made with ethers or esters of cellulose, which can be obtained from the bark, wood or leaves of plants, or from other plant-based material. In addition to cellulose, the fibers may also contain hemicellulose and lignin, with different percentages of these components altering the mechanical properties of the fibers. The main applications of cellulose fibers are in the textile industry, as chemical filters, and as fiber-reinforcement composites, due to their similar properties to engineered fibers, being another option for biocomposites and polymer composites. History Cellulose was discovered in 1838 by the French chemist Anselme Payen, who isolated it from plant matter and determined its chemical formula. Cellulose was used to produce the first successful thermoplastic polymer, celluloid, by Hyatt Manufacturing Company in 1870. Production of rayon ("artificial silk") from cellulose began in the 1890s, and cellophane was invented in 1912. In 1893, Arthur D. Little of Boston, invented yet another cellulosic product, acetate, and developed it as a film. The first commercial textile uses for acetate in fiber form were developed by the Celanese Company in 1924. Hermann Staudinger determined the polymer structure of cellulose in 1920. The compound was first chemically synthesized (without the use of any biologically derived enzymes) in 1992, by Kobayashi and Shoda. Cellulose structure Cellulose is a polymer made of repeating glucose molecules attached end to end. A cellulose molecule may be from several hundred to over 10,000 glucose units long. Cellulose is similar in form to complex carbohydrates like starch and glycogen. These polysaccharides are also made from multiple subunits of glucose. The difference between cellulose and other complex carbohydrate molecules is how the glucose molecules are linked together. In addition, cellulose is a straight chain polymer, and each cellulose molecule is long and rod-like. This differs from starch Document 3::: Bacterial cellulose is an organic compound with the formula produced by certain types of bacteria. While cellulose is a basic structural material of most plants, it is also produced by bacteria, principally of the genera Acetobacter, Sarcina ventriculi and Agrobacterium. Bacterial, or microbial, cellulose has different properties from plant cellulose and is characterized by high purity, strength, moldability and increased water holding ability. In natural habitats, the majority of bacteria synthesize extracellular polysaccharides, such as cellulose, which form protective envelopes around the cells. While bacterial cellulose is produced in nature, many methods are currently being investigated to enhance cellulose growth from cultures in laboratories as a large-scale process. 
By controlling synthesis methods, the resulting microbial cellulose can be tailored to have specific desirable properties. For example, attention has been given to the bacteria Komagataeibacter xylinum due to its cellulose's unique mechanical properties and applications to biotechnology, microbiology, and materials science. Historically, bacterial cellulose has been limited to the manufacture of Nata de coco, a South-East Asian food product. With advances in the ability to synthesize and characterize bacterial cellulose, the material is being used for a wide variety of commercial applications including textiles, cosmetics, and food products, as well as medical applications. Many patents have been issued in microbial cellulose applications and several active areas of research are attempting to better characterize microbial cellulose and utilize it in new areas. History As a material, cellulose was first discovered in 1838 by Anselme Payen. Payen was able to isolate the cellulose from the other plant matter and chemically characterize it. In one of its first and most common industrial applications, cellulose from wood pulp was used to manufacture paper. It is ideal for displaying information in p
Document 4::: Lignocellulose refers to plant dry matter (biomass), so-called lignocellulosic biomass. It is the most abundantly available raw material on the Earth for the production of biofuels. It is composed of two kinds of carbohydrate polymers, cellulose and hemicellulose, and an aromatic-rich polymer called lignin. Any biomass rich in cellulose, hemicelluloses, and lignin is commonly referred to as lignocellulosic biomass. Each component has a distinct chemical behavior. Being a composite of three very different components makes the processing of lignocellulose challenging. The evolved resistance to degradation or even separation is referred to as recalcitrance. Overcoming this recalcitrance to produce useful, high-value products requires a combination of heat, chemicals, enzymes, and microorganisms. These carbohydrate-containing polymers contain different sugar monomers (six and five carbon sugars) and they are covalently bound to lignin. Lignocellulosic biomass can be broadly classified as virgin biomass, waste biomass, and energy crops. Virgin biomass includes plants. Waste biomass is produced as a low value byproduct of various industrial sectors such as agriculture (corn stover, sugarcane bagasse, straw etc.) and forestry (saw mill and paper mill discards). Energy crops are crops with a high yield of lignocellulosic biomass produced as a raw material for the production of second-generation biofuel; examples include switchgrass (Panicum virgatum) and Elephant grass. The biofuels generated from these energy crops are sources of sustainable energy. Chemical composition Lignocellulose consists of three components, each with properties that pose challenges to commercial applications. Lignin is a heterogeneous, highly crosslinked polymer akin to phenol-formaldehyde resins. It is derived from 3-4 monomers, the ratio of which varies from species to species. The crosslinking is extensive. Being rich in aromatics, lignin is hydrophobic and relatively rigid. Lignin confe
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Some microorganisms can digest cellulose, breaking it down into what? A. glucose monomers B. glucose polymers C. fructose polymers D. fructose monomers Answer:
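As a worked complement to the record above: complete enzymatic hydrolysis of cellulose's β(1→4) linkages yields glucose monomers, which the standard summary equation (standard carbohydrate chemistry, not quoted in the excerpts) makes explicit:

\[
(\mathrm{C_6H_{10}O_5})_n + n\,\mathrm{H_2O} \xrightarrow{\text{cellulases}} n\,\mathrm{C_6H_{12}O_6}
\]

Each cleaved linkage consumes one water molecule, so a long chain of n anhydroglucose units releases approximately n molecules of glucose: monomers, not polymers.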
sciq-11572
multiple_choice
Some metabolic pathways release what by breaking down complex molecules to simpler compounds?
[ "fat", "energy", "water", "hydrogen" ]
B
Relavent Documents: Document 0::: Catabolism () is the set of metabolic pathways that breaks down molecules into smaller units that are either oxidized to release energy or used in other anabolic reactions. Catabolism breaks down large molecules (such as polysaccharides, lipids, nucleic acids, and proteins) into smaller units (such as monosaccharides, fatty acids, nucleotides, and amino acids, respectively). Catabolism is the breaking-down aspect of metabolism, whereas anabolism is the building-up aspect. Cells use the monomers released from breaking down polymers to either construct new polymer molecules or degrade the monomers further to simple waste products, releasing energy. Cellular wastes include lactic acid, acetic acid, carbon dioxide, ammonia, and urea. The formation of these wastes is usually an oxidation process involving a release of chemical free energy, some of which is lost as heat, but the rest of which is used to drive the synthesis of adenosine triphosphate (ATP). This molecule acts as a way for the cell to transfer the energy released by catabolism to the energy-requiring reactions that make up anabolism. Catabolism is a destructive metabolism and anabolism is a constructive metabolism. Catabolism, therefore, provides the chemical energy necessary for the maintenance and growth of cells. Examples of catabolic processes include glycolysis, the citric acid cycle, the breakdown of muscle protein in order to use amino acids as substrates for gluconeogenesis, the breakdown of fat in adipose tissue to fatty acids, and oxidative deamination of neurotransmitters by monoamine oxidase. Catabolic hormones There are many signals that control catabolism. Most of the known signals are hormones and the molecules involved in metabolism itself. Endocrinologists have traditionally classified many of the hormones as anabolic or catabolic, depending on which part of metabolism they stimulate. The so-called classic catabolic hormones known since the early 20th century are cortisol, glucagon, and Document 1::: The term amphibolic () is used to describe a biochemical pathway that involves both catabolism and anabolism. Catabolism is a degradative phase of metabolism in which large molecules are converted into smaller and simpler molecules, which involves two types of reactions. First, hydrolysis reactions, in which catabolism is the breaking apart of molecules into smaller molecules to release energy. Examples of catabolic reactions are digestion and cellular respiration, where sugars and fats are broken down for energy. Breaking down a protein into amino acids, or a triglyceride into fatty acids, or a disaccharide into monosaccharides are all hydrolysis or catabolic reactions. Second, oxidation reactions involve the removal of hydrogens and electrons from an organic molecule. Anabolism is the biosynthesis phase of metabolism in which smaller simple precursors are converted to large and complex molecules of the cell. Anabolism has two classes of reactions. The first are dehydration synthesis reactions; these involve the joining of smaller molecules together to form larger, more complex molecules. These include the formation of carbohydrates, proteins, lipids and nucleic acids. The second are reduction reactions, in which hydrogens and electrons are added to a molecule. Whenever that is done, molecules gain energy. The term amphibolic was proposed by B. Davis in 1961 to emphasise the dual metabolic role of such pathways. 
These pathways are considered to be central metabolic pathways which provide, from catabolic sequences, the intermediates which form the substrate of the metabolic processes. Reactions exist as amphibolic pathway All the reactions associated with synthesis of biomolecule converge into the following pathway, viz., glycolysis, the Krebs cycle and the electron transport chain, exist as an amphibolic pathway, meaning that they can function anabolically as well as catabolically. Other important amphibolic pathways are the Embden-Meyerhof pathway, the pentos Document 2::: The metabolome refers to the complete set of small-molecule chemicals found within a biological sample. The biological sample can be a cell, a cellular organelle, an organ, a tissue, a tissue extract, a biofluid or an entire organism. The small molecule chemicals found in a given metabolome may include both endogenous metabolites that are naturally produced by an organism (such as amino acids, organic acids, nucleic acids, fatty acids, amines, sugars, vitamins, co-factors, pigments, antibiotics, etc.) as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism. In other words, there is both an endogenous metabolome and an exogenous metabolome. The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics. Origins The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the m Document 3::: In molecular biology, biosynthesis is a multi-step, enzyme-catalyzed process where substrates are converted into more complex products in living organisms. In biosynthesis, simple compounds are modified, converted into other compounds, or joined to form macromolecules. This process often consists of metabolic pathways. Some of these biosynthetic pathways are located within a single cellular organelle, while others involve enzymes that are located within multiple cellular organelles. Examples of these biosynthetic pathways include the production of lipid membrane components and nucleotides. Biosynthesis is usually synonymous with anabolism. The prerequisite elements for biosynthesis include: precursor compounds, chemical energy (e.g. ATP), and catalytic enzymes which may need coenzymes (e.g. NADH, NADPH). These elements create monomers, the building blocks for macromolecules. 
Some important biological macromolecules include: proteins, which are composed of amino acid monomers joined via peptide bonds, and DNA molecules, which are composed of nucleotides joined via phosphodiester bonds. Properties of chemical reactions Biosynthesis occurs due to a series of chemical reactions. For these reactions to take place, the following elements are necessary: Precursor compounds: these compounds are the starting molecules or substrates in a reaction. These may also be viewed as the reactants in a given chemical process. Chemical energy: chemical energy can be found in the form of high energy molecules. These molecules are required for energetically unfavorable reactions. Furthermore, the hydrolysis of these compounds drives a reaction forward. High energy molecules, such as ATP, have three phosphates. Often, the terminal phosphate is split off during hydrolysis and transferred to another molecule. Catalysts: these may be for example metal ions or coenzymes and they catalyze a reaction by increasing the rate of the reaction and lowering the activation energy. In the sim Document 4::: Primary nutritional groups are groups of organisms, divided in relation to the nutrition mode according to the sources of energy and carbon, needed for living, growth and reproduction. The sources of energy can be light or chemical compounds; the sources of carbon can be of organic or inorganic origin. The terms aerobic respiration, anaerobic respiration and fermentation (substrate-level phosphorylation) do not refer to primary nutritional groups, but simply reflect the different use of possible electron acceptors in particular organisms, such as O2 in aerobic respiration, or nitrate (), sulfate () or fumarate in anaerobic respiration, or various metabolic intermediates in fermentation. Primary sources of energy Phototrophs absorb light in photoreceptors and transform it into chemical energy. Chemotrophs release chemical energy. The freed energy is stored as potential energy in ATP, carbohydrates, or proteins. Eventually, the energy is used for life processes such as moving, growth and reproduction. Plants and some bacteria can alternate between phototrophy and chemotrophy, depending on the availability of light. Primary sources of reducing equivalents Organotrophs use organic compounds as electron/hydrogen donors. Lithotrophs use inorganic compounds as electron/hydrogen donors. The electrons or hydrogen atoms from reducing equivalents (electron donors) are needed by both phototrophs and chemotrophs in reduction-oxidation reactions that transfer energy in the anabolic processes of ATP synthesis (in heterotrophs) or biosynthesis (in autotrophs). The electron or hydrogen donors are taken up from the environment. Organotrophic organisms are often also heterotrophic, using organic compounds as sources of both electrons and carbon. Similarly, lithotrophic organisms are often also autotrophic, using inorganic sources of electrons and CO2 as their inorganic carbon source. Some lithotrophic bacteria can utilize diverse sources of electrons, depending on the avail The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Some metabolic pathways release what by breaking down complex molecules to simpler compounds? A. fat B. energy C. water D. hydrogen Answer:
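A standard worked example of such an energy-releasing (catabolic) pathway, taken from textbook biochemistry rather than from the excerpts, is the overall equation for aerobic catabolism of glucose:

\[
\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \longrightarrow 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{energy}
\]

Part of the released free energy is conserved as ATP and the remainder is lost as heat, exactly the pattern the catabolism excerpt describes.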
sciq-5606
multiple_choice
What is the main function of the cardiovascular system?
[ "respiration", "digestion", "to transport", "implanatation" ]
C
Relavent Documents: Document 0::: The blood circulatory system is a system of organs that includes the heart, blood vessels, and blood, which is circulated throughout the entire body of a human or other vertebrate. It includes the cardiovascular system, or vascular system, which consists of the heart and blood vessels (from Greek kardia meaning heart, and from Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with the circulatory system. The network of blood vessels comprises the great vessels of the heart, including large elastic arteries and large veins; other arteries, smaller arterioles, capillaries that join with venules (small veins), and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Some invertebrates such as arthropods have an open circulatory system. Diploblasts such as sponges and comb jellies lack a circulatory system. Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH. In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the bloo
Document 1::: Cardiovascular physiology is the study of the cardiovascular system, specifically addressing the physiology of the heart ("cardio") and blood vessels ("vascular"). These subjects are sometimes addressed separately, under the names cardiac physiology and circulatory physiology. Although the different aspects of cardiovascular physiology are closely interrelated, the subject is still usually divided into several subtopics. Heart Cardiac output (= heart rate * stroke volume. Can also be calculated with the Fick principle or the palpating method.) Stroke volume (= end-diastolic volume − end-systolic volume) Ejection fraction (= stroke volume / end-diastolic volume) Inotropic, chronotropic, and dromotropic states Electrical conduction system of the heart Electrocardiogram Cardiac marker Cardiac action potential Frank–Starling law of the heart Wiggers diagram Pressure volume diagram Regulation of blood pressure Baroreceptor Baroreflex Renin–angiotensin system Renin Angiotensin Juxtaglomerular apparatus Aortic body and carotid body Autoregulation Cerebral Autoregulation Hemodynamics Under most circumstances, the body attempts to maintain a steady mean arterial pressure.
When there is a major and immediate decrease (such as that due to hemorrhage or standing up), the body can increase the following: Heart rate Total peripheral resistance (primarily due to vasoconstriction of arteries) Inotropic state In turn, this can have a significant impact upon several other variables: Stroke volume Cardiac output Pressure Pulse pressure (systolic pressure − diastolic pressure) Mean arterial pressure (usually approximated with diastolic pressure + one-third of the pulse pressure)
Document 2::: Cardiophysics is an interdisciplinary science that stands at the junction of cardiology and medical physics, with researchers using the methods of, and theories from, physics to study the cardiovascular system at different levels of its organisation, from the molecular scale to whole organisms. Being formed historically as part of systems biology, cardiophysics is designed to reveal connections between the physical mechanisms underlying the organization of the cardiovascular system and the biological features of its functioning. Zbigniew R. Struzik appears to have been the first author to use the term in a scientific publication, in 2004. The term cardiovascular physics is also used interchangeably. See also Medical physics Important publications in medical physics Biomedicine Biomedical engineering Physiome Nanomedicine
Document 3::: The Cardiac Electrophysiology Society (CES) is an international society of basic and clinical scientists and physicians interested in cardiac electrophysiology and arrhythmias. The Cardiac Electrophysiology Society's founder was George Burch in 1949 and its current president is Jonathan C. Makielski, M.D.
Document 4::: Lucien Campeau (June 20, 1927 – March 15, 2010) was a Canadian cardiologist. He was a full professor at the Université de Montréal. He is best known for performing the world's first transradial coronary angiogram. Campeau was one of the founding staff of the Montreal Heart Institute, joining in 1957. He is also well known for developing the Canadian Cardiovascular Society grading of angina pectoris. Education Campeau received his M.D. degree from the University of Laval in 1953 and completed a fellowship in Cardiology at Johns Hopkins Hospital from 1956 to 1957. He later became a professor at University of Montreal in 1961 and was one of the co-founders of the Montreal Heart Institute. In his lifetime, Campeau was awarded the Research Achievement Award of the Canadian Cardiovascular Society. In 2004, he was named "Cardiologue émérite 2004" by the Association des cardiologues du Québec. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the main function of the cardiovascular system? A. respiration B. digestion C. to transport D. implantation Answer:
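The identities listed under "Heart" and "Hemodynamics" in the cardiovascular physiology excerpt translate directly into a few lines of Python. This is an illustrative sketch with made-up example values, not patient data; note that the source truncates the mean-arterial-pressure formula, and the one-third-pulse-pressure completion used below is the standard approximation, supplied by us.

# Hemodynamic identities from the excerpt above; example values are
# illustrative only.
def stroke_volume(edv, esv):            # mL
    return edv - esv

def ejection_fraction(edv, esv):        # dimensionless fraction
    return stroke_volume(edv, esv) / edv

def cardiac_output(hr, sv_ml):          # L/min from bpm and mL
    return hr * sv_ml / 1000.0

def mean_arterial_pressure(sbp, dbp):   # mmHg; assumed standard
    return dbp + (sbp - dbp) / 3.0      # one-third-pulse-pressure form

sv = stroke_volume(120, 50)             # 70 mL
print(ejection_fraction(120, 50))       # ~0.58
print(cardiac_output(70, sv))           # 4.9 L/min
print(mean_arterial_pressure(120, 80))  # ~93.3 mmHg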
sciq-6686
multiple_choice
Carboxylic acids have an acidic hydrogen atom, but esters do not. What do esters have in place of an acidic hydrogen atom?
[ "carbonation group", "crystallization group", "synthesis group", "hydrocarbon group" ]
D
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: In biochemistry, an esterase is a class of enzyme that splits esters into an acid and an alcohol in a chemical reaction with water called hydrolysis (and as such, it is a type of hydrolase). A wide range of different esterases exist that differ in their substrate specificity, their protein structure, and their biological function. Document 2::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. 
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 3::: The actuarial credentialing and exam process usually requires passing a rigorous series of professional examinations, most often taking several years in total, before one can become recognized as a credentialed actuary. In some countries, such as Denmark, most study takes place in a university setting. In others, such as the U.S., most study takes place during employment through a series of examinations. In the UK, and countries based on its process, there is a hybrid university-exam structure. Australia The education system in Australia is divided into three components: an exam-based curriculum; a professionalism course; and work experience. The system is governed by the Institute of Actuaries of Australia. The exam-based curriculum is in three parts. Part I relies on exemptions from an accredited under-graduate degree from either Bond University, Monash University, Macquarie University, University of New South Wales, University of Melbourne, Australian National University or Curtin University. The courses cover subjects including finance, financial mathematics, economics, contingencies, demography, models, probability and statistics. Students may also gain exemptions by passing the exams of the Institute of Actuaries in London. Part II is the Actuarial control cycle and is also offered by each of the universities above. Part III consists of four half-year courses of which two are compulsory and the other two allow specialization. To become an Associate, one needs to complete Part I and Part II of the accreditation process, perform 3 years of recognized work experience, and complete a professionalism course. To become a Fellow, candidates must complete Part I, II, III, and take a professionalism course. Work experience is not required, however, as the Institute deems that those who have successfully completed Part III have shown enough level of professionalism. China Actuarial exams were suspended in 2014 but reintroduced in 2023. Denmark In Denmark it normal Document 4::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. 
A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions relating, respectively, to ecological concepts (such as population studies and general ecology) on the E test and to molecular concepts (such as DNA structure, translation, and biochemistry) on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Carboxylic acids have an acidic hydrogen atom, but esters do not. What do esters have in place of an acidic hydrogen atom? A. carbonation group B. crystallization group C. synthesis group D. hydrocarbon group Answer:
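The relationship behind the ester question above can be written out explicitly. Using conventional R/R′ shorthand (our notation, consistent with the esterase excerpt), the acidic hydrogen of a carboxylic acid is replaced by a hydrocarbon group R′ in the ester, and an esterase reverses this by hydrolysis:

\[
\underbrace{\mathrm{R{-}CO{-}O{-}H}}_{\text{carboxylic acid}}
\quad\longrightarrow\quad
\underbrace{\mathrm{R{-}CO{-}O{-}R'}}_{\text{ester}};
\qquad
\mathrm{RCOOR'} + \mathrm{H_2O} \xrightarrow{\text{esterase}} \mathrm{RCOOH} + \mathrm{R'OH}
\]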
sciq-3359
multiple_choice
What is the name for the sinking of the dense, salty seawater in cold climates?
[ "tidal activity", "cyclones", "downwelling", "jet stream" ]
C
Relavent Documents: Document 0::: Upwelling is an oceanographic phenomenon that involves wind-driven motion of dense, cooler, and usually nutrient-rich water from deep water towards the ocean surface. It replaces the warmer and usually nutrient-depleted surface water. The nutrient-rich upwelled water stimulates the growth and reproduction of primary producers such as phytoplankton. The biomass of phytoplankton and the presence of cool water in those regions allow upwelling zones to be identified by cool sea surface temperatures (SST) and high concentrations of chlorophyll a. The increased availability of nutrients in upwelling regions results in high levels of primary production and thus fishery production. Approximately 25% of the total global marine fish catches come from five upwellings, which occupy only 5% of the total ocean area. Upwellings that are driven by coastal currents or diverging open ocean have the greatest impact on nutrient-enriched waters and global fishery yields. Mechanisms The three main drivers that work together to cause upwelling are wind, Coriolis effect, and Ekman transport. They operate differently for different types of upwelling, but the general effects are the same. In the overall process of upwelling, winds blow across the sea surface at a particular direction, which causes a wind-water interaction. As a result of the wind, the water has transported a net of 90 degrees from the direction of the wind due to Coriolis forces and Ekman transport. Ekman transport causes the surface layer of water to move at about a 45 degree angle from the direction of the wind, and the friction between that layer and the layer beneath it causes the successive layers to move in the same direction. This results in a spiral of water moving down the water column. Then, it is the Coriolis forces that dictate which way the water will move; in the Northern hemisphere, the water is transported to the right of the direction of the wind. In the Southern Hemisphere, the water is transported Document 1::: Thermohaline circulation (THC) is a part of the large-scale ocean circulation that is driven by global density gradients created by surface heat and freshwater fluxes. The adjective thermohaline derives from thermo- referring to temperature and referring to salt content, factors which together determine the density of sea water. Wind-driven surface currents (such as the Gulf Stream) travel polewards from the equatorial Atlantic Ocean, cooling en route, and eventually sinking at high latitudes (forming North Atlantic Deep Water). This dense water then flows into the ocean basins. While the bulk of it upwells in the Southern Ocean, the oldest waters (with a transit time of about 1000 years) upwell in the North Pacific. Extensive mixing therefore takes place between the ocean basins, reducing differences between them and making the Earth's oceans a global system. The water in these circuits transport both energy (in the form of heat) and mass (dissolved solids and gases) around the globe. As such, the state of the circulation has a large impact on the climate of the Earth. The thermohaline circulation is sometimes called the ocean conveyor belt, the great ocean conveyor, or the global conveyor belt, coined by climate scientist Wallace Smith Broecker. On occasion, it is used to refer to the meridional overturning circulation (often abbreviated as MOC). 
The term MOC is more accurate and well defined, as it is difficult to separate the part of the circulation which is driven by temperature and salinity alone as opposed to other factors such as the wind and tidal forces. Moreover, temperature and salinity gradients can also lead to circulation effects that are not included in the MOC itself. The Atlantic Meridional Overturning circulation (AMOC) is part of a global thermohaline circulation. Overview The movement of surface currents pushed by the wind is fairly intuitive. For example, the wind easily produces ripples on the surface of a pond. Thus, the deep ocean—devo Document 2::: In oceanic biogeochemistry, the f-ratio is the fraction of total primary production fuelled by nitrate (as opposed to that fuelled by other nitrogen compounds such as ammonium). The ratio was originally defined by Richard Eppley and Bruce Peterson in one of the first papers estimating global oceanic production. This fraction was originally believed significant because it appeared to directly relate to the sinking (export) flux of organic marine snow from the surface ocean by the biological pump. However, this interpretation relied on the assumption of a strong depth-partitioning of a parallel process, nitrification, that more recent measurements has questioned. Overview Gravitational sinking of organisms (or the remains of organisms) transfers particulate organic carbon from the surface waters of the ocean to its deep interior. This process is known as the biological pump, and quantifying it is of interest to scientists because it is an important aspect of the Earth's carbon cycle. Essentially, this is because carbon transported to the deep ocean is isolated from the atmosphere, allowing the ocean to act as a reservoir of carbon. This biological mechanism is accompanied by a physico-chemical mechanism known as the solubility pump which also acts to transfer carbon to the ocean's deep interior. Measuring the flux of sinking material (so-called marine snow) is usually done by deploying sediment traps which intercept and store material as it sinks down the water column. However, this is a relatively difficult process, since traps can be awkward to deploy or recover, and they must be left in situ over a long period to integrate the sinking flux. Furthermore, they are known to experience biases and to integrate horizontal as well as vertical fluxes because of water currents. For this reason, scientists are interested in ocean properties that can be more easily measured, and that act as a proxy for the sinking flux. The f-ratio is one such proxy. "New" and "rege Document 3::: Region of Freshwater Influence (ROFI) is a region in coastal sea where stratification is governed by the local input of freshwater discharge from the coastal source, while the role of the seasonal input of buoyancy from atmospheric heating is much smaller. Background ROFI and river plume are similar terms related to water masses formed as a result of mixing of river discharge and sea water. The difference between river plumes and ROFI's consists in their spatial scales and freshwater residence time. River plumes are regarded as water masses formed as a result of transformation of freshwater discharge in coastal sea on diurnal to synoptic time scales, while ROFI’s reproduce transformation of freshwater discharge on seasonal to annual time scales. A river plume embedded into a ROFI reproduce a continuous process of transformation of freshwater discharge. 
Initially, river discharge enters the shelf sea from a river mouth and forms a sub-mesoscale (with spatial extents ~1-10 km) or mesoscale (with spatial extents ~10-100 km) water mass referred to as a river plume. Salinity within a plume is significantly lower than that of surrounding sea water. Structure and dynamical characteristics within a river plume are strongly inhomogeneous. In particular, salinity and velocity fields in the vicinity of a freshwater source are significantly different as compared to the outer parts of a plume. A river plume is spreading and mixing with ambient saline sea water, which results in the transformation of a plume, but also influences the hydrological structure of the ambient sea. Strength and extent of this influence mainly depend on the volume of freshwater discharge and varies from negligible impact of small plumes formed by rivers with low discharge rates to the formation of stable freshened water masses in the upper ocean by the World’s largest rivers on wide coastal and shelf areas. The latter water masses with spatial extents on the order of hundreds of kilometers are referre Document 4::: The borders of the oceans are the limits of Earth's oceanic waters. The definition and number of oceans can vary depending on the adopted criteria. The principal divisions (in descending order of area) of the five oceans are the Pacific Ocean, Atlantic Ocean, Indian Ocean, Southern (Antarctic) Ocean, and Arctic Ocean. Smaller regions of the oceans are called seas, gulfs, bays, straits, and other terms. Geologically, an ocean is an area of oceanic crust covered by water. See also the list of seas article for the seas included in each ocean area. Overview Though generally described as several separate oceans, the world's oceanic waters constitute one global, interconnected body of salt water sometimes referred to as the World Ocean or Global Ocean. This concept of a continuous body of water with relatively free interchange among its parts is of fundamental importance to oceanography. The major oceanic divisions are defined in part by the continents, various archipelagos, and other criteria. The principal divisions (in descending order of area) are the: Pacific Ocean, Atlantic Ocean, Indian Ocean, Southern (Antarctic) Ocean, and Arctic Ocean. Smaller regions of the oceans are called seas, gulfs, bays, straits, and other terms. Geologically, an ocean is an area of oceanic crust covered by water. Oceanic crust is the thin layer of solidified volcanic basalt that covers the Earth's mantle. Continental crust is thicker but less dense. From this perspective, the Earth has three oceans: the World Ocean, the Caspian Sea, and the Black Sea. The latter two were formed by the collision of Cimmeria with Laurasia. The Mediterranean Sea is at times a discrete ocean because tectonic plate movement has repeatedly broken its connection to the World Ocean through the Strait of Gibraltar. The Black Sea is connected to the Mediterranean through the Bosporus, but the Bosporus is a natural canal cut through continental rock some 7,000 years ago, rather than a piece of oceanic sea floo The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the name for the sinking of the dense, salty seawater in cold climates? A. tidal activity B. cyclones C. downwelling D. jet stream Answer:
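The f-ratio excerpt above admits a compact formula. The symbols below are our own shorthand, but the definition matches the excerpt: the fraction of total primary production fuelled by nitrate ("new" production) as opposed to recycled nitrogen such as ammonium.

\[
f = \frac{P_{\text{new}}}{P_{\text{new}} + P_{\text{regenerated}}}
\]

An f near 1 indicates production dominated by nitrate supplied from depth, while an f near 0 indicates production sustained mostly by recycled ammonium.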
sciq-1447
multiple_choice
What is used to darken images of the sky?
[ "polarizing filters", "uv filter", "close-up filter", "neutral density filter" ]
A
Relavent Documents: Document 0::: Remote sensing is the acquisition of information about an object or phenomenon without making physical contact with the object, in contrast to in situ or on-site observation. The term is applied especially to acquiring information about Earth and other planets. Remote sensing is used in numerous fields, including geophysics, geography, land surveying and most Earth science disciplines (e.g. exploration geophysics, hydrology, ecology, meteorology, oceanography, glaciology, geology); it also has military, intelligence, commercial, economic, planning, and humanitarian applications, among others. In current usage, the term remote sensing generally refers to the use of satellite- or aircraft-based sensor technologies to detect and classify objects on Earth. It includes the surface and the atmosphere and oceans, based on propagated signals (e.g. electromagnetic radiation). It may be split into "active" remote sensing (when a signal is emitted by a satellite or aircraft to the object and its reflection detected by the sensor) and "passive" remote sensing (when the reflection of sunlight is detected by the sensor). Overview Remote sensing can be divided into two types of methods: Passive remote sensing and Active remote sensing. Passive sensors gather radiation that is emitted or reflected by the object or surrounding areas. Reflected sunlight is the most common source of radiation measured by passive sensors. Examples of passive remote sensors include film photography, infrared, charge-coupled devices, and radiometers. Active collection, on the other hand, emits energy in order to scan objects and areas whereupon a sensor then detects and measures the radiation that is reflected or backscattered from the target. RADAR and LiDAR are examples of active remote sensing where the time delay between emission and return is measured, establishing the location, speed and direction of an object. Remote sensing makes it possible to collect data of dangerous or inaccessible areas Document 1::: In infrared astronomy, the L band is an atmospheric transmission window centred on 3.5 micrometres (in the mid-infrared). Electromagnetic spectrum Infrared imaging Document 2::: Water Remote Sensing is the observation of water bodies such as lakes, oceans, and rivers from a distance in order to describe their color, state of ecosystem health, and productivity. Water remote sensing studies the color of water through the observation of the spectrum of water leaving radiance. From the spectrum of color coming from the water, the concentration of optically active components of the upper layer of the water body can be estimated via specific algorithms. Water quality monitoring by remote sensing and close-range instruments has obtained considerable attention since the founding of EU Water Framework Directive. Overview Water remote sensing instruments (sensors) allow scientists to record the color of a water body, which provides information on the presence and abundance of optically active natural water components (plankton, sediments, detritus, or dissolved substances). The water color spectrum as seen by a satellite sensor is defined as an apparent optical property (AOP) of the water. This means that the color of the water is influenced by the angular distribution of the light field and by the nature and quantity of the substances in the medium, in this case, water. 
Thus, the values of remote sensing reflectance, an AOP, will change with changes in the optical properties and concentrations of the optically active substances in the water. Properties and concentrations of substances in the water are known as the inherent optical properties or IOPs. IOPs are independent from the angular distribution of light (the "light field") but they are dependent on the type and amount of substances that are present in the water. For instance, the diffuse attenuation coefficient of downwelling irradiance, Kd (often used as an index of water clarity or ocean turbidity) is defined as an AOP (or quasi-AOP), while the absorption coefficient and the scattering coefficient of the water are defined as IOPs. There are two different approaches to determine the concent Document 3::: Aeronomy is the scientific study of the upper atmosphere of the Earth and corresponding regions of the atmospheres of other planets. It is a branch of both atmospheric chemistry and atmospheric physics. Scientists specializing in aeronomy, known as aeronomers, study the motions and chemical composition and properties of the Earth's upper atmosphere and regions of the atmospheres of other planets that correspond to it, as well as the interaction between upper atmospheres and the space environment. In atmospheric regions aeronomers study, chemical dissociation and ionization are important phenomena. History The mathematician Sydney Chapman introduced the term aeronomy to describe the study of the Earth's upper atmosphere in 1946 in a letter to the editor of Nature entitled "Some Thoughts on Nomenclature." The term became official in 1954 when the International Union of Geodesy and Geophysics adopted it. "Aeronomy" later also began to refer to the study of the corresponding regions of the atmospheres of other planets. Branches Aeronomy can be divided into three main branches: terrestrial aeronomy, planetary aeronomy, and comparative aeronomy. Terrestrial aeronomy Terrestrial aeronomy focuses on the Earth's upper atmosphere, which extends from the stratopause to the atmosphere's boundary with outer space and is defined as consisting of the mesosphere, thermosphere, and exosphere and their ionized component, the ionosphere. Terrestrial aeronomy contrasts with meteorology, which is the scientific study of the Earth's lower atmosphere, defined as the troposphere and stratosphere. Although terrestrial aeronomy and meteorology once were completely separate fields of scientific study, cooperation between terrestrial aeronomers and meteorologists has grown as discoveries made since the early 1990s have demonstrated that the upper and lower atmospheres have an impact on one another's physics, chemistry, and biology. Terrestrial aeronomers study atmospheric tides and upper- Document 4::: The Electro-Optical Systems Atmospheric Effects Library (EOSAEL) was developed in 1979 by the U.S. Army Atmospheric Sciences Laboratory, which later became a part of the U.S. Army Research Laboratory. EOSAEL was a library of theoretical, semi-empirical, and empirical computer models that described various aspects of atmospheric effects in battlefield environments. As of 1999, EOSAEL consisted of 22 models. Background EOSAEL was focused on weather effects and how weather impacts military technology. The battlefield environment includes many sources of aerosols and particulates, including chemical/biological agents, smoke, dust, and chaff. 
Weather in these environments impacts the functions of military technology, specifically electro-optical devices used for target acquisition. A need for standard tools to facilitate system performance analyses and weather impact decision aids led to development of standard algorithms for modeling efforts, which became a part of EOSAEL. Description The EOSAEL modules provide transmittance and radiance calculations through gases, natural aerosols, battlefield aerosols, smoke, haze, fog, and clouds for bandpass and laser propagation. Its operating system is Microsoft Windows 3.1, a graphical display operating system which gives a common interface to hardware. EOSAEL models provide the visible and near-infrared (0.2–2.0 μm), mid-infrared (3.0–5.0 μm), far-infrared (8.0–12.0 μm), and millimeter wave (10–350 GHz) regions of the spectrum, plus 53 laser lines. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is used to darken images of the sky? A. polarizing filters B. uv filter C. close-up filter D. neutral density filter Answer:
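The water remote sensing excerpt above introduces the diffuse attenuation coefficient Kd as an index of water clarity. The sketch below estimates Kd from two downwelling-irradiance readings using the conventional exponential decay model Ed(z) = Ed(0)·exp(−Kd·z); the model form, function name, and numbers are standard assumptions of ours, not code or data from the source.

import math

# Estimate the diffuse attenuation coefficient Kd (in 1/m) from
# downwelling irradiance measured at two depths, assuming the
# exponential decay model Ed(z) = Ed(0) * exp(-Kd * z).
def kd_from_profile(ed_shallow, ed_deep, z_shallow_m, z_deep_m):
    return math.log(ed_shallow / ed_deep) / (z_deep_m - z_shallow_m)

# Illustrative numbers: irradiance falls from 100 to 30 units
# between 1 m and 10 m depth.
print(kd_from_profile(100.0, 30.0, 1.0, 10.0))  # ~0.13 per metre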
sciq-1872
multiple_choice
What substance do developing seeds produce which promotes fruit growth?
[ "pepsin", "xenon", "auxin", "interferon" ]
C
Relavent Documents: Document 0::: A seedless fruit is a fruit developed to possess no mature seeds. Since eating seedless fruits is generally easier and more convenient, they are considered commercially valuable. Most commercially produced seedless fruits have been developed from plants whose fruits normally contain numerous relatively large hard seeds distributed throughout the flesh of the fruit. Varieties Common varieties of seedless fruits include watermelons, tomatoes, and grapes (such as Termarina rossa). Additionally, there are numerous seedless citrus fruits, such as oranges, lemons and limes. A recent development over the last twenty years has been that of seedless sweet peppers (Capsicum annuum). The seedless plant combines male sterility in the pepper plant (commonly occurring) with the ability to set seedless fruits (a natural fruit-setting without fertilization). In male sterile plants, the parthenocarpy expresses itself only sporadically on the plant with deformed fruits. It has been reported that plant hormones provided by the ovary seed (such as auxins and gibberellins) promote fruit set and growth to produce seedless fruits. Initially, without seeds in the fruit, vegetative propagation was essential. However, now – as with seedless watermelon – seedless peppers can be grown from seeds. Biological description Seedless fruits can develop in one of two ways: either the fruit develops without fertilization (parthenocarpy), or pollination triggers fruit development, but the ovules or embryos abort without producing mature seeds (stenospermocarpy). Seedless banana and watermelon fruits are produced on triploid plants, whose three sets of chromosomes make it very unlikely for meiosis to successfully produce spores and gametophytes. This is because one of the three copies of each chromosome cannot pair with another appropriate chromosome before separating into daughter cells, so these extra third copies end up randomly distributed between the two daughter cells from meiosis 1, resul Document 1::: Researchers have shown that the accumulation (or lack of) of prunasin and amygdalin in the almond kernel is responsible for sweet and bitter genotypes. Because amygdalin is responsible for the bitter almond taste, almond growers have selected genotypes which minimize the biosynthesis of amygdalin. The CYP enzymes responsible for generation of prunasin are conserved across Prunus species. There is a correlation between high concentration of prunasin in the vegetative regions of the plant and the sweetness of the almond, which is relevant to the almond agricultural industry. In almonds, the amygdalin biosynthetic genes are expressed at Document 2::: RediRipe is a technology created at the University of Arizona which detects the production of ethylene, a natural ripening hormone, and displaying that detection by means of a color-changing sticker that changes from white to blue. The technology was created in the lab of Mark Riley at the University of Arizona. In conjunction with the Eller College of Management's McGuire Center for Entrepreneurship, the technology was being developed into a viable business that will assist the apple and pear industries in their efforts to improve their efficiency by integrating technology into their age-old processes. Additionally, this technology has potential on other climacteric fruits which emit ethylene as they ripen. Document 3::: The horticulture industry embraces the production, processing and shipping of and the market for fruits and vegetables. 
As such it is a sector of agribusiness and industrialized agriculture. Industrialized horticulture sometimes also includes the floriculture industry and production and trade of ornamental plants. Among the most important fruits are: bananas Semi-tropical fruits like lychee, guava or tamarillo Citrus fruits soft fruits (berries) apples stone fruits Important vegetables include: Potatoes Sweet potatoes Tomatoes Onions and Cabbage In 2013 global fruit production was estimated at . Global vegetable production (including melons) was estimated at with China and India being the two top producing countries. Value chain The horticultural value chain includes: Inputs: elements needed for production; seeds, fertilizers, agrochemicals, farm equipment, irrigation equipment, GMO technology Production for export: includes fruit and vegetables production and all processes related to growth and harvesting; planting, weeding, spraying, picking Packing and cold storage: grading, washing, trimming, chopping, mixing, packing, labeling, blast chilling Processed fruit and vegetables: dried, frozen, preserved, juices, pulps; mostly for increasing shelf life Distribution and marketing: supermarkets, small scale retailers, wholesalers, food service Companies Fruit Chiquita Brands International Del Monte Foods Dole Food Company Genetically modified crops / GMO Monsanto/Bayer Document 4::: Seed predation, often referred to as granivory, is a type of plant-animal interaction in which granivores (seed predators) feed on the seeds of plants as a main or exclusive food source, in many cases leaving the seeds damaged and not viable. Granivores are found across many families of vertebrates (especially mammals and birds) as well as invertebrates (mainly insects); thus, seed predation occurs in virtually all terrestrial ecosystems. Seed predation is commonly divided into two distinctive temporal categories, pre-dispersal and post-dispersal predation, which affect the fitness of the parental plant and the dispersed offspring (the seed), respectively. Mitigating pre- and post-dispersal predation may involve different strategies. To counter seed predation, plants have evolved both physical defenses (e.g. shape and toughness of the seed coat) and chemical defenses (secondary compounds such as tannins and alkaloids). However, as plants have evolved seed defenses, seed predators have adapted to plant defenses (e.g., ability to detoxify chemical compounds). Thus, many interesting examples of coevolution arise from this dynamic relationship. Seeds and their defenses Plant seeds are important sources of nutrition for animals across most ecosystems. Seeds contain food storage organs (e.g., endosperm) that provide nutrients to the developing plant embryo (cotyledon). This makes seeds an attractive food source for animals because they are a highly concentrated and localized nutrient source in relation to other plant parts. Seeds of many plants have evolved a variety of defenses to deter predation. Seeds are often contained inside protective structures or fruit pulp that encapsulate seeds until they are ripe. Other physical defenses include spines, hairs, fibrous seed coats and hard endosperm. Seeds, especially in arid areas, may have a mucilaginous seed coat that can glue soil to seed hiding it from granivores. Some seeds have evolved strong anti-herbivore chemical The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. 
What substance do developing seeds produce which promotes fruit growth? A. pepsin B. xenon C. auxin D. interferon Answer:
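A back-of-envelope illustration of the triploidy argument in Document 0 above (seedless watermelon and banana): if each chromosome's unpaired third copy segregates independently and at random at meiosis I, a balanced outcome is already very unlikely for realistic chromosome numbers. The sketch below is a toy model, not from the source; the watermelon haploid number n = 11 is standard, and the independent 50/50 segregation of each extra copy is a simplifying assumption.

```python
# Toy model (not from the source): probability that meiosis I in a triploid
# yields balanced daughter cells, assuming each chromosome's unpaired third
# copy goes independently and at random to one of the two poles.
def p_balanced_meiosis(n_chromosomes: int) -> float:
    # A balanced split requires all n extra copies at the same pole;
    # there are 2 poles, each reached with probability (1/2)^n.
    return 2 * (0.5 ** n_chromosomes)

if __name__ == "__main__":
    n = 11  # haploid chromosome number of watermelon (2n = 22)
    print(f"P(balanced segregation) ~ {p_balanced_meiosis(n):.2e}")  # ~9.8e-04
```

Even this optimistic toy number (about 0.1%) shows why triploids are effectively sterile, consistent with the snippet's claim.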
sciq-6464
multiple_choice
Algae are much simpler than protozoa. They are aquatic and contain this?
[ "chlorophyll", "cloning factor", "sporozoa", "testes" ]
A
Relavent Documents: Document 0::: Eustigmatophytes are a small group (17 genera; ~107 species) of eukaryotic forms of algae that includes marine, freshwater and soil-living species. All eustigmatophytes are unicellular, with coccoid cells and polysaccharide cell walls. Eustigmatophytes contain one or more yellow-green chloroplasts, which contain chlorophyll a and the accessory pigments violaxanthin and β-carotene. Eustigmatophyte zoids (gametes) possess a single or pair of flagella, originating from the apex of the cell. Unlike other heterokontophytes, eustigmatophyte zoids do not have typical photoreceptive organelles (or eyespots); instead an orange-red eyespot outside a chloroplast is located at the anterior end of the zoid. Ecologically, eustigmatophytes occur as photosynthetic autotrophs across a range of systems. Most eustigmatophyte genera live in freshwater or in soil, although Nannochloropsis contains marine species of picophytoplankton (2–4 μm). The class was erected to include some algae previously classified in the Xanthophyceae. Document 1::: Algae (, ; : alga ) is an informal term for a large and diverse group of photosynthetic, eukaryotic organisms. It is a polyphyletic grouping that includes species from multiple distinct clades. Included organisms range from unicellular microalgae, such as Chlorella, Prototheca and the diatoms, to multicellular forms, such as the giant kelp, a large brown alga which may grow up to in length. Most are aquatic and lack many of the distinct cell and tissue types, such as stomata, xylem and phloem that are found in land plants. The largest and most complex marine algae are called seaweeds, while the most complex freshwater forms are the Charophyta, a division of green algae which includes, for example, Spirogyra and stoneworts. Algae that are carried by water are plankton, specifically phytoplankton. Algae constitute a polyphyletic group since they do not include a common ancestor, and although their plastids seem to have a single origin, from cyanobacteria, they were acquired in different ways. Green algae are examples of algae that have primary chloroplasts derived from endosymbiotic cyanobacteria. Diatoms and brown algae are examples of algae with secondary chloroplasts derived from an endosymbiotic red alga. Algae exhibit a wide range of reproductive strategies, from simple asexual cell division to complex forms of sexual reproduction. Algae lack the various structures that characterize land plants, such as the phyllids (leaf-like structures) of bryophytes, rhizoids of non-vascular plants, and the roots, leaves, and other organs found in tracheophytes (vascular plants). Most are phototrophic, although some are mixotrophic, deriving energy both from photosynthesis and uptake of organic carbon either by osmotrophy, myzotrophy, or phagotrophy. Some unicellular species of green algae, many golden algae, euglenids, dinoflagellates, and other algae have become heterotrophs (also called colorless or apochlorotic algae), sometimes parasitic, relying entirely on external e Document 2::: Phycology () is the scientific study of algae. Also known as algology, phycology is a branch of life science. Algae are important as primary producers in aquatic ecosystems. Most algae are eukaryotic, photosynthetic organisms that live in a wet environment. They are distinguished from the higher plants by a lack of true roots, stems or leaves. They do not produce flowers. 
Many species are single-celled and microscopic (including phytoplankton and other microalgae); many others are multicellular to one degree or another, some of these growing to large size (for example, seaweeds such as kelp and Sargassum). Phycology includes the study of prokaryotic forms known as blue-green algae or cyanobacteria. A number of microscopic algae also occur as symbionts in lichens. Phycologists typically focus on either freshwater or ocean algae, and further within those areas, either diatoms or soft algae. History of phycology While both the ancient Greeks and Romans knew of algae, and the ancient Chinese even cultivated certain varieties as food, the scientific study of algae began in the late 18th century with the description and naming of Fucus maximus (now Ecklonia maxima) in 1757 by Pehr Osbeck. This was followed by the descriptive work of scholars such as Dawson Turner and Carl Adolph Agardh, but it was not until later in the 19th century that efforts were made by J.V. Lamouroux and William Henry Harvey to create significant groupings within the algae. Harvey has been called "the father of modern phycology" in part for his division of the algae into four major divisions based upon their pigmentation. It was in the late 19th and early 20th century, that phycology became a recognized field of its own. Men such as Friedrich Traugott Kützing continued the descriptive work. In Japan, beginning in 1889, Kintarô Okamura not only provided detailed descriptions of Japanese coastal algae, he also provided comprehensive analysis of their distribution. Although R. K. Greville publi Document 3::: Zooplankton are the animal component of the planktonic community (the "zoo-" prefix comes from ). Plankton are aquatic organisms that are unable to swim effectively against currents. Consequently, they drift or are carried along by currents in the ocean, or by currents in seas, lakes or rivers. Zooplankton can be contrasted with phytoplankton, which are the plant component of the plankton community (the "phyto-" prefix comes from ). Zooplankton are heterotrophic (other-feeding), whereas phytoplankton are autotrophic (self-feeding). In other words, zooplankton cannot manufacture their own food. Rather, they must eat plants or other animals instead. In particular, they eat phytoplankton, which are generally smaller than zooplankton. Most zooplankton are microscopic but some (such as jellyfish) are macroscopic, meaning they can be seen with the naked eye. Many protozoans (single-celled protists that prey on other microscopic life) are zooplankton, including zooflagellates, foraminiferans, radiolarians, some dinoflagellates and marine microanimals. Macroscopic zooplankton include pelagic cnidarians, ctenophores, molluscs, arthropods and tunicates, as well as planktonic arrow worms and bristle worms. The distinction between plants and animals often breaks down in very small organisms. Recent studies of marine microplankton have indicated over half of microscopic plankton are mixotrophs. A mixotroph is an organism that can behave sometimes as though it were a plant and sometimes as though it were an animal, using a mix of autotrophy and heterotrophy. Many marine microzooplankton are mixotrophic, which means they could also be classified as phytoplankton. Overview Zooplankton (; ) are heterotrophic (sometimes detritivorous) plankton. The word zooplankton is derived from ; and . Zooplankton is a categorization spanning a range of organism sizes including small protozoans and large metazoans. 
It includes holoplanktonic organisms whose complete life cycle lies within t Document 4::: The class was erected to include some algae previously classified in the Xanthophyceae. Classification Class Eustigmatophyceae Hibberd & Leedale 1970 Order Eustigmatales Hibberd 1981 Genus Paraeustigmatos Fawley, Nemcová, & Fawley 2019 Family Eustigmataceae Hibberd 1981 [Chlorobothryaceae Pascher 1925; Pseudocharaciopsidaceae Lee & Bold ex Hibberd 1981] Genus ?Ellipsoidion Pascher 1937 Genus Chlorobotrys Bohlin 1901 Genus Eustigmatos Hibberd 1981 Genus Pseudocharaciopsis Lee & Bold 1973 Genus Pseudostaurastrum Chodat 1921 Genus Vischeria Pascher 1938 - 16 spp. Family Monodopsidaceae Hibberd 1981 [Loboceae Hegewald 2007] Genus Microchloropsis Fawley, Jameson & Fawley 2015 Genus Monodopsis Hibberd 1981 Genus Nannochloropsis Hibberd 1981 Genus Pseudotetraedriella Hegewald & Padisák 2007 Family Neomonodaceae Amaral et al. 2020 Genus ?Botryochloropsis Preisig & Wilhelm 1989 Genus Characiopsiella Amaral et al. 2020 Genus Munda Amaral et al. 2020 Genus Neomonodus Amaral et al. 2020 Genus Pseudellipsoidion Neustupa & Nemková 2 The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Algae are much simpler than protozoa. they are aquatic and contain this? A. chlorophyll B. cloning factor C. sporozoa D. testes Answer:
sciq-8254
multiple_choice
What do scientists use to search for other planets suitable for life?
[ "lasers", "optics", "microscopes", "telescopes" ]
D
Relavent Documents: Document 0::: The Nexus for Exoplanet System Science (NExSS) initiative is a National Aeronautics and Space Administration (NASA) virtual institute designed to foster interdisciplinary collaboration in the search for life on exoplanets. Led by the Ames Research Center, the NASA Exoplanet Science Institute, and the Goddard Institute for Space Studies, NExSS will help organize the search for life on exoplanets from participating research teams and acquire new knowledge about exoplanets and extrasolar planetary systems. History In 1995, astronomers using ground-based observatories discovered 51 Pegasi b, the first exoplanet orbiting a Sun-like star. NASA launched the Kepler space telescope in 2009 to search for Earth-size exoplanets. By 2015, they had confirmed more than a thousand exoplanets, while several thousand additional candidates awaited confirmation. To help coordinate efforts to sift through and understand the data, NASA needed a way for researchers to collaborate across disciplines. The success of the Virtual Planetary Laboratory research network at the University of Washington led Mary A. Voytek, director of the NASA Astrobiology Program, to model its structure and create the Nexus for Exoplanet System Science (NExSS) initiative. Leaders from three NASA research centers will run the program: Natalie Batalha of NASA's Ames Research Center, Dawn Gelino of the NASA Exoplanet Science Institute, and Anthony Del Genio of NASA's Goddard Institute for Space Studies. Research Functioning as a virtual institute, NExSS is currently composed of sixteen interdisciplinary science teams from ten universities, three NASA centers and two research institutes, who will work together to search for habitable exoplanets that can support life. The US teams were initially selected from a total of about 200 proposals; however, the coalition is expected to expand nationally and internationally as the project gets underway. Teams will also work with amateur citizen scientists who will have Document 1::: The European Astrobiology Network Association (EANA) coordinates and facilitates research expertise in astrobiology in Europe. EANA was created in 2001 to coordinate the different European centers in astrobiology and the related fields previously organized in paleontology, geology, atmospheric physics, planetary science and stellar physics. The association is administered by an Executive Council that is elected every three years and represents the European nations active in the field, such as Austria, Belgium, France, Germany, Italy, Portugal, Spain, etc. The EANA Executive Council is composed of a president, two vice-presidents, a treasurer, two secretaries, and councillors. Further information about the current Executive Council can be found at http://www.eana-net.eu/index.php?page=Discover/eananetwork. The EANA association strongly supports AbGradE – Astrobiology Graduates in Europe, an independent organisation that aims to support early-career scientists and students in astrobiology. Objectives The specific objectives of EANA are to: bring together active European researchers and link their research programs fund exchange visits between laboratories optimize the sharing of information and resources facilities for research promote this field of research to European funding agencies and politicians promote research on extremophiles of relevance to environmental issues in Europe interface with the Research Network with European bodies (e.g.
European Space Agency, and the European Commission) attract young scientists to participate promote public interest in astrobiology, and to educate the younger generation Document 2::: Astronomy education or astronomy education research (AER) refers both to the methods currently used to teach the science of astronomy and to an area of pedagogical research that seeks to improve those methods. Specifically, AER includes systematic techniques honed in science and physics education to understand what and how students learn about astronomy and determine how teachers can create more effective learning environments. Education is important to astronomy as it impacts both the recruitment of future astronomers and the appreciation of astronomy by citizens and politicians who support astronomical research. Astronomy has been taught throughout much of recorded human history, and has practical application in timekeeping and navigation. Teaching astronomy contributes to an understanding of physics and the origin of the world around us, a shared cultural background, and a sense of wonder and exploration. It includes education of the general public through planetariums, books, and instructive presentations, plus programs and tools for amateur astronomy, and University-level degree programs for professional astronomers. Astronomy organizations provide educational functions and societies in about 100 nation states around the world. In schools, particularly at the collegiate level, astronomy is aligned with physics and the two are often combined to form a Department of Physics and Astronomy. Some parts of astronomy education overlap with physics education, however, astronomy education has its own arenas, practitioners, journals, and research. This can be demonstrated in the identified 20-year lag between the emergence of AER and physics education research. The body of research in this field are available through electronic sources such as the Searchable Annotated Bibliography of Education Research (SABER) and the American Astronomical Society's database of the contents of their journal "Astronomy Education Review" (see link below). The National Aeronautics and Document 3::: The Space Science Institute (SSI) in Boulder, Colorado, is a nonprofit, public-benefit corporation formed in 1992. Its purpose is to create and maintain an environment where scientific research and education programs can flourish in an integrated fashion. SSI is among the four non-profit institutes in the US cited in a 2007 report by Nature, including Southwest Research Institute, Planetary Science Institute, and Eureka Scientific, which manage federal grants for non-tenure-track astronomers. Description SSI's research program encompasses the following areas: space physics, earth science, planetary science, and astrophysics. The flight operations branch manages the Cassini-Huygens spacecraft's visible camera instrument and provides spectacular images of Saturn and its moons and rings to the public. SSI participates in mission operations and is home to the Cassini Imaging Central Laboratory for OPerations (CICLOPS). The primary goal of SSI is to bring together researchers and educators to improve science education. Toward this end, the institute acts as an umbrella for researchers who wish to be independent of universities. In addition, it works with educators directly to improve teaching methods for astronomy. SSI has also produced several traveling exhibits for science museums, including Electric Space, Mars Quest, and Alien Earths. 
It is currently producing Giant Worlds. SSI provides management support for research scientists and principal investigators, which help them to submit proposals to major public funding agencies such as National Aeronautics and Space Administration (NASA), National Science Foundation (NSF), Space Telescope Science Institute (STSci), Department of Energy (DOE), and Jet Propulsion Laboratory (JPL) Principal investigators are supported by SSI though proposal budget preparation, proposal submission, and project reporting tools, and have competitive negotiated overhead rates. The institute is loosely affiliated with the University Document 4::: Enduring Quests and Daring Visions is a vision for astrophysics programs chartered by then-Director of NASA's Astrophysics Division, Paul Hertz, and released in late 2013. It lays out plans over 30 years as long-term goals and missions. Goals include mapping the Cosmic Microwave Background and finding Earth like exoplanets, to go deeper into space-time studying the Large Scale Structure of the Universe, extreme physics, and looking back farther in time. The panel that produced the vision included many notable American astrophysicists, including: Chryssa Kouveliotou, Eric Agol, Natalie Batalha, Misty Bentz, Alan Dressler, Scott Gaudi, Olivier Guyon, Enectali Figueroa-Feliciano, Feryal Ozel, Aki Roberge, Amber Straughn, and Joan Centrella. Examples of discussed missions include: Astro-H (Hitomi) Black Hole Mapper CMB Polarization Surveyor Cosmic Dawn Euclid ExoEarth Mapper Gaia Gravitational Wave Surveyor/Mapper Habitable Exoplanet Imaging Mission (HabEx) Far-Infrared Surveyor (later renamed the Origins Space Telescope) JEM-EUSO James Webb Space Telescope (JWST) Large UV Optical Infrared Surveyor (LUVOIR) Nancy Grace Roman Space Telescope Neutron Star Interior Composition Explorer (NICER) Transiting Exoplanet Survey Satellite (TESS) X-Ray Surveyor (later renamed the Lynx X-ray Observatory) The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do scientists use to search other planets suitable for living? A. lasers B. optics C. microscopes D. telescopes Answer:
sciq-4674
multiple_choice
What type of bonds are the attractive forces between the positively charged nuclei of the bonded atoms and one or more pairs of electrons that are located between the atoms?
[ "active", "reactive", "covalent", "gravitational" ]
C
Relavent Documents: Document 0::: An intramolecular force (or primary force) is any force that binds together the atoms making up a molecule or compound, not to be confused with intermolecular forces, which are the forces present between molecules. The subtle difference in the name comes from the Latin roots of English with inter meaning between or among and intra meaning inside. Chemical bonds are considered to be intramolecular forces which are often stronger than intermolecular forces present between non-bonding atoms or molecules. Types The classical model identifies three main types of chemical bonds — ionic, covalent, and metallic — distinguished by the degree of charge separation between participating atoms. The characteristics of the bond formed can be predicted by the properties of the constituent atoms, namely electronegativity. They differ in the magnitude of their bond enthalpies, a measure of bond strength, and thus affect the physical and chemical properties of compounds in different ways. The percent ionic character of a bond is directly proportional to the difference in electronegativity of the bonded atoms. Ionic bond An ionic bond can be approximated as complete transfer of one or more valence electrons of atoms participating in bond formation, resulting in a positive ion and a negative ion bound together by electrostatic forces. Electrons in an ionic bond tend to be mostly found around one of the two constituent atoms due to the large electronegativity difference between the two atoms, generally more than 1.9 (a greater difference in electronegativity results in a stronger bond); this is often described as one atom giving electrons to the other. This type of bond is generally formed between a metal and nonmetal, such as sodium and chlorine in NaCl. Sodium would give an electron to chlorine, forming a positively charged sodium ion and a negatively charged chloride ion. Covalent bond In a true covalent bond, the electrons are shared evenly between the two atoms of the bond; there is little or no charge separa Document 1::: Molecular binding is an attractive interaction between two molecules that results in a stable association in which the molecules are in close proximity to each other. It is formed when atoms or molecules bind together by sharing of electrons. It often, but not always, involves some chemical bonding. In some cases, the associations can be quite strong—for example, the protein streptavidin and the vitamin biotin have a dissociation constant (reflecting the ratio between bound and free biotin) on the order of 10^−14—and so the reactions are effectively irreversible. The result of molecular binding is sometimes the formation of a molecular complex in which the attractive forces holding the components together are generally non-covalent, and thus are normally energetically weaker than covalent bonds. Molecular binding occurs in biological complexes (e.g., between pairs or sets of proteins, or between a protein and a small molecule ligand it binds) and also in abiologic chemical systems, e.g. as in cases of coordination polymers and coordination networks such as metal-organic frameworks.
Types Molecular binding can be classified into the following types: Non-covalent – no chemical bonds are formed between the two interacting molecules, hence the association is fully reversible; Reversible covalent – a chemical bond is formed, however the free energy difference separating the noncovalently-bonded reactants from the bonded product is near equilibrium and the activation barrier is relatively low, such that the reverse reaction which cleaves the chemical bond easily occurs; Irreversible covalent – a chemical bond is formed in which the product is thermodynamically much more stable than the reactants, such that the reverse reaction does not take place. Bound molecules are sometimes called a "molecular complex"—the term generally refers to non-covalent associations. Non-covalent interactions can effectively become irreversible; for example, tight binding inhibitors of enzymes Document 2::: Bond order potential is a class of empirical (analytical) interatomic potentials which is used in molecular dynamics and molecular statics simulations. Examples include the Tersoff potential, the EDIP potential, the Brenner potential, the Finnis–Sinclair potentials, ReaxFF, and the second-moment tight-binding potentials. They have the advantage over conventional molecular mechanics force fields in that they can, with the same parameters, describe several different bonding states of an atom, and thus to some extent may be able to describe chemical reactions correctly. The potentials were developed partly independently of each other, but share the common idea that the strength of a chemical bond depends on the bonding environment, including the number of bonds and possibly also angles and bond lengths. It is based on the Linus Pauling bond order concept and can be written in the form V_ij(r_ij) = V_repulsive(r_ij) + b_ijk V_attractive(r_ij). This means that the potential is written as a simple pair potential depending on the distance between two atoms r_ij, but the strength of this bond is modified by the environment of atom i via the bond order b_ijk. b_ijk is a function that in Tersoff-type potentials depends inversely on the number of bonds to the atom i, the bond angles between sets of three atoms i, j, k, and optionally on the relative bond lengths r_ij, r_ik. In the case of only one atomic bond (as in a diatomic molecule), b_ijk = 1, which corresponds to the strongest and shortest possible bond. In the other limiting case, for increasingly many bonds within some interaction range, b_ijk → 0 and the potential turns completely repulsive. Alternatively, the potential energy can be written in the embedded atom model form V_i = V_pair(r_ij) − D·sqrt(ρ_i), where ρ_i is the electron density at the location of atom i. These two forms for the energy can be shown to be equivalent (in the special case that the bond-order function contains no angular dependence). A more detailed summary of how the bond order concept can be motivated by the second-moment ap Document 3::: A chemical bonding model is a theoretical model used to explain atomic bonding structure, molecular geometry, properties, and reactivity of physical matter. This can refer to: VSEPR theory, a model of molecular geometry. Valence bond theory, which describes molecular electronic structure with localized bonds and lone pairs. Molecular orbital theory, which describes molecular electronic structure with delocalized molecular orbitals. Crystal field theory, an electrostatic model for transition metal complexes. Ligand field theory, the application of molecular orbital theory to transition metal complexes.
Document 4::: A bonding electron is an electron involved in chemical bonding. This can refer to: Chemical bond, a lasting attraction between atoms, ions or molecules Covalent bond or molecular bond, a sharing of electron pairs between atoms Bonding molecular orbital, an attraction between the atomic orbitals of atoms in a molecule The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of bonds are the attractive forces between the positively charged nuclei of the bonded atoms and one or more pairs of electrons that are located between the atoms? A. active B. reactive C. covalent D. gravitational Answer:
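The percent-ionic-character statement in Document 0 above can be made quantitative with Pauling's empirical relation, %ionic = 100·(1 − exp(−(Δχ)²/4)). The relation itself is standard textbook chemistry rather than something stated in the snippet, so treat this as an illustrative sketch:

```python
import math

# Illustrative estimate (Pauling's empirical relation, not from the retrieved
# snippet): percent ionic character from the electronegativity difference.
def percent_ionic(chi_a: float, chi_b: float) -> float:
    d = abs(chi_a - chi_b)
    return 100.0 * (1.0 - math.exp(-(d ** 2) / 4.0))

# NaCl: chi(Na) ~ 0.93, chi(Cl) ~ 3.16 -> ~71% ionic, consistent with the
# "difference above 1.9 means predominantly ionic" rule of thumb in Document 0.
print(f"NaCl: {percent_ionic(0.93, 3.16):.0f}% ionic")
# Identical atoms -> 0% ionic, i.e. a pure covalent bond.
print(f"H-H:  {percent_ionic(2.20, 2.20):.0f}% ionic")
```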
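A minimal numerical sketch of the bond-order idea described in Document 2 above. This is a toy model under stated assumptions — arbitrary Morse-like parameters and a bare 1/sqrt(z) coordination dependence standing in for the full Tersoff b_ijk (angular terms omitted) — not a production interatomic potential:

```python
import math

A, B = 1000.0, 200.0   # repulsive/attractive prefactors (arbitrary toy units)
LAM1, LAM2 = 3.0, 1.5  # decay constants (illustrative)

def bond_order(z: int) -> float:
    """Toy bond order: 1 for a lone bond, decaying as coordination z grows."""
    return 1.0 / math.sqrt(z) if z > 0 else 1.0

def pair_energy(r: float, z: int) -> float:
    """V_ij = V_repulsive(r) + b_ij * V_attractive(r), the Tersoff-style split."""
    v_rep = A * math.exp(-LAM1 * r)
    v_att = -B * math.exp(-LAM2 * r)
    return v_rep + bond_order(z) * v_att

# A dimer bond (z = 1) is bound; the same pair inside a highly coordinated
# environment (z = 4) sees weakened attraction and here is net repulsive,
# matching the limiting behavior described in the snippet.
print(pair_energy(1.2, 1))  # ~ -5.7  (bound)
print(pair_energy(1.2, 4))  # ~ +10.8 (repulsive)
```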
sciq-11258
multiple_choice
Rise divided by run is called what?
[ "hill", "mound", "steep", "slope" ]
D
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases; (b) decreases; (c) stays the same; (d) impossible to tell / need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / Calc BC or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions). AP Calculus AB AP Calculus AB is an Advanced Placement calculus course. It is traditionally taken after precalculus and is the first calculus course offered at most schools except for possibly a regular calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams. Purpose According to the College Board: Topic outline The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus.
Analysis of graphs (predicting and explaining behavior) Limits of functions (one and two sided) Asymptotic and unbounded behavior Continuity Derivatives Concept At a point As a function Applications Higher order derivatives Techniques Integrals Interpretations Properties Applications Techniques Numerical approximations Fundamental theorem of calculus Antidifferentiation L'Hôpital's rule Separable differential equations AP Calculus BC AP Calculus BC is equivalent to a full year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus). Purpose According to the College Board, Topic outline AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following: Convergence tests for series Taylor series Parametric equations Polar functions (inclu Document 2::: Further Mathematics is the title given to a number of advanced secondary mathematics courses. The term "Higher and Further Mathematics", and the term "Advanced Level Mathematics", may also refer to any of several advanced mathematics courses at many institutions. In the United Kingdom, Further Mathematics describes a course studied in addition to the standard mathematics AS-Level and A-Level courses. In the state of Victoria in Australia, it describes a course delivered as part of the Victorian Certificate of Education (see § Australia (Victoria) for a more detailed explanation). Globally, it describes a course studied in addition to GCE AS-Level and A-Level Mathematics, or one which is delivered as part of the International Baccalaureate Diploma. In other words, more mathematics can also be referred to as part of advanced mathematics, or advanced level math. United Kingdom Background A qualification in Further Mathematics involves studying both pure and applied modules. Whilst the pure modules (formerly known as Pure 4–6 or Core 4–6, now known as Further Pure 1–3, where 4 exists for the AQA board) build on knowledge from the core mathematics modules, the applied modules may start from first principles. The structure of the qualification varies between exam boards. With regard to Mathematics degrees, most universities do not require Further Mathematics, and may incorporate foundation math modules or offer "catch-up" classes covering any additional content. Exceptions are the University of Warwick, the University of Cambridge which requires Further Mathematics to at least AS level; University College London requires or recommends an A2 in Further Maths for its maths courses; Imperial College requires an A in A level Further Maths, while other universities may recommend it or may promise lower offers in return. Some schools and colleges may not offer Further mathematics, but online resources are available Although the subject has about 60% of its cohort obtainin Document 3::: Advanced Level (A-Level) Mathematics is a qualification of further education taken in the United Kingdom (and occasionally other countries as well). In the UK, A-Level exams are traditionally taken by 17-18 year-olds after a two-year course at a sixth form or college. Advanced Level Further Mathematics is often taken by students who wish to study a mathematics-based degree at university, or related degree courses such as physics or computer science. 
Like other A-level subjects, mathematics has been assessed in a modular system since the introduction of Curriculum 2000, whereby each candidate must take six modules, with the best achieved score in each of these modules (after any retake) contributing to the final grade. Most students will complete three modules in one year, which will create an AS-level qualification in their own right and will complete the A-level course the following year—with three more modules. The system in which mathematics is assessed is changing for students starting courses in 2017 (as part of the A-level reforms first introduced in 2015), where the reformed specifications have reverted to a linear structure with exams taken only at the end of the course in a single sitting. In addition, while schools could choose freely between taking Statistics, Mechanics or Discrete Mathematics (also known as Decision Mathematics) modules with the ability to specialise in one branch of applied Mathematics in the older modular specification, in the new specifications, both Mechanics and Statistics were made compulsory, with Discrete Mathematics being made exclusive as an option to students pursuing a Further Mathematics course. The first assessment opportunity for the new specification is 2018 and 2019 for A-levels in Mathematics and Further Mathematics, respectively. 2000s specification Prior to the 2017 reform, the basic A-Level course consisted of six modules, four pure modules (C1, C2, C3, and C4) and two applied modules in Statistics, Mechanics Document 4::: The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020. Structure The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five digits long, depending on how many levels of the classification scheme are used. The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example: 53 is the classification for differential geometry 53A is the classification for classical differential geometry 53A45 is the classification for vector and tensor analysis First level At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including: Fluid mechanics Quantum mechanics Geophysics Optics and electromagnetic theory All valid MSC classification codes must have at least the first-level identifier. Second level The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline. 
For example, for differential geometry, the top-level code is 53, and the second-level codes are: A for classical differential geometry B for local differential geometry C for glo The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Rise divided by run is called what? A. hill B. mound C. steep D. slope Answer:
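Since the record above defines slope as rise divided by run, a one-line worked example (with made-up points, not taken from the source) may help:

```latex
% Slope as rise over run, for the illustrative points (1, 2) and (5, 10).
m \;=\; \frac{\text{rise}}{\text{run}}
  \;=\; \frac{y_2 - y_1}{x_2 - x_1}
  \;=\; \frac{10 - 2}{5 - 1}
  \;=\; \frac{8}{4}
  \;=\; 2
```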
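The thermodynamics conceptual question quoted in Document 0 of this record has answer (b), decreases. The snippet does not derive this, but the standard argument for a reversible adiabatic expansion of an ideal gas is short:

```latex
% Reversible adiabatic expansion of an ideal gas (standard background result,
% not derived in the retrieved snippet). First law with dQ = 0:
n C_V \, dT \;=\; -p \, dV \;=\; -\frac{nRT}{V}\, dV
\;\;\Longrightarrow\;\;
T V^{\gamma - 1} = \text{const}, \qquad \gamma = C_p / C_V > 1 .
% As V increases, T must therefore decrease.
```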
sciq-3979
multiple_choice
What do we call structures that have lost their use through evolution, which serve as important evidence of evolution?
[ "primordial", "adaptative", "extinct", "vestigial" ]
D
Relavent Documents: Document 0::: Gerd B. Müller (born 1953) is an Austrian biologist who is emeritus professor at the University of Vienna where he was the head of the Department of Theoretical Biology in the Center for Organismal Systems Biology. His research interests focus on vertebrate limb development, evolutionary novelties, evo-devo theory, and the Extended Evolutionary Synthesis. He is also concerned with the development of 3D based imaging tools in developmental biology. Biography Müller received an M.D. in 1979 and a Ph.D. in zoology in 1985, both from the University of Vienna. He has been a sabbatical fellow at the Department of Developmental Biology, Dalhousie University, Canada, (1988) and a visiting scholar at the Museum of Comparative Zoology, Harvard University, and received his Habilitation in Anatomy and Embryology in 1989. He is a founding member of the Konrad Lorenz Institute for Evolution and Cognition Research, Klosterneuburg, Austria, of which he has been President since 1997. Müller is on the editorial boards of several scientific journals, including Biological Theory where he serves as an associate editor. He is editor-in-chief of the Vienna Series in Theoretical Biology, a book series devoted to theoretical developments in the biosciences, published by MIT Press. Scientific contribution Müller has published on developmental imaging, vertebrate limb development, the origins of phenotypic novelty, EvoDevo theory, and evolutionary theory. With the cell and developmental biologist Stuart Newman, Müller co-edited the book Origination of Organismal Form (MIT Press, 2003). This book on evolutionary developmental biology is a collection of papers on generative mechanisms that were plausibly involved in the origination of disparate body forms during early periods of organismal life. Particular attention is given to epigenetic factors, such as physical determinants and environmental parameters, that may have led to the spontaneous emergence of bodyplans and organ forms during a Document 1::: Vestigiality is the retention, during the process of evolution, of genetically determined structures or attributes that have lost some or all of the ancestral function in a given species. Assessment of the vestigiality must generally rely on comparison with homologous features in related species. The emergence of vestigiality occurs by normal evolutionary processes, typically by loss of function of a feature that is no longer subject to positive selection pressures when it loses its value in a changing environment. The feature may be selected against more urgently when its function becomes definitively harmful, but if the lack of the feature provides no advantage, and its presence provides no disadvantage, the feature may not be phased out by natural selection and persist across species. Examples of vestigial structures (also called degenerate, atrophied, or rudimentary organs) are the loss of functional wings in island-dwelling birds; the human vomeronasal organ; and the hindlimbs of the snake and whale. Overview Vestigial features may take various forms; for example, they may be patterns of behavior, anatomical structures, or biochemical processes. Like most other physical features, however functional, vestigial features in a given species may successively appear, develop, and persist or disappear at various stages within the life cycle of the organism, ranging from early embryonic development to late adulthood. 
Vestigiality, biologically speaking, refers to organisms retaining organs that have seemingly lost their original function. Vestigial organs are common evolutionary knowledge. In addition, the term vestigiality is useful in referring to many genetically determined features, either morphological, behavioral, or physiological; in any such context, however, it need not follow that a vestigial feature must be completely useless. A classic example at the level of gross anatomy is the human vermiform appendix, vestigial in the sense of retaining no significa Document 2::: Tinbergen's four questions, named after 20th century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. These are also commonly referred to as levels of analysis. It suggests that an integrative understanding of behaviour must include ultimate (evolutionary) explanations, in particular: behavioural adaptive functions phylogenetic history; and the proximate explanations underlying physiological mechanisms ontogenetic/developmental history. Four categories of questions and explanations When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny). This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem. Evolutionary (ultimate) explanations First question: Function (adaptation) Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive. The literature conceptualizes the relationship between function and evolution in two ways. On the one hand, function Document 3::: Exaptation and the related term co-option describe a shift in the function of a trait during evolution. For example, a trait can evolve because it served one particular function, but subsequently it may come to serve another. Exaptations are common in both anatomy and behaviour. Bird feathers are a classic example. Initially they may have evolved for temperature regulation, but later were adapted for flight. When feathers were first used to aid in flight, that was an exaptive use. They have since then been shaped by natural selection to improve flight, so in their current state they are best regarded as adaptations for flight. So it is with many structures that initially took on a function as an exaptation: once molded for a new function, they become further adapted for that function. 
Interest in exaptation relates to both the process and products of evolution: the process that creates complex traits and the products (functions, anatomical structures, biochemicals, etc.) that may be imperfectly developed. The term "exaptation" was proposed by Stephen Jay Gould and Elisabeth Vrba, as a replacement for 'pre-adaptation', which they considered to be a teleologically loaded term. History and definitions The idea that the function of a trait might shift during its evolutionary history originated with Charles Darwin (). For many years the phenomenon was labeled "preadaptation", but since this term suggests teleology in biology, appearing to conflict with natural selection, it has been replaced by the term exaptation. The idea had been explored by several scholars when in 1982 Stephen Jay Gould and Elisabeth Vrba introduced the term "exaptation". However, this definition had two categories with different implications for the role of adaptation. (1) A character, previously shaped by natural selection for a particular function (an adaptation), is coopted for a new use—cooptation. (2) A character whose origin cannot be ascribed to the direct action of natural selection ( Document 4::: In biology, homology is similarity due to shared ancestry between a pair of structures or genes in different taxa. A common example of homologous structures is the forelimbs of vertebrates, where the wings of bats and birds, the arms of primates, the front flippers of whales, and the forelegs of four-legged vertebrates like dogs and crocodiles are all derived from the same ancestral tetrapod structure. Evolutionary biology explains homologous structures adapted to different purposes as the result of descent with modification from a common ancestor. The term was first applied to biology in a non-evolutionary context by the anatomist Richard Owen in 1843. Homology was later explained by Charles Darwin's theory of evolution in 1859, but had been observed before this, from Aristotle onwards, and it was explicitly analysed by Pierre Belon in 1555. In developmental biology, organs that developed in the embryo in the same manner and from similar origins, such as from matching primordia in successive segments of the same animal, are serially homologous. Examples include the legs of a centipede, the maxillary palp and labial palp of an insect, and the spinous processes of successive vertebrae in a vertebral column. Male and female reproductive organs are homologous if they develop from the same embryonic tissue, as do the ovaries and testicles of mammals including humans. Sequence homology between protein or DNA sequences is similarly defined in terms of shared ancestry. Two segments of DNA can have shared ancestry because of either a speciation event (orthologs) or a duplication event (paralogs). Homology among proteins or DNA is inferred from their sequence similarity. Significant similarity is strong evidence that two sequences are related by divergent evolution from a common ancestor. Alignments of multiple sequences are used to discover the homologous regions. Homology remains controversial in animal behaviour, but there is suggestive evidence that, for example, dom The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do we call structures that have lost their use through evolution, which serve as important evidence of evolution? A. primordial B. adaptative C. extinct D. vestigial Answer:
sciq-5921
multiple_choice
What phenomenon occurs when strong winds blow surface water away from shore, allowing deeper water to flow to the surface and take its place?
[ "tsunami", "hurricane", "upwelling", "percolating" ]
C
Relavent Documents: Document 0::: Branched flow refers to a phenomenon in wave dynamics that produces a tree-like pattern involving successive mostly forward scattering events by smooth obstacles deflecting traveling rays or waves. Sudden and significant momentum or wavevector changes are absent, but accumulated small changes can lead to large momentum changes. The path of a single ray is less important than the environs around a ray, which rotate, compress, and stretch around in an area-preserving way. Even more revealing are groups, or manifolds of neighboring rays extending over significant zones. Starting rays out from a point but varying their direction over a range, one to the next, or from different points along a line all with the same initial directions are examples of a manifold. Waves have analogous launching conditions, such as a point source spraying in many directions, or an extended plane wave heading in one direction. The ray bending or refraction leads to characteristic structure in phase space and nonuniform distributions in coordinate space that look somehow universal and resemble branches in trees or stream beds. The branches take on non-obvious paths through the refracting landscape that are indirect and nonlocal results of terrain already traversed. For a given refracting landscape, the branches will look completely different depending on the initial manifold. Examples Two-dimensional electron gas Branched flow was first identified in experiments with a two-dimensional electron gas. Electrons flowing from a quantum point contact were scanned using a scanning probe microscope. Instead of the usual diffraction patterns, the electrons flowed forming branching strands that persisted for several correlation lengths of the background potential. Ocean dynamics Focusing of random waves in the ocean can also lead to branched flow. The fluctuation in the depth of the ocean floor can be described as a random potential. A tsunami wave propagating in such a medium will form branches which Document 1::: Stable stratification of fluids occurs when each layer is less dense than the one below it. Unstable stratification is when each layer is denser than the one below it. Buoyancy forces tend to preserve stable stratification; the higher layers float on the lower ones. In unstable stratification, on the other hand, buoyancy forces cause convection. The less-dense layers rise through the denser layers above, and the denser layers sink through the less-dense layers below. Stratifications can become more or less stable if layers change density. The processes involved are important in many science and engineering fields. Destabilization and mixing Stable stratifications can become unstable if layers change density. This can happen due to outside influences (for instance, if water evaporates from a freshwater lens, making it saltier and denser, or if a pot or layered beverage is heated from below, making the bottom layer less dense). However, it can also happen due to internal diffusion of heat (the warmer layer slowly heats the adjacent cooler one) or other physical properties. This often causes mixing at the interface, creating new diffusive layers. Sometimes, two physical properties diffuse between layers simultaneously; salt and temperature, for instance. This may form diffusive layers or even salt fingering, when the surfaces of the diffusive layers become so wavy that there are "fingers" of layers reaching up and down.
Not all mixing is driven by density changes. Other physical forces may also mix stably-stratified layers. Sea spray and whitecaps (foaming whitewater on waves) are examples of water mixed into air, and air into water, respectively. In a fierce storm the air/water boundary may grow indistinct. Some of these wind waves are Kelvin-Helmholtz waves. Depending on the size of the velocity difference and the size of the density contrast between the layers, Kelvin-Helmholtz waves can look different. For instance, between two l Document 2::: In fluid dynamics, wave setup is the increase in mean water level due to the presence of breaking waves. Similarly, wave setdown is a wave-induced decrease of the mean water level before the waves break (during the shoaling process). For short, the whole phenomenon is often denoted as wave setup, including both increase and decrease of mean elevation. This setup is primarily present in and near the coastal surf zone. Besides a spatial variation in the (mean) wave setup, also a variation in time may be present – known as surf beat – causing infragravity wave radiation. Wave setup can be mathematically modeled by considering the variation in radiation stress. Radiation stress is the tensor of excess horizontal-momentum fluxes due to the presence of the waves. In and near the coastal surf zone As a progressive wave approaches shore and the water depth decreases, the wave height increases due to wave shoaling. As a result, there is additional wave-induced flux of horizontal momentum. The horizontal momentum equations of the mean flow requires this additional wave-induced flux to be balanced: this causes a decrease in the mean water level before the waves break, called a "setdown". After the waves break, the wave energy flux is no longer constant, but decreasing due to energy dissipation. The radiation stress therefore decreases after the break point, causing a free surface level increase to balance: wave setup. Both of the above descriptions are specifically for beaches with mild bed slope. Wave setup is particularly of concern during storm events, when the effects of big waves generated by wind from the storm are able to increase the mean sea level (by wave setup), enhancing the risks of damage to coastal infrastructure. Wave setup value The radiation stress pushes the water towards the coast, and is then pushed up, causing an increase in the water level. At a given moment, that increase is such that its hydrostratic pressure is equal to the radiation stress. Fr Document 3::: Wind-wave dissipation or "swell dissipation" is process in which a wave generated via a weather system loses its mechanical energy transferred from the atmosphere via wind. Wind waves, as their name suggests, are generated by wind transferring energy from the atmosphere to the ocean's surface, capillary gravity waves play an essential role in this effect, "wind waves" or "swell" are also known as surface gravity waves. General physics and theory The process of wind-wave dissipation can be explained by applying energy spectrum theory in a similar manner as for the formation of wind-waves (generally assuming spectral dissipation is a function of wave spectrum). However, although even some of recent innovative improvements for field observations (such as Banner & Babanin et al. 
) have contributed to solving the riddles of wave-breaking behavior, there is still no clear understanding of the exact theory of the wind-wave dissipation process, because of its non-linear behavior. Based on past and present observations and derived theories, the physics of ocean-wave dissipation can be categorized by the regions a wave passes through, according to water depth. In deep water, wave dissipation occurs by the actions of friction or drag forces, such as opposite-directed winds or viscous forces generated by turbulent flows—usually nonlinear forces. In shallow water, wave dissipation mostly takes the form of shore wave breaking (see Types of wave breaking). Some simple general descriptions of wind-wave dissipation (defined by Luigi Cavaleri et al.) have been proposed that consider only ocean surface waves such as wind waves. For simplicity, many proposed mechanisms ignore the interactions of waves with the vertical structure of the upper layers of the ocean. Sources of wind-wave dissipation In general, the physics of wave dissipation can be categorized by considering its dissipation sources, such as 1) wa Document 4::: Wind setup, also known as wind effect or storm effect, refers to the rise in water level in seas or lakes caused by winds pushing the water in a specific direction. As the wind moves across the water's surface, it applies a shear stress to the water, prompting the formation of a wind-driven current. When this current encounters a shoreline, the water level along the shore increases, generating a hydrostatic counterforce in equilibrium with the shear force. During a storm, wind setup is a component of the overall storm surge. For instance, in the Netherlands, the wind setup during a storm surge can elevate water levels by approximately 3 metres above the normal tide. In the case of cyclones, the wind setup can reach up to 5 metres. This can result in a significant rise in water levels, particularly when the water is forced into a shallow, funnel-shaped area. Observation In lakes, water level fluctuations are typically attributed to wind setup. This effect is particularly noticeable in lakes with well-regulated water levels, where the wind setup can be clearly observed. By comparing this with the wind over the lake, the relationship between wind speed, water depth, and fetch length can be accurately determined. This is especially feasible in lakes where water depth remains fairly consistent, such as the IJsselmeer. At sea, wind setup is usually not directly observable, as the observed water level is a combination of both the tide and the wind setup. To isolate the wind setup, the (calculated) astronomical tide must be subtracted from the observed water level. For example, during the North Sea flood of 1953, the highest water level along the Dutch coast, recorded at the Vlissingen tidal station, was 2.79 metres, but this was not the location of the highest wind setup, which was observed at Scheveningen with a measurement of 3.52 metres. Notably, the highest wind setup ever recorded in the Netherlands (3.63 metres) was in Dintelsas, Steenbergen in 195 The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What phenomenon occurs when strong winds blow surface water away from shore, allowing deeper water to flow to the surface and take its place? A. tsunami B. hurricane C. upwelling D. percolating Answer:
sciq-10399
multiple_choice
Which type of galaxies contain very little gas and dust, are red to yellow in color, contain over a trillion stars and have mostly old stars?
[ "giant orbital galaxies", "blue elliptical galaxies", "giant elliptical galaxies", "giant gaseous galaxies​" ]
C
Relevant Documents: Document 0::: Types: Quasar; Supermassive black hole; Hypercompact stellar system (hypothetical object organized around a supermassive black hole). Intermediate-mass black holes and candidates: Cigar Galaxy (Messier 82, NGC 3034); GCIRS 13E; HLX-1; M82 X-1; Messier 15 (NGC 7078); Messier 110 (NGC 205); Sculptor Galaxy (NGC 253); Triangulum Galaxy (Messier 33, NGC 598 Document 1::: A brightest cluster galaxy (BCG) is defined as the brightest galaxy in a cluster of galaxies. BCGs include the most massive galaxies in the universe. They are generally elliptical galaxies which lie close to the geometric and kinematical center of their host galaxy cluster, hence at the bottom of the cluster potential well. They are also generally coincident with the peak of the cluster X-ray emission. Formation scenarios for BCGs include: Cooling flow—Star formation from the central cooling flow in high density cooling centers of X-ray cluster halos. The study of accretion populations in BCGs has cast doubt over this theory and astronomers have seen no evidence of cooling flows in radiative cooling clusters. The two remaining theories exhibit healthier prospects. Galactic cannibalism—Galaxies sink to the center of the cluster due to dynamical friction and tidal stripping. Galactic merger—Rapid galactic mergers between several galaxies take place during cluster collapse. It is possible to differentiate the cannibalism model from the merging model by considering the formation period of the BCGs. In the cannibalism model, there are numerous small galaxies present in the evolved cluster, whereas in the merging model, a hierarchical cosmological model is expected due to the collapse of clusters. It has been shown that the orbital decay of cluster galaxies is not effective enough to account for the growth of BCGs. The merging model is now generally accepted as the most likely one, but recent observations are at odds with some of its predictions. For example, it has been found that the stellar mass of BCGs was assembled much earlier than the merging model predicts. BCGs are divided into various classes of galaxies: giant ellipticals (gE), D galaxies and cD galaxies. cD and D galaxies both exhibit an extended diffuse envelope surrounding an elliptical-like nucleus akin to regular elliptical galaxies. The light profiles of BCGs are often described by a Sersic surface Document 2::: A dark galaxy is a hypothesized galaxy with no (or very few) stars. They received their name because they have no visible stars but may be detectable if they contain significant amounts of gas. Astronomers have long theorized the existence of dark galaxies, but there are no confirmed examples to date. Dark galaxies are distinct from intergalactic gas clouds caused by galactic tidal interactions, since these gas clouds do not contain dark matter, so they do not technically qualify as galaxies. Distinguishing between intergalactic gas clouds and galaxies is difficult; most candidate dark galaxies turn out to be tidal gas clouds. The best candidate dark galaxies to date include HI1225+01, AGC229385, and numerous gas clouds detected in studies of quasars. On 25 August 2016, astronomers reported that Dragonfly 44, an ultra diffuse galaxy (UDG) with the mass of the Milky Way galaxy, but with nearly no discernible stars or galactic structure, is made almost entirely of dark matter. 
Observational evidence Large surveys with sensitive but low-resolution radio telescopes like Arecibo or the Parkes Telescope look for 21-cm emission from atomic hydrogen in galaxies. These surveys are then matched to optical surveys to identify any objects with no optical counterpart; i.e., sources with no stars. Another way astronomers search for dark galaxies is to look for hydrogen absorption lines in the spectra of background quasars. This technique has revealed many intergalactic clouds of hydrogen, but following up on candidate dark galaxies is difficult, since these sources tend to be too far away and are often optically drowned out by the bright light from the quasars. Nature of dark galaxies Origin In 2005, astronomers discovered gas cloud VIRGOHI21 and attempted to determine what it was and why it exerted such a massive gravitational pull on galaxy NGC 4254. After years of ruling out other possible explanations, some have concluded that VIRGOHI21 is a dark galaxy. Size The actua Document 3::: The Morphs collaboration was a coordinated study to determine the morphologies of galaxies in distant clusters and to investigate the evolution of galaxies as a function of environment and epoch. Eleven clusters were examined and a detailed ground-based and space-based study was carried out. The project was begun in 1997 based upon the earlier observations by two groups using data from images derived from the pre-refurbished Hubble Space Telescope. It was a collaboration of Alan Dressler and Augustus Oemler, Jr., at the Observatory of the Carnegie Institution of Washington, Warrick J. Couch at the University of New South Wales, Richard Ellis at Caltech, Bianca Poggianti at the University of Padua, Amy Barger at the University of Hawaii's Institute for Astronomy, Harvey Butcher at ASTRON, and Ray M. Sharples and Ian Smail at Durham University. Results were published through 2000. The collaboration sought answers to the differences in the origins of the various galaxy types — elliptical, lenticular, and spiral. The studies found that elliptical galaxies were the oldest and formed from the violent merger of other galaxies about two to three billion years after the Big Bang. Star formation in elliptical galaxies ceased about that time. On the other hand, new stars are still forming in the spiral arms of spiral galaxies. Lenticular galaxies (S0) are intermediate between the first two. They contain structures similar to spiral arms, but devoid of the gas and new stars of the spiral galaxies. Lenticular galaxies are the prevalent form in rich galaxy clusters, which suggests that spirals may be transformed into lenticular galaxies as time progresses. The exact process may be related to high galactic density, or to the total mass in a rich cluster's central core. The Morphs collaboration found that one of the principal mechanisms of this transformation involves the interaction among spiral galaxies, as they fall toward the core of the cluster. The Inamori Magellan Areal Camer Document 4::: CLASS B1359+154 is a quasar, or quasi-stellar object, that has a redshift of 3.235. A group of three foreground galaxies at a redshift of about 1 acts as a gravitational lens. The result is a rare example of a sixfold multiply imaged quasar. See also Twin Quasar Einstein Cross The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. 
Which type of galaxies contain very little gas and dust, are red to yellow in color, contain over a trillion stars and have mostly old stars? A. giant orbital galaxies B. blue elliptical galaxies C. giant elliptical galaxies D. giant gaseous galaxies Answer:
sciq-11376
multiple_choice
Do particles collide more in two reactants when they are both in fluid forms or solid forms?
[ "neither", "fluid", "solid", "plasma" ]
B
Relevant Documents: Document 0::: Interface and colloid science is an interdisciplinary intersection of branches of chemistry, physics, nanoscience and other fields dealing with colloids, heterogeneous systems consisting of a mechanical mixture of particles between 1 nm and 1000 nm dispersed in a continuous medium. A colloidal solution is a heterogeneous mixture in which the particle size of the substance is intermediate between a true solution and a suspension, i.e. between 1 and 1000 nm. Smoke from a fire is an example of a colloidal system in which tiny particles of solid float in air. Just like true solutions, colloidal particles are small and cannot be seen by the naked eye. They easily pass through filter paper. But colloidal particles are big enough to be blocked by parchment paper or an animal membrane. Interface and colloid science has applications and ramifications in the chemical industry, pharmaceuticals, biotechnology, ceramics, minerals, nanotechnology, and microfluidics, among others. There are many books dedicated to this scientific discipline, and there is a glossary of terms, Nomenclature in Dispersion Science and Technology, published by the US National Institute of Standards and Technology. See also Interface (matter) Electrokinetic phenomena Surface science Document 1::: In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution. The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible"). The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first. The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy. Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears. The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property de Document 2::: A colloid is a mixture in which one substance consisting of microscopically dispersed insoluble particles is suspended throughout another substance. 
Some definitions specify that the particles must be dispersed in a liquid, while others extend the definition to include substances like aerosols and gels. The term colloidal suspension refers unambiguously to the overall mixture (although a narrower sense of the word suspension is distinguished from colloids by larger particle size). A colloid has a dispersed phase (the suspended particles) and a continuous phase (the medium of suspension). The dispersed phase particles have a diameter of approximately 1 nanometre to 1 micrometre. Some colloids are translucent because of the Tyndall effect, which is the scattering of light by particles in the colloid. Other colloids may be opaque or have a slight color. Colloidal suspensions are the subject of interface and colloid science. This field of study was begun in 1845 by Francesco Selmi and expanded by Michael Faraday and Thomas Graham, who coined the term colloid in 1861. Classification of colloids Colloids can be classified as follows: Homogeneous mixtures with a dispersed phase in this size range may be called colloidal aerosols, colloidal emulsions, colloidal suspensions, colloidal foams, colloidal dispersions, or hydrosols. Hydrocolloids Hydrocolloids describe certain chemicals (mostly polysaccharides and proteins) that are colloidally dispersible in water. Thus becoming effectively "soluble", they change the rheology of water by raising the viscosity and/or inducing gelation. They may provide other interactive effects with other chemicals, in some cases synergistic, in others antagonistic. Using these attributes, hydrocolloids are very useful chemicals, since in many areas of technology, from foods through pharmaceuticals, personal care and industrial applications, they can provide stabilization, destabilization and separation, gelation, flow control, crystallization cont Document 3::: A powder is an assembly of dry particles dispersed in air. If two different powders are mixed perfectly, theoretically, three types of powder mixtures can be obtained: the random mixture, the ordered mixture or the interactive mixture. Different powder types A powder is called free-flowing if the particles do not stick together. If particles are cohesive, they cling to one another to form aggregates. The significance of cohesion increases with decreasing size of the powder particles; particles smaller than 100 µm are generally cohesive. Random mixture A random mixture can be obtained if two different free-flowing powders of approximately the same particle size, density and shape are mixed. Only primary particles are present in this type of mixture, i.e., the particles are not cohesive and do not cling to one another. The mixing time will determine the quality of the random mixture. However, if powders with particles of different size, density or shape are mixed, segregation can occur. Segregation will cause separation of the powders; for example, lighter particles will be prone to travel to the top of the mixture whereas heavier particles are kept at the bottom. Ordered mixture The term ordered mixture was first introduced to describe a completely homogeneous mixture where the two components adhere to each other to form ordered units. However, a completely homogeneous mixture is achievable only in theory, and other denotations, such as adhesive mixture or interactive mixture, were introduced later. Interactive mixture If a free-flowing powder is mixed with a cohesive powder, an interactive mixture can be obtained. 
The cohesive particles adhere to the free-flowing particles (now called carrier particles) to form interactive units. An interactive mixture may not contain free aggregates of the cohesive powder, which means that all small particles must adhere to the larger ones. The difference from an ordered mixture is in Document 4::: When two objects touch, only a certain portion of their surface areas will be in contact with each other. This area of true contact most often constitutes only a very small fraction of the apparent or nominal contact area. In relation to two contacting objects, the contact area is the part of the nominal area that consists of atoms of one object in true contact with the atoms of the other object. Because objects are never perfectly flat due to asperities, the actual contact area (on a microscopic scale) is usually much less than the contact area apparent on a macroscopic scale. Contact area may depend on the normal force between the two objects due to deformation. The contact area depends on the geometry of the contacting bodies, the load, and the material properties. The contact area between two parallel cylinders is a narrow rectangle. Two non-parallel cylinders have an elliptical contact area, unless the cylinders are crossed at 90 degrees, in which case they have a circular contact area. Two spheres also have a circular contact area. Friction and contact area It is an empirical fact for many materials that F = μN, where F is the frictional force for sliding friction, μ is the coefficient of friction, and N is the normal force. There isn't a simple derivation for sliding friction's independence from area. Methods for determining contact area One way of determining the actual contact area is to determine it indirectly through a physical process that depends on contact area. For example, the resistance of a wire is dependent on the cross-sectional area, so one may find the contact area of a metal by measuring the current that flows through that area (through the surface of an electrode to another electrode, for example). See also Contact mechanics Contact resistance The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Do particles collide more in two reactants when they are both in fluid forms or solid forms? A. neither B. fluid C. solid D. plasma Answer:
sciq-8099
multiple_choice
The various sensory organs are part of what organ system?
[ "respiratory", "nervous", "digestive", "lymphatic" ]
B
Relevant Documents: Document 0::: In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together in a function. Tissues of different types combine to form an organ which has a specific function. The intestinal wall, for example, is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system. An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs. The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body. Animals Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine systems both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam Document 1::: The following diagram is provided as an overview of and topical guide to the human nervous system: Human nervous system – the part of the human body that coordinates a person's voluntary and involuntary actions and transmits signals between different parts of the body. The human nervous system consists of two main parts: the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS contains the brain and spinal cord. The PNS consists mainly of nerves, which are long fibers that connect the CNS to every other part of the body. The PNS includes motor neurons, mediating voluntary movement; the autonomic nervous system, comprising the sympathetic nervous system and the parasympathetic nervous system and regulating involuntary functions; and the enteric nervous system, a semi-independent part of the nervous system whose function is to control the gastrointestinal system. Evolution of the human nervous system Evolution of nervous systems Evolution of human intelligence Evolution of the human brain Paleoneurology Some branches of science that study the human nervous system Neuroscience Neurology Paleoneurology Central nervous system The central nervous system (CNS) is the largest part of the nervous system and includes the brain and spinal cord. Spinal cord Brain Brain – center of the nervous system. 
Outline of the human brain List of regions of the human brain Principal regions of the vertebrate brain: Peripheral nervous system Peripheral nervous system (PNS) – nervous system structures that do not lie within the CNS. Sensory system A sensory system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory receptors, neural pathways, and parts of the brain involved in sensory perception. List of sensory systems Sensory neuron Perception Visual system Auditory system Somatosensory system Vestibular system Olfactory system Taste Pain Components of the nervous system Neuron I Document 2::: A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism. Organ and tissue systems These specific systems are widely studied in human anatomy and are also present in many other animals. Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm. Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus. Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels. Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine. Integumentary system: skin, hair, fat, and nails. Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons. Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands. Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. The lymphatic system's functions include immune responses and the development of antibodies. Immune system: protects the organism from Document 3::: H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue H2.00.05.2.00001: Striated muscle tissue H2.00.06.0.00001: Nerve tissue H2.00.06.1.00001: Neuron H2.00.06.2.00001: Synapse H2.00.06.2.00001: Neuroglia h3.01: Bones h3.02: Joints h3.03: Muscles h3.04: Alimentary system h3.05: Respiratory system h3.06: Urinary system h3.07: Genital system h3.08: Document 4::: Splanchnology is the study of the visceral organs, i.e. digestive, urinary, reproductive and respiratory systems. The term derives from the Neo-Latin splanchno-, from the Greek σπλάγχνα, meaning "viscera". More broadly, splanchnology includes all the components of the Neuro-Endo-Immune (NEI) Supersystem. An organ (or viscus) is a collection of tissues joined in a structural unit to serve a common function. 
In anatomy, a viscus is an internal organ, and viscera is the plural form. Organs consist of different tissues, one or more of which prevail and determine the organ's specific structure and function. Functionally related organs often cooperate to form whole organ systems. Viscera are the soft organs of the body. There are organs and systems of organs that differ in structure and development but are united in the performance of a common function. Such a functional collection of mixed organs forms an organ system. These organs are always made up of special cells that support their specific functions. The normal position and function of each visceral organ must be known before the abnormal can be ascertained. Healthy organs all work together cohesively, and gaining a better understanding of how they do so helps to maintain a healthy lifestyle. Some functions cannot be accomplished by one organ alone; that is why organs form complex systems. A system of organs is a collection of homogeneous organs that share a common plan of structure, function, and development, and that are connected to each other anatomically and communicate through the NEI supersystem. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The various sensory organs are part of what organ system? A. respiratory B. nervous C. digestive D. lymphatic Answer:
scienceQA-9672
multiple_choice
Select the living thing.
[ "stalactite", "brick wall", "pear tree", "mug" ]
C
A pear tree is a living thing. Pear trees grow and respond to their environment. They need food and water. Pear trees are made up of many cells. Pear trees are plants. They make their own food using water, carbon dioxide, and energy from sunlight. A mug is not a living thing. Mugs do not have all of the traits of living things. They do not grow or respond to their environment. They do not need food or water. A brick wall is not a living thing. Brick walls do not have all of the traits of living things. They do not grow or respond to their environment. They do not need food or water. A stalactite is not a living thing. A stalactite does not have all the traits of a living thing. It contains minerals that formed slowly over many years. But it does not need food or water.
Relevant Documents: Document 0::: Macroflora is a term used for all the plants occurring in a particular area that are large enough to be seen with the naked eye. It is usually synonymous with the flora and can be contrasted with the microflora, a term used for all the bacteria and other microorganisms in an ecosystem. Macroflora is also an informal term used by many palaeobotanists to refer to an assemblage of plant fossils as preserved in the rock. This is in contrast to the flora, which in this context refers to the assemblage of living plants that were growing in a particular area, whose fragmentary remains became entrapped within the sediment from which the rock was formed and thus became the macroflora. Document 1::: Tech City College (formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England. It originally opened in September 2013 as STEM Academy Tech City and specialised in Science, Technology, Engineering and Maths (STEM) and the Creative Application of Maths and Science. In September 2015, STEM Academy joined the Aspirations Academy Trust and was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students. Document 2::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to develop an interest in these subjects, leading secondary school pupils to choose science A levels, which can lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it has received funding from the Department for Children, Schools and Families and the Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments. See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics Document 3::: The Inverness Campus is an area in Inverness, Scotland. 5.5 hectares of the site have been designated as an enterprise area for life sciences by the Scottish Government. This designation is intended to encourage research and development in the field of life sciences by providing incentives to locate at the site. 
The enterprise area is part of a larger site, over 200 acres, which will house Inverness College, Scotland's Rural College (SRUC), the University of the Highlands and Islands, a health science centre and sports and other community facilities. The purpose-built research hub will provide space for up to 30 staff and researchers, allowing better collaboration. The Highland Science Academy, a collaboration formed by Highland Council, employers and public bodies, will be located on the site. The academy will be aimed at helping young people gain the necessary skills to work in the energy, engineering and life sciences sectors. History The site was identified in 2006. Work started to develop the infrastructure on the site in early 2012. A virtual tour was made available in October 2013 to help mark Doors Open Day. Construction had reached the halfway stage in May 2014, meaning that it was on track to open its doors to its first students in August 2015. In May 2014, work was due to commence on a building designed to provide office space and laboratories as part of the campus's "life science" sector. Morrison Construction have been appointed to undertake the building work. Scotland's Rural College (SRUC) will be able to relocate their Inverness-based activities to the Campus. SRUC's research centre for Comparative Epidemiology and Medicine, and Agricultural Business Consultancy services could co-locate with UHI where their activities have complementary themes. By the start of 2017, there were more than 600 people working at the site. In June 2021, a new bridge opened connecting Inverness Campus to Inverness Shopping Park. It crosses the Aberdeen Document 4::: Plant life-form schemes constitute a way of classifying plants alternatively to the ordinary species-genus-family scientific classification. In colloquial speech, plants may be classified as trees, shrubs, herbs (forbs and graminoids), etc. The scientific use of life-form schemes emphasizes plant function in the ecosystem and that the same function or "adaptedness" to the environment may be achieved in a number of ways, i.e. plant species that are closely related phylogenetically may have widely different life-forms; for example, Adoxa moschatellina and Sambucus nigra are from the same family, but the former is a small herbaceous plant and the latter is a shrub or tree. Conversely, unrelated species may share a life-form through convergent evolution. While taxonomic classification is concerned with the production of natural classifications (with "natural" understood either on a philosophical basis in pre-evolutionary thinking, or phylogenetically as non-polyphyletic), plant life-form classifications use criteria other than naturalness, such as morphology, physiology and ecology. Life-form and growth-form are essentially synonymous concepts, despite attempts to restrict the meaning of growth-form to types differing in shoot architecture. Most life-form schemes are concerned with vascular plants only. Plant construction types may be used in a broader sense to encompass planktophytes, benthophytes (mainly algae) and terrestrial plants. A popular life-form scheme is the Raunkiær system. History One of the earliest attempts to classify the life-forms of plants and animals was made by Aristotle, whose writings are lost. His pupil, Theophrastus, in Historia Plantarum (c. 350 BC), was the first to formally recognize plant habits: trees, shrubs and herbs. 
Some earlier authors (e.g., Humboldt, 1806) did classify species according to physiognomy, but were explicit about the entities being merely practical classes without any relation to plant function. A marked exception was The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Select the living thing. A. stalactite B. brick wall C. pear tree D. mug Answer:
ai2_arc-961
multiple_choice
In 2005, a team of scientists discovered a photosynthetic bacteria living near the molten lava of a thermal vent ecosystem deep in the Pacific Ocean. The bacteria lived 2400 meters below the surface of the ocean, yet made energy from photosynthesis. Which conclusion best explains the results?
[ "Photosynthesis can occur without light.", "The hydrothermal vent emits usable light.", "High water pressure can power photosynthesis.", "The bacteria used to live at the ocean's surface." ]
B
Relevant Documents: Document 0::: The hydrothermal vent microbial community includes all unicellular organisms that live and reproduce in a chemically distinct area around hydrothermal vents. These include organisms in the microbial mat, free-floating cells, or bacteria in an endosymbiotic relationship with animals. Chemolithoautotrophic bacteria derive nutrients and energy from the geological activity at hydrothermal vents to fix carbon into organic forms. Viruses are also a part of the hydrothermal vent microbial community, and their influence on the microbial ecology in these ecosystems is a burgeoning field of research. Hydrothermal vents are located where the tectonic plates are moving apart and spreading. This allows water from the ocean to enter the crust of the earth, where it is heated by the magma. The increasing pressure and temperature force the water back out of these openings; on the way out, the water accumulates dissolved minerals and chemicals from the rocks it encounters. There are generally three kinds of vents, each characterized by its temperature and chemical composition. Diffuse vents release clear water typically up to 30 °C. White smoker vents emit milky-coloured water between 200 and 330 °C, and black smoker vents generally release water hotter than the other vents, between 300 and 400 °C. The waters from black smokers are darkened by the sulfide precipitates that accumulate in them. Due to the absence of sunlight at these ocean depths, energy is provided by chemosynthesis, where symbiotic bacteria and archaea form the bottom of the food chain and are able to support a variety of organisms such as Riftia pachyptila and Alvinella pompejana. These organisms rely on this symbiotic relationship to obtain and use the chemical energy that is released at these hydrothermal vent areas. Environmental Properties Although there is large seasonal variation in temperature at the surface of the water with the changing depth of the thermocline, the temperat Document 1::: Colleen Marie Cavanaugh is an American academic microbiologist best known for her studies of hydrothermal vent ecosystems. As of 2002, she is the Edward C. Jeffrey Professor of Biology in the Department of Organismic and Evolutionary Biology at Harvard University and is affiliated with the Marine Biological Laboratory and the Woods Hole Oceanographic Institution. Cavanaugh was the first to propose that the deep-sea giant tube worm, Riftia pachyptila, obtains its food from bacteria living within its cells, an insight which she had as a graduate student at Harvard. Significantly, she made the connection that these chemoautotrophic bacteria were able to play this role through their use of chemosynthesis, the biological oxidation of inorganic compounds (e.g., hydrogen sulfide) to synthesize organic matter from very simple carbon-containing molecules, thus allowing organisms such as the bacteria (and dependent organisms such as tube worms) to exist in the deep ocean without sunlight. Early life and education Cavanaugh was born in Detroit, Michigan, in 1953. Cavanaugh received her undergraduate degree from the University of Michigan in 1977, where she initially studied music but ultimately majored in ecology. She says her life changed direction in her sophomore year when she heard about a course in marine ecology at the oceanographic center in Woods Hole, Massachusetts. 
There, her work involved wading out into chilly waters to study the mating habits of horseshoe crabs, and she described herself as "[falling] in love" with the relaxed camaraderie and exchange of ideas between biologists, geologists, and scientists from other disciplines. Cavanaugh took a Marine Ecology course offered by the University of Michigan as an undergraduate, stayed in Woods Hole afterwards (as her car needed repair) looking for a job, and ultimately replaced a "no show" in a Boston University undergraduate research program, which returned her to work with local horseshoe crabs. Cavanaugh then move Document 2::: Thermophyte (Greek thérmos = warmth, heat + phyton = plant) is an organism which is tolerant of or thrives at high temperatures. These organisms are categorized according to ecological valences at high temperatures, including biological extremes. Such organisms also include the hot-spring taxa. A large proportion of thermophytes are algae, more specifically blue-green algae, also referred to as cyanobacteria. This type of algae thrives in hot conditions ranging anywhere from 50 to 70 degrees Celsius, in which other plants and organisms cannot survive. Thermophytes are able to survive extreme temperatures as their cells contain an "unorganized nucleus". As the name suggests, thermophytes are found in high-temperature environments. They can be found in abundance in and around places like freshwater hot springs, such as Yellowstone National Park and Lassen Volcanic National Park. Mutualism in Thermophytes There are instances in which a fungus and a plant become thermophytes by forming a symbiotic relationship with one another. Some thermophytes live with a fungal partner in a symbiotic relationship with plants, algae, and viruses. Mutualists like the panic grass and its fungal partner cannot survive individually, but thrive when they are in the symbiotic relationship. This means the fungus, plant, and virus function together to survive in such extreme conditions by benefiting from each other. The fungi typically dwell in the intercellular spaces between the plant's cells. In a study performed at Washington State, it was discovered that panic grass living near the hot springs in Yellowstone National Park thrives due to its relationship with the fungus Curvularia protuberata. Neither organism can survive on its own at such high temperatures. The mycoviruses infect the fungi that live within these plants and algae. These mycoviruses prevent the fungi from having a pathogenic effect on the plants, thus preventing the fungus from harming the plant. The panic grass benefit Document 3::: In biochemistry, chemosynthesis is the biological conversion of one or more carbon-containing molecules (usually carbon dioxide or methane) and nutrients into organic matter using the oxidation of inorganic compounds (e.g., hydrogen gas, hydrogen sulfide) or ferrous ions as a source of energy, rather than sunlight, as in photosynthesis. Chemoautotrophs, organisms that obtain carbon from carbon dioxide through chemosynthesis, are phylogenetically diverse. Groups that include conspicuous or biogeochemically important taxa include the sulfur-oxidizing Gammaproteobacteria, the Campylobacterota, the Aquificota, the methanogenic archaea, and the neutrophilic iron-oxidizing bacteria. Many microorganisms in dark regions of the oceans use chemosynthesis to produce biomass from single-carbon molecules. Two categories can be distinguished. 
In the rare sites where hydrogen molecules (H2) are available, the energy available from the reaction between CO2 and H2 (leading to production of methane, CH4) can be large enough to drive the production of biomass. Alternatively, in most oceanic environments, energy for chemosynthesis derives from reactions in which substances such as hydrogen sulfide or ammonia are oxidized. This may occur with or without the presence of oxygen. Many chemosynthetic microorganisms are consumed by other organisms in the ocean, and symbiotic associations between chemosynthesizers and respiring heterotrophs are quite common. Large populations of animals can be supported by chemosynthetic secondary production at hydrothermal vents, methane clathrates, cold seeps, whale falls, and isolated cave water. It has been hypothesized that anaerobic chemosynthesis may support life below the surface of Mars, Jupiter's moon Europa, and other planets. Chemosynthesis may have also been the first type of metabolism that evolved on Earth, leading the way for cellular respiration and photosynthesis to develop later. Hydrogen sulfide chemosynthesis process Giant tube worms Document 4::: BIOS-3 is an experimental closed ecosystem at the Institute of Biophysics in Krasnoyarsk, Russia. Its construction began in 1965 and was completed in 1972. BIOS-3 consists of an underground steel structure suitable for up to three persons, and was initially used for developing closed ecological human life-support ecosystems. It was divided into 4 compartments, one of which was a crew area. The crew area consists of 3 single cabins, a galley, a lavatory and a control room. Initially one other compartment was an algal cultivator, and the other two were phytotrons for growing wheat or vegetables. The plants growing in the two phytotrons contributed approximately 25% of the air filtering in the compound. Later, the algal cultivator was converted into a third phytotron. A level of light comparable to sunlight was supplied in each of the 4 compartments by 20 kW xenon lamps, cooled by water jackets. The facility used 400 kW of electricity, supplied by a nearby hydroelectric power station. Chlorella algae were used to recycle air breathed by humans, absorbing carbon dioxide and replenishing it with oxygen through photosynthesis. The algae were cultivated in stacked tanks under artificial light. To achieve a balance of oxygen and carbon dioxide, one human needed of exposed Chlorella. Air was purified of more complex organic compounds by heating to in the presence of a catalyst. Water and nutrients were stored in advance and were also recycled. By 1968, system efficiency had reached 85% by recycling water. Dried meat was imported into the facility, and urine and feces were generally dried and stored, rather than being recycled. BIOS-3 facilities were used to conduct 10 manned closure experiments with one- to three-man crews. The longest experiment with a three-man crew lasted 180 days (in 1972-1973). The facilities were used for the tests at least until 1984. In 1986, Dr. Josef Gitelson, head of the Institute of Biophysics (IBP) at Krasnoyarsk and developer of biospherics as we The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In 2005, a team of scientists discovered a photosynthetic bacteria living near the molten lava of a thermal vent ecosystem deep in the Pacific Ocean. The bacteria lived 2400 meters below the surface of the ocean, yet made energy from photosynthesis. 
Which conclusion best explains the results? A. Photosynthesis can occur without light. B. The hydrothermal vent emits usable light. C. High water pressure can power photosynthesis. D. The bacteria used to live at the ocean's surface. Answer:
sciq-11079
multiple_choice
How many protons and electrons each do carbon atoms have?
[ "five", "nine", "two", "six" ]
D
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than by performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. 
The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. More specific questions related, respectively, to ecological concepts (such as population studies and general ecology) on the E test and to molecular concepts (such as DNA structure, translation, and biochemistry) on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 2::: There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework. AP Physics 1 and 2 AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge. AP Physics 1 AP Physics 1 covers Newtonian mechanics, including: Unit 1: Kinematics Unit 2: Dynamics Unit 3: Circular Motion and Gravitation Unit 4: Energy Unit 5: Momentum Unit 6: Simple Harmonic Motion Unit 7: Torque and Rotational Motion Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2. AP Physics 2 AP Physics 2 covers the following topics: Unit 1: Fluids Unit 2: Thermodynamics Unit 3: Electric Force, Field, and Potential Unit 4: Electric Circuits Unit 5: Magnetism and Electromagnetic Induction Unit 6: Geometric and Physical Optics Unit 7: Quantum, Atomic, and Nuclear Physics AP Physics C From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single Document 3::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must eventually be mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. 
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate what the student can do and what the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of Document 4::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple-choice section and a 3-question free-response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How many protons and electrons each do carbon atoms have? A. five B. nine C. two D. six Answer:
ai2_arc-775
multiple_choice
Which structure of a bird is correctly paired with its function?
[ "claws for obtaining food", "wings for eliminating waste", "feathers for breathing", "eyes for growing" ]
A
Relavent Documents: Document 0::: Bird flight is the primary mode of locomotion used by most bird species in which birds take off and fly. Flight assists birds with feeding, breeding, avoiding predators, and migrating. Bird flight is one of the most complex forms of locomotion in the animal kingdom. Each facet of this type of motion, including hovering, taking off, and landing, involves many complex movements. As different bird species adapted over millions of years through evolution for specific environments, prey, predators, and other needs, they developed specializations in their wings, and acquired different forms of flight. Various theories exist about how bird flight evolved, including flight from falling or gliding (the trees down hypothesis), from running or leaping (the ground up hypothesis), from wing-assisted incline running or from proavis (pouncing) behavior. Basic mechanics of bird flight Lift, drag and thrust The fundamentals of bird flight are similar to those of aircraft, in which the aerodynamic forces sustaining flight are lift, drag, and thrust. Lift force is produced by the action of air flow on the wing, which is an airfoil. The airfoil is shaped such that the air provides a net upward force on the wing, while the movement of air is directed downward. Additional net lift may come from airflow around the bird's body in some species, especially during intermittent flight while the wings are folded or semi-folded (cf. lifting body). Aerodynamic drag is the force opposite to the direction of motion, and hence the source of energy loss in flight. The drag force can be separated into two portions, lift-induced drag, which is the inherent cost of the wing producing lift (this energy ends up primarily in the wingtip vortices), and parasitic drag, including skin friction drag from the friction of air and body surfaces and form drag from the bird's frontal area. The streamlining of the bird's body and wings reduces these forces. Unlike aircraft, which have engines to produce thrust, bi Document 1::: The following is a glossary of common English language terms used in the description of birds—warm-blooded vertebrates of the class Aves and the only living dinosaurs, characterized by feathers, the ability to fly in all but the approximately 60 extant species of flightless birds, toothless beaked jaws, the laying of hard-shelled eggs, a high metabolic rate, a four-chambered heart and a strong yet lightweight skeleton. Among other details such as size, proportions and shape, terms defining bird features developed and are used to describe features unique to the class—especially evolutionary adaptations that developed to aid flight. There are, for example, numerous terms describing the complex structural makeup of feathers (e.g., , and ); types of feathers (e.g., , and feathers); and their growth and loss (e.g., , and ). There are thousands of terms that are unique to the study of birds. This glossary makes no attempt to cover them all, concentrating on terms that might be found across descriptions of multiple bird species by bird enthusiasts and ornithologists. Though words that are not unique to birds are also covered, such as or , they are defined in relation to other unique features of external bird anatomy, sometimes called . As a rule, this glossary does not contain individual entries on any of the approximately 9,700 recognized living individual bird species of the world.
A B C D
carnivores (sometimes called faunivores): birds that predominantly forage for the meat of vertebrates—generally hunters as in certain birds of prey—including eagles, owls and shrikes, though piscivores, insectivores and crustacivores may be called specialized types of carnivores.
crustacivores: birds that forage for and eat crustaceans, such as crab-plovers and some rails.
detritivores: birds that forage for and eat decomposing material, such as vultures. It is usually used as a more general term than "saprovore" (defined below), which often connotes the eating of de Document 2::: The difficulty of defining or measuring intelligence in non-human animals makes the subject difficult to study scientifically in birds. In general, birds have relatively large brains compared to their head size. The visual and auditory senses are well developed in most species, though the tactile and olfactory senses are well realized only in a few groups. Birds communicate using visual signals as well as through the use of calls and song. The testing of intelligence in birds is therefore usually based on studying responses to sensory stimuli. The corvids (ravens, crows, jays, magpies, etc.) and psittacines (parrots, macaws, and cockatoos) are often considered the most intelligent birds, and are among the most intelligent animals in general. Pigeons, finches, domestic fowl, and birds of prey have also been common subjects of intelligence studies. Studies Bird intelligence has been studied through several attributes and abilities. Many of these studies have been on birds such as quail, domestic fowl, and pigeons kept under captive conditions. It has, however, been noted that field studies have been limited, unlike those of the apes. Birds in the crow family (corvids) as well as parrots (psittacines) have been shown to live socially, have long developmental periods, and possess large forebrains, all of which have been hypothesized to allow for greater cognitive abilities. Counting has traditionally been considered an ability that shows intelligence. Anecdotal evidence from the 1960s has suggested that crows can count up to 3. Researchers need to be cautious, however, and ensure that birds are not merely demonstrating the ability to subitize, or count a small number of items quickly. Some studies have suggested that crows may indeed have a true numerical ability. It has been shown that parrots can count up to 6. Cormorants used by Chinese fishermen were given every eighth fish as a reward, and found to be able to keep count up to 7. E.H. Hoh wrote in Natural Histo Document 3::: Around 350 BCE, Aristotle and other philosophers of the time attempted to explain the aerodynamics of avian flight. Even after the discovery of the ancestral bird Archaeopteryx which lived over 150 million years ago, debates still persist regarding the evolution of flight. There are three leading hypotheses pertaining to avian flight: Pouncing Proavis model, Cursorial model, and Arboreal model. In March 2018, scientists reported that Archaeopteryx was likely capable of flight, but in a manner substantially different from that of modern birds. Flight characteristics For flight to occur, four physical forces (thrust and drag, lift and weight) must be favorably combined. In order for birds to balance these forces, certain physical characteristics are required. Asymmetrical wing feathers, found on all flying birds with the exception of hummingbirds, help in the production of thrust and lift.
Anything that moves through the air produces drag due to friction. The aerodynamic body of a bird can reduce drag, but when stopping or slowing down a bird will use its tail and feet to increase drag. Weight is the largest obstacle birds must overcome in order to fly. An animal can more easily attain flight by reducing its absolute weight. Birds evolved from other theropod dinosaurs that had already gone through a phase of size reduction during the Middle Jurassic, combined with rapid evolutionary changes. Flying birds during their evolution further reduced relative weight through several characteristics such as the loss of teeth, shrinkage of the gonads out of mating season, and fusion of bones. Teeth were replaced by a lightweight bill made of keratin, the food being processed by the bird's gizzard. Other advanced physical characteristics evolved for flight are a keel for the attachment of flight muscles and an enlarged cerebellum for fine motor coordination. These were gradual changes, though, and not strict conditions for flight: the first birds had teeth, at best a small keel Document 4::: Significant work has gone into analyzing the effects of climate change on birds. Like other animal groups, birds are affected by anthropogenic (human-caused) climate change. The research includes tracking the changes in species' life cycles over decades in response to the changing world, evaluating the role of differing evolutionary pressures and even comparing museum specimens with modern birds to track changes in appearance and body structure. Predictions of range shifts caused by the direct and indirect impacts of climate change on bird species are amongst the most important, as they are crucial for informing animal conservation work, required to minimize extinction risk from climate change. Climate change mitigation options can also have varying impacts on birds. However, even the environmental impact of wind power is estimated to be much less threatening to birds than the continuing effects of climate change. Causes Climate change has raised the temperature of the Earth by about since the Industrial Revolution. As the extent of future greenhouse gas emissions and mitigation actions determines the climate change scenario taken, warming may increase from present levels by less than with rapid and comprehensive mitigation (the Paris Agreement goal) to around ( from the preindustrial) by the end of the century with very high and continually increasing greenhouse gas emissions. Effects Physical changes Birds are a group of warm-blooded vertebrates constituting the class Aves, characterized by feathers, toothless beaked jaws, the laying of hard-shelled eggs, a high metabolic rate, a four-chambered heart, and a strong yet lightweight skeleton. Climate change has already altered the appearance of some birds by facilitating changes to their feathers. A comparison of museum specimens of juvenile passerines from 1800s with juveniles of the same species today had shown that these birds now complete the switch from their nesting feathers to adult feathers ea The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which structure of a bird is correctly paired with its function? A. claws for obtaining food B. wings for eliminating waste C. feathers for breathing D. eyes for growing Answer:
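The flight-mechanics excerpts above reason about the balance of lift, drag, thrust, and weight. As a hedged illustration, the standard fixed-wing lift relation L = 0.5 * rho * v^2 * S * C_L (a textbook aerodynamics formula, not quoted in the excerpts) shows why weight is the obstacle the text says it is; every number below is an invented value for a bird-sized flyer, not data from the source:

```python
# Standard lift equation: L = 0.5 * rho * v**2 * S * C_L.
rho = 1.225    # air density at sea level, kg/m^3
v = 10.0       # airspeed, m/s (assumed)
S = 0.05       # wing area, m^2 (assumed)
c_lift = 1.0   # lift coefficient (assumed)
g = 9.81       # gravitational acceleration, m/s^2

lift = 0.5 * rho * v**2 * S * c_lift   # aerodynamic lift, newtons
weight = 0.8 * g                       # a hypothetical 0.8 kg bird, newtons

print(f"lift = {lift:.2f} N, weight = {weight:.2f} N")
# lift = 3.06 N, weight = 7.85 N: at this speed lift falls well short of
# weight, so the bird must fly faster, raise its lift coefficient, or
# shed weight to sustain level flight, echoing the text's point that
# weight is the largest obstacle to overcome.
```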
sciq-3575
multiple_choice
Phylum Chordata consists of two subphyla of invertebrates, as well as the hagfishes and what else?
[ "cells", "vertebrates", "organs", "lipids" ]
B
Relavent Documents: Document 0::: The polypide in bryozoans encompasses most of the organs and tissues of each individual zooid. This includes the tentacles, tentacle sheath, U-shaped digestive tract, musculature and nerve cells. It is housed in the zooidal exoskeleton, which in cyclostomes is tubular and in cheilostomes is box-shaped. See also Bryozoan Anatomy Document 1::: Caenorhabditis elegans () is a free-living transparent nematode about 1 mm in length that lives in temperate soil environments. It is the type species of its genus. The name is a blend of the Greek caeno- (recent), rhabditis (rod-like) and Latin elegans (elegant). In 1900, Maupas initially named it Rhabditides elegans. Osche placed it in the subgenus Caenorhabditis in 1952, and in 1955, Dougherty raised Caenorhabditis to the status of genus. C. elegans is an unsegmented pseudocoelomate and lacks respiratory or circulatory systems. Most of these nematodes are hermaphrodites and a few are males. Males have specialised tails for mating that include spicules. In 1963, Sydney Brenner proposed research into C. elegans, primarily in the area of neuronal development. In 1974, he began research into the molecular and developmental biology of C. elegans, which has since been extensively used as a model organism. It was the first multicellular organism to have its whole genome sequenced, and in 2019 it was the first organism to have its connectome (neuronal "wiring diagram") completed. Anatomy C. elegans is unsegmented, vermiform, and bilaterally symmetrical. It has a cuticle (a tough outer covering, as an exoskeleton), four main epidermal cords, and a fluid-filled pseudocoelom (body cavity). It also has some of the same organ systems as larger animals. About one in a thousand individuals is male and the rest are hermaphrodites. The basic anatomy of C. elegans includes a mouth, pharynx, intestine, gonad, and collagenous cuticle. Like all nematodes, they have neither a circulatory nor a respiratory system. The four bands of muscles that run the length of the body are connected to a neural system that allows the muscles to move the animal's body only as dorsal bending or ventral bending, but not left or right, except for the head, where the four muscle quadrants are wired independently from one another. When a wave of dorsal/ventral muscle contractions proceeds from the back Document 2::: Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology. Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. 
Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad Document 3::: The evolution of nervous systems dates back to the first development of nervous systems in animals (or metazoans). Neurons developed as specialized electrical signaling cells in multicellular animals, adapting the mechanism of action potentials present in motile single-celled and colonial eukaryotes. Primitive systems, like those found in protists, use chemical signalling for movement and sensitivity; data suggests these were precursors to modern neural cell types and their synapses. When some animals started living a mobile lifestyle and eating larger food particles externally, they developed ciliated epithelia, contractile muscles and coordinating & sensitive neurons for it in their outer layer. Simple nerve nets seen in acoels (basal bilaterians) and cnidarians are thought to be the ancestral condition for the Planulozoa (bilaterians plus cnidarians and, perhaps, placozoans). A more complex nerve net with simple nerve cords is present in ancient animals called ctenophores but no nerves, thus no nervous systems, are present in another group of ancient animals, the sponges (Porifera). Due to the common presence and similarity of some neural genes in these ancient animals and their protist relatives, the controversy of whether ctenophores or sponges diverged earlier, and the recent discovery of "neuroid" cells specialized in coordination of digestive choanocytes in Spongilla, the origin of neurons in the phylogenetic tree of life is still disputed. Further cephalization and nerve cord (ventral and dorsal) evolution occurred many times independently in bilaterians. Neural precursors Action potentials, which are necessary for neural activity, evolved in single-celled eukaryotes. These use calcium rather than sodium action potentials, but the mechanism was probably adapted into neural electrical signaling in multicellular animals. In some colonial eukaryotes, such as Obelia, electrical signals propagate not only through neural nets, but also through epithelial cells Document 4::: In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from same type cells to act together in a function. Tissues of different types combine to form an organ which has a specific function. The intestinal wall for example is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system. 
An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs. The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body. Animals Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine systems both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Phylum Chordata consists of two subphyla of invertebrates, as well as the hagfishes and what else? A. cells B. vertebrates C. organs D. lipids Answer:
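The record above hinges on the subphylum structure of Chordata. A toy sketch of that hierarchy as a nested data structure follows; the grouping mirrors the question's framing (hagfishes listed beside the vertebrates), which simplifies a genuinely debated classification, and all names here are illustrative:

```python
# Illustrative tree only: hagfish placement is debated in the literature;
# this follows the question's framing rather than a settled phylogeny.
chordata = {
    "Cephalochordata": ["lancelets"],        # invertebrate subphylum
    "Tunicata": ["sea squirts", "salps"],    # invertebrate subphylum
    "Craniata": {
        "hagfishes": ["Myxini"],
        "vertebrates": ["fishes", "amphibians", "reptiles",
                        "birds", "mammals"],
    },
}

def invertebrate_subphyla(tree):
    """In this toy model, a plain list of members marks a subphylum
    whose animals lack vertebrae."""
    return [name for name, members in tree.items()
            if isinstance(members, list)]

print(invertebrate_subphyla(chordata))  # ['Cephalochordata', 'Tunicata']
```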
sciq-6401
multiple_choice
Chemical formulas for ionic compounds are called what?
[ "ionic formulas", "magnetic formulas", "electronic formulas", "velocity formulas" ]
A
Relavent Documents: Document 0::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate what the student can do and what the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of Document 1::: In science, a formula is a concise way of expressing information symbolically, as in a mathematical formula or a chemical formula. The informal use of the term formula in science refers to the general construct of a relationship between given quantities. The plural of formula can be either formulas (from the most common English plural noun form) or, under the influence of scientific Latin, formulae (from the original Latin). In mathematics In mathematics, a formula generally refers to an equation relating one mathematical expression to another, with the most important ones being mathematical theorems. For example, determining the volume of a sphere requires a significant amount of integral calculus or its geometrical analogue, the method of exhaustion. However, having done this once in terms of some parameter (the radius for example), mathematicians have produced a formula to describe the volume of a sphere in terms of its radius: V = (4/3)πr³. Having obtained this result, the volume of any sphere can be computed as long as its radius is known. Here, notice that the volume V and the radius r are expressed as single letters instead of words or phrases. This convention, while less important in a relatively simple formula, means that mathematicians can more quickly manipulate formulas which are larger and more complex. Mathematical formulas are often algebraic, analytical or in closed form.
In a general context, formulas are often a manifestation of a mathematical model of real-world phenomena, and as such can be used to provide a solution (or an approximate solution) to real-world problems, with some being more general than others. For example, the formula F = ma is an expression of Newton's second law, and is applicable to a wide range of physical situations. Other formulas, such as the use of the equation of a sine curve to model the movement of the tides in a bay, may be created to solve a particular problem. In all cases, however, formulas form the basis for calculations. Expr Document 2::: In chemical nomenclature, the IUPAC nomenclature of organic chemistry is a method of naming organic chemical compounds as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in the Nomenclature of Organic Chemistry (informally called the Blue Book). Ideally, every possible organic compound should have a name from which an unambiguous structural formula can be created. There is also an IUPAC nomenclature of inorganic chemistry. To avoid long and tedious names in normal communication, the official IUPAC naming recommendations are not always followed in practice, except when it is necessary to give an unambiguous and absolute definition to a compound. IUPAC names can sometimes be simpler than older names, as with ethanol, instead of ethyl alcohol. For relatively simple molecules they can be more easily understood than non-systematic names, which must be learnt or looked up. However, the common or trivial name is often substantially shorter and clearer, and so preferred. These non-systematic names are often derived from an original source of the compound. Also, very long names may be less clear than structural formulas. Basic principles In chemistry, a number of prefixes, suffixes and infixes are used to describe the type and position of the functional groups in the compound. The steps for naming an organic compound are: Identification of the parent hydride (parent hydrocarbon chain). This chain must obey the following rules, in order of precedence: It should have the maximum number of substituents of the suffix functional group. By suffix, it is meant that the parent functional group should have a suffix, unlike halogen substituents. If more than one functional group is present, the one with highest group precedence should be used. It should have the maximum number of multiple bonds. It should have the maximum length. It should have the maximum number of substituents or branches cited as prefixes. It should have the ma Document 3::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together.
Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases; (b) decreases; (c) stays the same; (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 4::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Chemical formulas for ionic compounds are called what? A. ionic formulas B. magnetic formulas C. electronic formulas D. velocity formulas Answer:
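The formula excerpt above makes the point that a closed-form result such as V = (4/3)πr³ or F = ma is derived once and then evaluated for any inputs. A small sketch of that "derive once, plug in many times" workflow, with illustrative values only:

```python
import math

def sphere_volume(r):
    """Closed-form result quoted in the excerpt: V = (4/3) * pi * r**3."""
    return (4.0 / 3.0) * math.pi * r**3

for r in (1.0, 2.0):
    print(f"r = {r}: V = {sphere_volume(r):.4f}")
# r = 1.0: V = 4.1888 (i.e. 4/3 * pi)
# r = 2.0: V = 33.5103 (volume scales as r**3, so 8x the r = 1.0 result)

# Newton's second law, F = m * a, is evaluated the same way:
mass, accel = 2.0, 9.81   # illustrative values, kg and m/s^2
print(f"F = {mass * accel:.2f} N")  # 19.62 N
```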
sciq-2509
multiple_choice
The strength of bases is measured on what scale?
[ "pneumatic scale", "litmus test", "ph scale", "acid test" ]
C
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases; (b) decreases; (c) stays the same; (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education. Structure A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior. Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false multiple choice questions. But during the 2000s, educators found that SBAs would be superior. Document 2::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 3::: Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory. In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results. Purpose Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible. Equating in item response theory In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri Document 4::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. 
Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate what the student can do and what the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The strength of bases is measured on what scale? A. pneumatic scale B. litmus test C. pH scale D. acid test Answer:
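The test-equating excerpt above compares score equating to converting between degrees Fahrenheit and degrees Celsius. A sketch of the simplest linear ("mean-sigma") version of that idea follows, using made-up form statistics and the excerpt's Dick-and-Jane scenario; the function name and all numbers are assumptions of mine:

```python
def linear_equate(score_b, mean_a, sd_a, mean_b, sd_b):
    """Map a Form B score onto the Form A scale by matching standardized
    z-scores, z = (x - mean) / sd, on the two forms."""
    z = (score_b - mean_b) / sd_b
    return mean_a + z * sd_a

# Made-up form statistics: Form A is the harder form (lower mean).
mean_a, sd_a = 55.0, 10.0
mean_b, sd_b = 65.0, 12.0

jane_on_a = linear_equate(70.0, mean_a, sd_a, mean_b, sd_b)
print(f"Jane's 70 on Form B is worth about {jane_on_a:.1f} on Form A")
# ~59.2: once both scores sit on a common scale, Jane's 70 on the easier
# form lands slightly below Dick's 60 on the harder form, the kind of
# reversal the excerpt's licensing example is warning about.
```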
sciq-10543
multiple_choice
What shapes, supports, and protects the cell?
[ "the chloroplast", "the epithelium", "the cell wall", "the mesothelium" ]
C
Relavent Documents: Document 0::: Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence. Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism. Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry. See also Cell (biology) Cell biology Biomolecule Organelle Tissue (biology) External links https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm Document 1::: The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'. Cells can acquire specified function and carry out various tasks within the cell such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and mobility within the cell. Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms. The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology. Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago. Discovery With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i Document 2::: This lecture, named in memory of Keith R. 
Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year. Lecturers Source: ASCB See also List of biology awards Document 3::: Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure. General characteristics There are two types of cells: prokaryotes and eukaryotes. Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles. Prokaryotes Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement). Eukaryotes Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str Document 4::: In cell biology, microtrabeculae were a hypothesised fourth element of the cytoskeleton (the other three being microfilaments, microtubules and intermediate filaments), proposed by Keith Porter based on images obtained from high-voltage electron microscopy of whole cells in the 1970s. The images showed short, filamentous structures of unknown molecular composition associated with known cytoplasmic structures. It is now generally accepted that microtrabeculae are nothing more than an artifact of certain types of fixation treatment, although the complexity of the cell's cytoskeleton is not yet fully understood. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What shapes, supports, and protects the cell? A. the chloroplast B. the epithelium C. the cell wall D. the mesothelium Answer:
sciq-11370
multiple_choice
How many pairs of chromosomes are found in human cells?
[ "25", "13", "24", "23" ]
D
Relavent Documents: Document 0::: Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices". This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions. Topic outline The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area: The course is based on and tests six skills, called scientific practices which include: In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions. Exam Students are allowed to use a four-function, scientific, or graphing calculator. The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score. Score distribution Commonly used textbooks Biology, AP Edition by Sylvia Mader (2012, hardcover ) Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, ) Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson ) See also Glossary of biology A.P Bio (TV Show) Document 1::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. 
The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 2::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 3::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. 
Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions relating to ecological concepts (such as population studies and general ecology) on the E test and to molecular concepts (such as DNA structure, translation, and biochemistry) on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 4::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases; (b) decreases; (c) stays the same; (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How many pairs of chromosomes are found in human cells? A. 25 B. 13 C. 24 D. 23 Answer:
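The SAT Subject Test excerpt above specifies a concrete scoring rule: +1 per correct answer, −¼ per incorrect answer, 0 for blanks, over 80 five-choice questions. A direct transcription of that rule; the worked numbers and the expected-value arithmetic for blind guessing are mine, the rule itself is from the excerpt:

```python
def sat_subject_raw_score(correct, incorrect, blank, total=80):
    """Scoring rule from the excerpt: +1 per correct answer,
    -1/4 per incorrect answer, 0 for questions left blank."""
    assert correct + incorrect + blank == total
    return correct - 0.25 * incorrect

# A student who answers 60 correctly, misses 12, and skips 8:
print(sat_subject_raw_score(60, 12, 8))  # 57.0

# With five choices per question, blind guessing is score-neutral on
# average: 0.2 * (+1) + 0.8 * (-0.25) = 0.0 expected points, which is
# exactly the incentive the quarter-point penalty was designed to create.
```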